• Scala, Python, or Java
• Apache Spark
• DevOps
• Hands-on experience in data engineering with Python/Scala/Java
• Work with large, complex datasets.
• Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed
• Conduct analysis that includes data gathering and requirement specification, processing, analysis, ongoing deliverables, and presentation of results
• Interact cross-functionally, making business recommendations (cost-benefit analysis, forecasting, etc.)
• Should be able to design end-to-end systems for data management and should be aware of AWS services
• Work closely with internal teams and business partners
• Able to perform POCs for client requirements and guide the team on technical design and solutions
• 10+ years of software experience, including 4+ years of hands-on experience in data engineering.
• Proficient in Scala or Python or Java.
• Experience in building highly scalable platforms using Apache Kafka and Kafka Connect.
• Experience in building applications using KSQL, KTables, and Confluent Cloud.
• Experience in building streaming and batch data pipelines using Apache Spark. Proficient with the Spark Dataset API and Spark SQL.
• Experience in building big data applications on a cloud such as AWS, Azure, or GCP, and experience using services such as Azure Data Factory, AWS Glue, AWS Kinesis, Managed Kafka, Azure Stream Analytics, AWS EMR, Azure HDInsight, Google Data Catalog, Hadoop, etc.
• Experience with Apache Flink, Druid, Superset is a plus.
• Experience building applications using REST/gRPC is a plus.
• Experience with analytics and BI is a plus.
• Experience with Nifi or Airflow is a bonus.
• DevOps with CI/CD is a plus.