
Job Description

Data Analyst


Expires on : 2022-11-06
Job Code : 3370
Experience : 4-7 Years
Location : Costa Rica


Responsibilities
• Build distributed, real-time, high-volume data pipelines and collaborate with others to enable high-scale data science projects.
• Integrate third-party systems into existing infrastructure and workflows.
• Leverage Spark (Databricks), Kafka (Confluent), Redshift, and other technologies.
• Join a tightly knit team of data analysts and scientists, solving hard problems the right way.
• Experience building and operating data pipelines for real customers in production systems.
• Expert knowledge of SQL and ETL/ELT tools, and experience wrangling huge amounts of data and exploring new data sets.
• Fluent in Python and/or Scala.
• Good to have:
• Knowledge and experience using Spark and operationalizing machine learning models.
• Knowledge and experience using Spark on the AWS platform (Kinesis, Redshift, Lambda).
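
As an illustration of the kind of pipeline work described above, the sketch below shows a streaming ingest with Spark Structured Streaming reading from Kafka and landing Parquet files in an S3 data lake. It is a minimal sketch only: the broker address, topic name, and S3 paths are assumptions for illustration, not details from this posting, and the job would need the spark-sql-kafka connector package available on the cluster.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> S3 data lake.
# Broker, topic, and S3 paths are illustrative placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("events-ingest")
    .getOrCreate()
)

# Read raw events from Kafka (hypothetical brokers/topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for downstream parsing.
events = raw.select(
    col("key").cast("string").alias("event_key"),
    col("value").cast("string").alias("payload"),
    col("timestamp"),
)

# Land the stream as Parquet on S3 so Athena, Glue, or Redshift Spectrum can query it later.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-data-lake/raw/events/")
    .option("checkpointLocation", "s3a://example-data-lake/checkpoints/events/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```

A pipeline along these lines typically lands raw events in the data lake first, leaving curation and loading into Redshift to a separate batch step.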
Requirements
• AWS Cloud, AWS S3, data lakes, AWS Redshift, RDS, AWS Glue
• AWS Lambda functions, various batch processing technologies and ETL tools
• Exposure to stream processing with Kinesis/Kafka, SNS/SQS
• Familiarity with ELT/ETL patterns with DWH and dimensional modeling
• AWS EMR, Redshift, S3, Spark/Python/Scala, Airflow, SQL and ETL, Athena, Glue
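
The orchestration side of this stack can be pictured with a small Airflow DAG that runs an Athena query over data already landed in S3. This is a sketch under assumptions: the DAG id, database, table, bucket names, and region below are hypothetical placeholders, not the team's actual setup.

```python
# Minimal Airflow DAG sketch for a nightly ELT step: run an Athena INSERT over data
# landed in S3. Database, table, and bucket names are illustrative placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def run_athena_query():
    """Kick off an Athena query that appends yesterday's partition to a curated table."""
    athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption
    athena.start_query_execution(
        QueryString="""
            INSERT INTO analytics.events_curated
            SELECT * FROM raw.events
            WHERE event_date = date_add('day', -1, current_date)
        """,
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )


with DAG(
    dag_id="nightly_events_elt",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_curated = PythonOperator(
        task_id="load_curated_events",
        python_callable=run_athena_query,
    )
```

In practice the same pattern could use Glue jobs or Redshift COPY statements in place of the Athena query; the sketch only shows how the scheduling and the SQL step fit together.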
