
Data Engineer

Financecolombia

Argentina

On-site

ARS 23.959.000 - 46.588.000

Full-time

Posted 30+ days ago

Vacancy summary

A leading company in data processing solutions is seeking a motivated Data Engineer in Argentina. Responsibilities include designing ETL processes and efficient data pipelines using technologies such as Python and AWS. The role requires strong database skills, particularly in Postgres. Benefits include a competitive salary and annual team trips.

Benefits

Competitive salary
Annual offsite team trip
Learn from a high-performing team
Training from US mentors

Requirements

  • Strong Python skills and proficiency in associated libraries for data processing.
  • Expertise in AWS Glue/pyspark, AWS Lambda, and Kafka for ETL development.
  • Experience designing and implementing relational databases, especially Postgres.

Responsibilities

  • Design and develop efficient data pipelines using various technologies.
  • Implement and optimize ETL processes for data extraction and loading into databases.
  • Collaborate on data warehouse architectures, ensuring proper modeling and access.

Skills

Python
ETL processes
Data pipeline development
Data integration
Apache Kafka
Data quality monitoring
Relational databases
AWS Glue
AWS Lambda
Data warehousing

Job description

We're hiring a highly motivated Data Engineer with expertise in Python, AWS Glue/pyspark, AWS Lambda, Kafka, and relational databases (specifically Postgres). You'll be responsible for designing, developing, and maintaining data processing and management solutions, ensuring data integrity and availability through collaboration with multidisciplinary teams.

Responsibilities:

  • Design and develop efficient data pipelines using Python, AWS Glue/pyspark, AWS Lambda, Kafka, and related technologies.
  • Implement and optimize ETL processes for data extraction, transformation, and loading into relational databases, especially Postgres.
  • Collaborate on data warehouse architectures, ensuring proper data modeling, storage, and access.
  • Utilize tools like StitchData and Apache Hudi for data integration and incremental management, improving efficiency and enabling complex operations.
  • Identify and resolve data quality and consistency issues, implementing monitoring processes across pipelines and storage systems.

Requirements:

  • Strong Python skills and proficiency in associated libraries for data processing and manipulation.
  • Expertise in AWS Glue/pyspark, AWS Lambda, and Kafka for ETL workflow development and streaming architecture.
  • Experience in designing and implementing relational databases, specifically Postgres.
  • Practical knowledge of data pipeline development, ETL processes, and data warehouses.
  • Familiarity with data integration tools like StitchData and Apache Hudi for efficient incremental data management.
  • Advanced level of English for effective communication and collaboration.

Benefits:

  • Competitive salary.
  • Annual offsite team trip.
  • Learn from a very high-performing team.
  • Training from US mentors.

If you're a passionate Data Engineer with experience in these technologies and seek a stimulating and challenging environment, join our team and contribute to the success of our advanced data processing and management solutions!
