
Senior Data Engineer (English Required)

DaCodes

México

Remote

MXN 70,000 - 90,000

Full-time

Job description

A leading software development firm in Mexico is looking for a Senior Data Engineer to design, build, and optimize data pipelines for large-scale applications. The ideal candidate will have strong experience in data architecture, ETL processes, and cloud platforms. This role offers the chance to work in a fast-paced environment with cutting-edge technologies and a diverse team.

Benefits

Major medical insurance
Life insurance
Vacation days
Day off on your birthday
Access to courses and certifications

Skills

Data engineering
SQL and NoSQL databases
Cloud platforms (AWS, GCP, Azure)
Python
Big data frameworks (Apache Spark, Hadoop, Flink)
ETL/ELT processes
Message queues and streaming technologies
Containerization and orchestration
Problem-solving
English proficiency (B2 or higher)

Tools

Apache Airflow
Docker
Kubernetes

Full job description

Overview

We are a team of experts in software development and high-impact digital transformation.

For over 10 years, we’ve created technology- and innovation-driven solutions thanks to our team of 220+ talented #DaCoders, including developers, architects, UX/UI designers, PMs, QA testers, and more. Our team integrates into projects with clients across LATAM and the United States, delivering outstanding results.

At DaCodes, you'll accelerate your professional growth by collaborating on diverse projects across various industries and sectors.

Working with us will make you versatile and agile, with the opportunity to use cutting-edge technologies and collaborate with top-level professionals.

Our DaCoders play a crucial role in the success of our business and that of our clients. You’ll become the expert contributing to our projects while gaining access to disruptive startups and global brands. Does this sound interesting to you?

We’re looking for talent to join our team—let’s work together!

The ideal candidate brings a unique mix of technical experience, curiosity, a logical and analytical mindset, proactivity, ownership, and a passion for teamwork.

We are looking for a Senior Data Engineer to join our team and help design, build, and optimize data pipelines for large-scale applications. The ideal candidate has strong experience in data architecture, ETL/ELT processes, cloud platforms, and distributed systems.

This role requires expertise in handling big data, real-time processing, and data lakes while ensuring scalability, performance, and security. The candidate should be comfortable working in a fast-paced, agile environment and collaborating with data scientists, analysts, and software engineers to deliver high-quality data solutions.

Required Qualifications
  • 5+ years of experience in data engineering, data architecture, or backend development.
  • Strong expertise in SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.).
  • Cloud expertise with AWS (preferred), GCP, or Azure.
  • Proficiency in Python, Java, or Scala for data processing and pipeline development.
  • Experience with big data frameworks like Apache Spark, Hadoop, or Flink.
  • Hands-on experience with ETL/ELT processes and data pipeline orchestration tools (Apache Airflow, dbt, Luigi, or Prefect).
  • Experience with message queues and streaming technologies (Kafka, Kinesis, Pub/Sub, or RabbitMQ).
  • Knowledge of containerization and orchestration tools (Docker, Kubernetes).
  • Strong problem-solving skills and the ability to optimize performance and scalability.
  • English proficiency (B2 or higher) to collaborate with international teams.
Nice-to-Have Skills (Preferred)
  • Experience with data lakehouse architectures (Delta Lake, Iceberg, Hudi).
  • Familiarity with Machine Learning (ML) and AI-related data workflows.
  • Experience with Infrastructure as Code (Terraform, CloudFormation) for managing data environments.
  • Knowledge of data security and compliance regulations (GDPR, CCPA, HIPAA).
Key Responsibilities
  • Design, develop, and maintain scalable and efficient data pipelines for batch and real-time processing.
  • Build and optimize data lakes, warehouses, and analytics solutions on cloud platforms (AWS, GCP, or Azure).
  • Implement ETL/ELT workflows using tools such as Apache Airflow, dbt, or Prefect.
  • Ensure data integrity, accuracy, and governance through proper architecture and best practices.
  • Integrate data from various sources (structured and unstructured), including APIs, streaming services, and databases.
  • Work with data scientists and analysts to ensure high availability and accessibility of data for analytics and machine learning models.
  • Monitor, troubleshoot, and improve the performance of data pipelines.
  • Implement security best practices for data access, encryption, and compliance.
  • Collaborate with software engineers to integrate data pipelines into applications and services.
  • Stay up to date with the latest trends in big data, cloud technologies, and data engineering best practices.
Benefits
  • Integration into projects with global brands and disruptive startups.
  • Remote work/Home office. You will be informed from the first session if any positions require a hybrid or on-site format. Don’t worry, most are remote!
  • Work schedule aligned with your assigned team/project (client's time zone).
  • Monday to Friday work week.
  • Statutory benefits (as required by law).
  • Official holidays according to your assigned team/project.
  • Vacation days. You can use these days after six months with the company.
  • Day off on your birthday.
  • Major medical insurance.
  • Life insurance.
  • Virtual integration events and interest groups.
  • Meetups with special guests from companies, IT professionals, and prestigious universities.
  • Constant feedback and performance tracking.
  • Access to courses and certifications.
  • Multicultural work teams.
  • English classes.
  • Opportunities across our different business lines.

Proudly certified as a Great Place to Work!
