
Senior Data Engineer (Azure)

Applaudo Studios

Ciudad de México

On-site

MXN 516,000 - 775,000

Full-time

Today

Job description

A technology transformation company based in Mexico seeks an experienced Data Engineer to design scalable data architectures and optimize ETL/ELT pipelines on Azure. The ideal candidate has over 5 years of experience with SQL, Python (PySpark), and cloud data services. Strong English communication skills are essential for collaborating with international teams. Join a high-performance culture that prioritizes growth, accountability, and innovation through technology.

Skills

SQL
Python
PySpark
Data Modeling
Azure Data Services
Git Workflows
Communication

Tools

Azure Data Factory
Databricks
Snowflake
PostgreSQL

Job Description

About You

You are a Data Engineer passionate about building scalable, production‑grade data ecosystems on Azure. You thrive on transforming complex and fragmented data into reliable analytical assets that drive meaningful business decisions. You work with a high level of autonomy, bring strong architectural foundations, and consistently apply high-quality engineering practices across data modeling, pipelines, orchestration, and performance optimization.

You enjoy simplifying complexity, whether that means migrating legacy SQL logic into distributed PySpark jobs, designing canonical data layers, or ensuring data integrity, governance, and observability across systems. Collaboration, clarity, and business impact are core to how you work.

You Bring to Applaudo the Following Competencies:
  • Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or a related field, or equivalent practical experience.
  • 5+ years of experience designing, building, and maintaining production‑grade data pipelines at scale.
  • Expert‑level SQL skills, including window functions, query optimization, partitioning, and execution plan tuning.
  • Strong expertise in data modeling concepts such as star and snowflake schemas, facts and dimensions, SCDs, and curated/canonical data layers.
  • Advanced hands‑on experience using Python for Data Engineering, including large‑scale PySpark transformations.
  • Strong experience working with Azure data services, including: Azure Data Factory, Azure Databricks, ADLS Gen2 / Azure Storage, Azure SQL, Azure Logic Apps (orchestration).
  • Experience building incremental ETL/ELT pipelines with dependencies, CDC strategies, retries, and failure handling.
  • Hands‑on experience optimizing big data workloads, including partitioning strategies and Delta Lake performance (OPTIMIZE, Z‑ORDER, VACUUM).
  • Experience integrating REST APIs and handling schema drift and pagination.
  • Proficiency with Git workflows and CI/CD pipelines for data codebases.
  • Strong communication skills and ability to work autonomously within Agile environments.
  • Experience with Snowflake, PostgreSQL, or cloud cost optimization strategies (nice to have).
  • Advanced English proficiency, with the ability to communicate clearly and collaborate directly with US‑based clients.
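To make the window-function requirement above concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module (SQLite 3.25+ supports window functions). The table, columns, and data are illustrative assumptions, not part of the role; in practice the same pattern applies in Azure SQL, Databricks SQL, or Snowflake.

```python
import sqlite3

# Hypothetical sales table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, day TEXT, amount REAL);
INSERT INTO sales VALUES
  ('north', '2024-01-01', 100),
  ('north', '2024-01-02', 150),
  ('south', '2024-01-01', 200),
  ('south', '2024-01-02', 50);
""")

# Running total per region: SUM() as a window function,
# partitioned by region and ordered by day.
rows = conn.execute("""
SELECT region, day, amount,
       SUM(amount) OVER (
         PARTITION BY region ORDER BY day
       ) AS running_total
FROM sales
ORDER BY region, day
""").fetchall()

for r in rows:
    print(r)
```

The `PARTITION BY ... ORDER BY ...` clause is the core idiom behind many of the analytics tasks the role describes (running totals, deduplication with `ROW_NUMBER()`, change detection with `LAG()`).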
You Will Be Accountable for the Following Responsibilities:
  • Design and implement conceptual, logical, and physical data models aligned with analytics and business requirements.
  • Build, optimize, and monitor end‑to‑end ETL/ELT pipelines using Azure Data Factory, Databricks, and Logic Apps.
  • Migrate legacy SQL‑based logic into scalable, resilient PySpark‑based processing jobs.
  • Apply and automate performance best practices, including partitioning, indexing, schema evolution, and storage optimization.
  • Ensure data quality, lineage, governance, and security across environments.
  • Establish observability standards, including logging, data quality checks, alerting, and SLA monitoring.
  • Collaborate closely with analysts, architects, and business stakeholders to define requirements and data contracts.
  • Design and maintain cost‑efficient Azure data architectures while continuously improving pipeline reliability and delivery velocity.
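The data-quality and observability responsibilities above can be sketched as a simple validation gate that splits incoming records into passing and failing sets before they reach curated layers. This is a minimal stdlib-only illustration; the field names, rules, and thresholds are assumptions, and a production pipeline would typically use a framework-level equivalent (e.g. Delta Lake constraints or expectation libraries).

```python
def check_quality(records, required_fields=("id", "amount")):
    """Split rows into (passed, failed) by simple null and range rules.

    Rules (illustrative): required fields must be non-null, and
    'amount' must be a non-negative number.
    """
    passed, failed = [], []
    for row in records:
        ok = all(row.get(f) is not None for f in required_fields)
        ok = ok and isinstance(row.get("amount"), (int, float)) and row["amount"] >= 0
        (passed if ok else failed).append(row)
    return passed, failed

good, bad = check_quality([
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # fails the null check
    {"id": 3, "amount": -5},     # fails the range check
])
print(len(good), len(bad))
```

Quarantining failed rows (rather than dropping them) preserves lineage and makes alerting and SLA monitoring straightforward, since the failure count per batch becomes a directly observable metric.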
Qualifications
  • 5+ years of experience as a Data Engineer.
  • Strong experience building data pipelines on Azure.
  • Advanced SQL and Python (PySpark) experience.
  • Experience working with production‑grade data platforms at scale.
  • Advanced English proficiency.
Additional Information
About Us

At Applaudo, we partner with ambitious companies to transform through technology with an AI‑native mindset at the core of how we think, build, and deliver. We combine strategic clarity, world‑class execution, and modern engineering practices to help clients accelerate measurable business outcomes and stay competitive.

We are building a high‑performance culture grounded in five values: Empowering Excellence, Collaborative Teamwork, Unsolicited Respect, Consistent Transparency, and Efficient Communication. These define how we work, how we make decisions, and how we hold ourselves accountable.

Applaudo is a place for people who want to grow fast, take ownership, and work alongside strong teams. Joining us means being part of an organization that is evolving intentionally, investing in modern ways of working, and embracing AI as a lever for productivity, innovation, and impact.
