
Staff Engineer - DataOps Engineer

Nagarro

Región Centro

On-site

MXN 600,000 - 900,000

Full-time

Today

Vacancy Summary

A global digital product engineering firm is seeking a DataOps Engineer in Región Centro, Jalisco, Mexico. You will manage data operations and ensure the reliability of data platforms using SQL, Python, and tools such as AWS, Jenkins, and Terraform. The ideal candidate has strong experience in DataOps and cloud platforms, along with a collaborative spirit in dynamic teams.

Requirements

  • 6 years in DataOps, Data Engineering Operations, or Analytics Platform Support.
  • Proficiency in SQL and Python/Shell scripting.
  • Experience with AWS and exposure to Azure/GCP.

Responsibilities

  • Manage and support data pipelines and analytics platforms.
  • Execute data validation and quality checks using SQL and Python/Shell.
  • Implement monitoring using Datadog, Grafana, and Prometheus.

Skills

DataOps
Python
SQL
AWS
DevOps

Tools

Jenkins
Terraform
Ansible
Datadog
Grafana
Prometheus
Company Description

We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums — and our people exist everywhere in the world (18,000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!

Job Description

We are seeking a DataOps Engineer to join Tech Delivery and Infrastructure Operations teams, playing a key role in ensuring the reliability, automation, and performance of our analytics and data platforms. This role is primarily DataOps-focused, combining elements of DevOps and SRE to sustain and optimize data-driven environments across global business units.

You will manage end-to-end data operations from SQL diagnostics and data pipeline reliability to automation, monitoring, and deployment of analytics workloads on cloud platforms. You'll collaborate with Data Engineering, Product, and Infrastructure teams to maintain scalable, secure, and high-performing systems.

Key Responsibilities
  • Manage and support data pipelines, ETL processes, and analytics platforms, ensuring reliability, accuracy, and accessibility
  • Execute data validation, quality checks, and performance tuning using SQL and Python/Shell scripting
  • Implement monitoring and observability using Datadog, Grafana, and Prometheus to track system health and performance
  • Collaborate with DevOps and Infra teams to integrate data deployments within CI/CD pipelines (Jenkins, Azure DevOps, Git)
  • Apply infrastructure-as-code principles (Terraform, Ansible) for provisioning and automation of data environments
  • Support incident and request management via ServiceNow, ensuring SLA adherence and root cause analysis
  • Work closely with security and compliance teams to maintain data governance and protection standards
  • Participate in Agile ceremonies within Scrum/Kanban models to align with cross-functional delivery squads
Required Skills & Experience
  • 6 years in DataOps, Data Engineering Operations, or Analytics Platform Support, with good exposure to DevOps/SRE practices
  • Proficiency in SQL and Python/Shell scripting for automation and data diagnostics
  • Experience with cloud platforms (AWS mandatory; exposure to Azure/GCP a plus)
  • Familiarity with CI/CD tools (Jenkins, Azure DevOps), version control (Git), and IaC frameworks (Terraform, Ansible)
  • Working knowledge of monitoring tools (Datadog, Grafana, Prometheus)
  • Understanding of containerization (Docker, Kubernetes) concepts
  • Strong grasp of data governance, observability, and quality frameworks
  • Experience in incident management and operational metrics tracking (MTTR, uptime, latency)
Qualifications

Must-have skills: Python (strong), SQL (strong), DevOps - AWS (strong), DevOps - Azure (strong), Datadog.
