Senior DataOps Engineer

dLocal

Madrid

On-site

EUR 60,000 - 80,000

Full-time

Posted 2 days ago

Job description

A global fintech company in Madrid seeks a Senior DataOps Engineer to design and enhance its data platform. Responsibilities include building scalable infrastructure using Kubernetes and Databricks, implementing CI/CD pipelines, and ensuring data governance. Candidates should have a degree in a related field, experience in data engineering, and proficiency in Python or SQL. This is an opportunity to thrive in a diverse global team while making a significant impact in emerging markets.

Benefits

Flexible schedules
Referral bonus program
Learning & development access
Free language classes
Social budget for team activities
Opportunity to rent houses for team coworking

Qualifications

  • Bachelor’s degree or equivalent in a technical field.
  • Proven experience in data engineering and backend software development.
  • Strong skills in Python or SQL for building data or platform tooling.
  • Experience with distributed data processing frameworks such as Apache Spark.
  • Solid understanding of AWS and/or GCP.

Responsibilities

  • Design and build scalable data infrastructure on Kubernetes and Databricks.
  • Maintain CI/CD pipelines for data applications, automating testing and deployment.
  • Implement robust data governance practices for data access and quality.
  • Monitor and improve data services and resolve complex data issues.

Skills

Data engineering
Platform engineering
Python and/or SQL
Apache Spark
Cloud platforms (AWS, GCP)
Containerization and orchestration
CI/CD pipelines
Monitoring & observability
Analytical thinking

Education

Bachelor’s degree in Computer Engineering, Data Engineering, or Computer Science

Tools

Databricks
Kubernetes
Docker
Terraform
GitHub Actions

Full job description

Why should you join dLocal?

dLocal enables the biggest companies in the world to collect payments in 40 countries in emerging markets. Global brands rely on us to increase conversion rates and simplify payment expansion effortlessly. As both a payments processor and a merchant of record where we operate, we make it possible for our merchants to make inroads into the world’s fastest-growing, emerging markets.

By joining us you will be a part of an amazing global team that makes it all happen. Being a part of dLocal means working with 1000+ teammates from 30+ different nationalities and developing an international career that impacts millions of people’s daily lives. We are builders, we never run from a challenge, we are customer-centric, and if this sounds like you, we know you will thrive in our team.

What’s the opportunity?

As a Senior DataOps Engineer, you'll be a strategic professional shaping the foundation of our data platform. You’ll design and evolve scalable infrastructure on Kubernetes, operate Databricks as our primary data platform, enable data governance and reliability at scale, and ensure our data assets are clean, observable, and accessible.

What will I be doing?
  • Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently, using Kubernetes and Databricks as core building blocks.
  • Design, build, and maintain Kubernetes-based infrastructure, owning deployment, scaling, and reliability of data workloads running on our clusters.
  • Operate Databricks as our primary data platform, including workspace and cluster configuration, job orchestration, and integration with the broader data ecosystem.
  • Work on improvements to existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency across batch and streaming workloads.
  • Build and maintain CI/CD pipelines for data applications (DAGs, jobs, libraries, containers), automating testing, deployment, and rollback.
  • Implement release strategies (e.g., blue/green, canary, feature flags) where relevant for data services and platform changes.
  • Establish and maintain robust data governance practices (e.g., contracts, catalogs, access controls, quality checks) that empower cross-functional teams to access and trust data (a sketch of such a quality check follows this list).
  • Build a framework to move raw datasets into clean, reliable, and well-modeled assets for analytics, modeling, and reporting, in partnership with Data Engineering and BI.
  • Define and track SLIs/SLOs for critical data services (freshness, latency, availability, data quality signals).
  • Implement and own monitoring, logging, tracing, and alerting for data workloads and platform components, improving observability over time.
  • Lead and participate in on-call rotation for data platforms, manage incidents, and run structured postmortems to drive continuous improvement.
  • Investigate and resolve complex data and platform issues, ensuring data accuracy, system resilience, and clear root-cause analysis.
  • Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability.
  • Work closely with the Data Enablement team, BI, and ML stakeholders to continuously evolve the data platform based on their needs and feedback.
  • Stay current with industry trends and emerging technologies in DataOps, DevOps, and data platforms to continuously raise the bar on our engineering practices.
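
As an illustration of the data-quality checks mentioned in the governance bullet above, here is a minimal, hypothetical PySpark gate; the table name, column, and threshold are invented for the sketch and are not part of the role description.

```python
# Hypothetical data-quality gate: fail fast if too many rows in a curated
# table are missing their primary identifier. Table/column names are invented.
from pyspark.sql import SparkSession, functions as F

def check_null_ratio(table: str, column: str, max_null_ratio: float = 0.01) -> None:
    spark = SparkSession.builder.getOrCreate()
    df = spark.table(table)
    total = df.count()
    if total == 0:
        raise ValueError(f"{table} is empty")
    nulls = df.filter(F.col(column).isNull()).count()
    ratio = nulls / total
    if ratio > max_null_ratio:
        raise ValueError(f"{table}.{column}: {ratio:.2%} nulls exceeds {max_null_ratio:.2%}")

if __name__ == "__main__":
    check_null_ratio("payments.curated_events", "transaction_id")  # illustrative names
```

A check like this can run as a CI step or as a task after each load, so regressions are caught before downstream consumers see bad data.
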
What skills do I need?
  • Bachelor’s degree in Computer Engineering, Data Engineering, Computer Science, or a related technical field (or equivalent practical experience).
  • Proven experience in data engineering, platform engineering, or backend software development, ideally in cloud-native environments.
  • Deep expertise in Python and/or SQL, with strong skills building data or platform tooling.
  • Strong experience with distributed data processing frameworks such as Apache Spark (Databricks experience strongly preferred).
  • Solid understanding of cloud platforms, especially AWS and/or GCP.
  • Hands-on experience with containerization and orchestration: Docker, Kubernetes / EKS / GKE / AKS (or equivalent).
  • Proficiency with Infrastructure-as-Code (e.g., Terraform, Pulumi, CloudFormation) for managing data and platform components.
  • Experience implementing CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins, CircleCI, ArgoCD, Flux) for data workloads and services.
  • Experience in monitoring & observability (metrics, logging, tracing) using tools like Prometheus, Grafana, Datadog, CloudWatch, or similar (see the freshness-metric sketch after this list).
  • Experience with incident management: participating in or leading on-call rotations, handling incidents and running postmortems, and building automation and guardrails to prevent regressions.
  • Strong analytical thinking and problem-solving skills, comfortable debugging across infrastructure, network, and application layers.
  • Able to work autonomously and collaboratively.
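
Tied to the monitoring & observability bullet above, a minimal freshness-SLI exporter might look like the following; the metric name, port, and dataset are assumptions, and Prometheus is only one of the tools the posting names.

```python
# Hypothetical freshness SLI: expose "seconds since last successful load" so
# Prometheus can scrape it and alerting rules can enforce an SLO on it.
import time
from prometheus_client import Gauge, start_http_server

FRESHNESS = Gauge(
    "dataset_freshness_seconds",
    "Seconds elapsed since the dataset was last loaded successfully",
    ["dataset"],
)

def report_freshness(dataset: str, last_load_epoch: float) -> None:
    FRESHNESS.labels(dataset=dataset).set(time.time() - last_load_epoch)

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics on an illustrative port
    while True:
        # In practice the last-load timestamp would come from job metadata;
        # here it is faked as "one hour ago" purely for the sketch.
        report_freshness("payments_daily", time.time() - 3600)
        time.sleep(60)
```
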
Nice to have
  • Experience designing and maintaining DAGs with Apache Airflow or similar orchestration tools (Dagster, Prefect, Argo Workflows); a minimal DAG sketch follows this list.
  • Familiarity with modern data formats and table formats (e.g., Parquet, Delta Lake, Iceberg).
  • Experience acting as a Databricks admin/developer, managing workspaces, clusters, compute policies, and jobs for multiple teams.
  • Exposure to data quality, data contracts, or data observability tools and practices.
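
For the Airflow item above, a skeletal DAG in Python might look like this; the DAG id, schedule, and task bodies are placeholders, not an actual dLocal pipeline.

```python
# Minimal Airflow 2.x DAG sketch: two placeholder tasks chained extract -> load.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")  # placeholder

def load():
    print("write the curated table")  # placeholder

with DAG(
    dag_id="example_payments_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```
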
What do we offer?

Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:

- Flexibility: we have flexible schedules and we are driven by performance.

- Fintech industry: work in a dynamic and ever-evolving environment, with plenty to build and boost your creativity.

- Referral bonus program: our internal talents are the best recruiters - refer someone ideal for a role and get rewarded.

- Learning & development: get access to a Premium Coursera subscription.

- Language classes: we provide free English, Spanish, or Portuguese classes.

- Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!

- dLocal Houses: want to rent a house to spend one week anywhere in the world coworking with your team? We’ve got your back!

What happens after you apply?

Our Talent Acquisition team is invested in creating the best candidate experience possible, so don’t worry, you will definitely hear from us. We will review your CV and keep you posted by email at every step of the process!

Also, you can check out our webpage, LinkedIn, and YouTube for more about dLocal!
