
Data Engineer

Luxoft

Central Region

On-site

MXN 200,000 - 400,000

Full-time

Today

Job description

A global technology services provider is looking for a skilled Data Engineer to design and implement scalable data pipelines with Databricks and Kafka. The ideal candidate will have over 5 years of experience in data engineering, with a strong background in real-time data streaming solutions and cloud platforms. This is a great opportunity to collaborate in a cross-functional team environment while contributing to innovative data solutions.

Requirements

  • 5+ years of experience in data engineering roles.
  • Strong hands-on experience with Databricks and Apache Kafka.
  • Proficient in Python or Scala for data pipeline development.

Responsibilities

  • Design and implement scalable data pipelines using Databricks and Kafka.
  • Build and maintain real-time streaming solutions for high-volume data.
  • Collaborate with cross-functional teams to integrate data flows.

Skills

Data engineering experience
Databricks expertise
Apache Kafka proficiency
Streaming frameworks knowledge
Cloud platform experience
Python proficiency
CI/CD pipeline familiarity

Tools

Databricks
Apache Kafka
AWS
Azure
GCP
GitLab
Jenkins

Project description

We are seeking a skilled and hands‑on Data Engineer with proven experience in Databricks, Apache Kafka, and real‑time data streaming solutions.

Responsibilities

  • Design and implement scalable data pipelines using Databricks and Kafka
  • Build and maintain real‑time streaming solutions for high‑volume data
  • Collaborate with cross‑functional teams to integrate data flows into broader systems
  • Optimize performance and reliability of data processing workflows
  • Ensure data quality, lineage, and compliance across streaming and batch pipelines
  • Participate in agile development processes and contribute to technical documentation

Skills

Must have

  • 5+ years of experience in data engineering roles
  • Proven expertise with Databricks (Spark, Delta Lake, notebooks, performance tuning)
  • Strong hands‑on experience with Apache Kafka (topics, producers/consumers, schema registry)
  • Solid understanding of streaming frameworks (e.g., Spark Structured Streaming, Flink, or similar)
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Proficiency in Python or Scala for data pipeline development
  • Familiarity with CI/CD pipelines (GitLab, Jenkins) and agile tools (Jira)
  • Exposure to data lakehouse architectures and best practices
  • Knowledge of data governance, security, and observability

Nice to have

  • Scrum / Agile