
Big Data Engineer

Bluehill.dev

Ciudad de México

On-site

MXN 1,074,000 - 1,612,000

Full-time

Posted today

Vacancy Description

A technology company is seeking Big Data Engineers in Ciudad de México to architect and implement data pipelines. The ideal candidate has over 5 years of experience in software development using Python, Scala, or Java, along with strong skills in data engineering and analytics. Responsibilities include collaborating on data needs across teams and supporting existing data processes. This role offers an opportunity for mentorship and leadership in a dynamic environment.

Qualifications

  • 5+ years of software development experience.
  • Experience leading scalable, reliable services and workflows.
  • Expert in pipeline monitoring and data validation.
  • Passionate about data engineering and analytics.

Responsibilities

  • Architect, design, and implement data pipelines/products.
  • Collaborate on data needs with product managers and analysts.
  • Support existing data processes and self-service tooling.

Skills

Python
Scala
Java
Airflow
Hive
Spark
Kafka
EMR

Education

Degree in Computer Science or related technical field

Job Description

We are looking for Big Data Engineers who are ready to take their career to the next level. You will evangelize and build Data Products, simplify critical ML and Analytics products to enrich the customer experience, and streamline marketing operations. You will partner with other data engineering teams and platform teams within AI to lead the architecture, implementation, and operations of big data pipelines and tools for building high-quality data marts.

Responsibilities
  • Architect, design, build, implement, and support data pipelines/products to serve ML and analytical use cases (a minimal pipeline sketch follows this list).
  • Collaborate with product managers, engineers, data scientists, and analysts on mission-critical property data needs to build world-class datasets.
  • Identify opportunities to evangelize and support existing data processes.
  • Contribute back to common tooling/infrastructure to enable self-service and expedite customer onboarding.
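
As a hedged illustration of the kind of batch step such a pipeline might contain, the sketch below aggregates raw events into a small daily data mart with Spark. The bucket paths, column names, and the "events" schema are assumptions made for the example, not details from the posting.

    # Hypothetical sketch only: roll raw events up into a daily data mart.
    # Paths and columns (event_date, property_id, event_type) are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_events_mart").getOrCreate()

    # Read raw events from the (assumed) landing location.
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Simple daily rollup per property and event type.
    daily_mart = (
        events
        .groupBy("event_date", "property_id", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Overwrite the mart partitioned by date for downstream ML/analytics consumers.
    (
        daily_mart.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/marts/daily_events/")
    )

    spark.stop()
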
Requirements
  • 5+ years of software development experience using Python/Scala/Java, and experience leading the design and implementation of config-driven, scalable, reliable services and workflows/pipelines using Airflow, Hive, Spark, Kafka, EMR, or equivalents (a sketch of a config-driven DAG follows this list).
  • A degree in Computer Science or a related technical field, or equivalent work experience.
  • Expert in establishing and promoting high standards in pipeline monitoring, data validation, testing, etc.
  • Extensive experience applying automation to data engineering (DataOps).
  • Passionate about data engineering/analytics and distributed systems.
  • Excellent interpersonal skills and a passion for collaborating across organizational boundaries.
  • Comfortable distilling informal customer requirements into problem definitions, resolving ambiguity, and balancing challenging objectives.
  • Excited about mentorship: coaching, onboarding, and leading teammates.
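
As a rough illustration of what "config-driven" can mean with this stack, here is a minimal sketch of an Airflow DAG that fans out one task per dataset defined in an inline config. This is an assumption-laden example rather than anything from the employer: the dataset names, paths, and the process_dataset callable are hypothetical, and a real pipeline would typically load the config from YAML and submit Spark jobs to EMR instead of printing.

    # Hypothetical sketch: a config-driven Airflow DAG that creates one task per dataset.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # In practice this config would likely live in YAML/JSON; inlined here for brevity.
    DATASETS = {
        "listings": {"source": "s3://example-bucket/raw/listings/", "target": "s3://example-bucket/marts/listings/"},
        "events": {"source": "s3://example-bucket/raw/events/", "target": "s3://example-bucket/marts/events/"},
    }

    def process_dataset(source: str, target: str) -> None:
        # Placeholder for the real transform (e.g. a Spark job submitted to EMR).
        print(f"processing {source} -> {target}")

    with DAG(
        dag_id="config_driven_marts",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # The task graph is derived from the config, so onboarding a new dataset
        # means adding a config entry rather than writing a new DAG.
        for name, cfg in DATASETS.items():
            PythonOperator(
                task_id=f"build_{name}",
                python_callable=process_dataset,
                op_kwargs=cfg,
            )

Deriving the task graph from configuration rather than hand-written code is one common way such pipelines stay scalable and reliable as new datasets are onboarded.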