
Senior Data Engineer - Grata

Datasite

México

Remote

MXN 800,000 - 1,100,000

Full-time

Today

Job Description

A leading tech firm in Mexico seeks a Senior Data Engineer. In this role, you will design and scale data platforms, ensure the quality of data pipelines, and mentor junior engineers. Successful candidates will have over 6 years of experience in building data systems and strong proficiency in Python and SQL. Join a diverse team where your contributions will make a significant impact on data-driven decision-making and technology advancements.

Requirements

  • 6+ years building and operating production data systems at scale.
  • Experience running workloads in AWS or similar cloud environments.
  • Proven track record mentoring engineers.

Responsibilities

  • Design, build, and operate performant ELT/ETL pipelines.
  • Drive dimensional modeling and incremental patterns.
  • Architect and run real-time jobs to deliver product value.
  • Implement automated testing and anomaly detection.
  • Establish SLOs and robust monitoring for data platforms.
  • Work with Product and Data Science on feature design.

Skills

Python
SQL
Spark/Databricks
Data modeling
Cloud computing (AWS)
Orchestration (Airflow, dbt)
Mentoring
Job Description

Grata is the leading private‑market dealmaking platform—bringing the most comprehensive, accurate, and searchable proprietary data on private companies, financials, and owners to investors, advisers, and corporate deal teams. With 700+ customers and recognition from G2 and PE Wire, we’re growing fast and shaping the future of data‑driven dealmaking.

We’re looking for a Senior Data Engineer to design, scale, and own the data platforms that power Grata’s products and analytics. You’ll lead the development of reliable batch and streaming pipelines, evolve our lakehouse and warehouse models, improve data quality and governance, and mentor engineers—partnering closely with Product, Data Science, and Application Engineering to ship business‑critical capabilities.

Responsibilities
  • Own mission‑critical pipelines: Design, build, and operate performant ELT/ETL in Python/SQL/Spark on Databricks (and related orchestration), with strong SLAs and clear data contracts.
  • Evolve the lakehouse & warehouse: Drive dimensional modeling/star schemas and incremental patterns (e.g., Delta Lake/CDC), balancing cost, performance, and usability.
  • Streaming & event data: Architect and run real‑time/near‑real‑time jobs where it delivers product value; set the bar for idempotency and exactly‑once semantics.
  • Quality, lineage, and governance: Implement automated testing, anomaly detection, validation, lineage/metadata, and documentation.
  • Scale & reliability: Establish SLOs, on‑call rotations for data platforms, robust monitoring/alerting, and capacity/cost management.
  • Partner across functions: Work with Product and DS on source selection, feature readiness, and experiment design; with App Eng to expose data via stable APIs and semantic layers.
  • Mentor & uplift: Coach Data Engineers (and adjacent SWE/Analytics Eng), review designs/PRs, and lead brown‑bag sessions to raise the team’s technical bar. (Builds on our culture of sharing knowledge and collaboration.)
  • Ship outcomes: Break down work, sequence delivery, and land measurable improvements to freshness, completeness, and query performance.
Qualifications
  • 6+ years building and operating production data systems at scale.
  • Deep fluency with Python and SQL; expert in Spark/Databricks and lakehouse patterns.
  • Strong data modeling skills.
  • Experience running workloads in AWS (or similar cloud): storage, compute, networking basics, cost controls.
  • Hands‑on with orchestration (Airflow/Databricks Workflows/dbt), CI/CD for data, and IaC.
  • Proven track record mentoring engineers and leading ambiguous initiatives to clear results.
Bonus
  • Event platforms (Kafka/Kinesis), vector/feature stores, ML feature engineering with DS partners.
  • Data observability platforms; data catalog/lineage tooling; data access governance.
  • Experience shaping product‑facing datasets and semantic layers (e.g., for BI and APIs).

Our company is committed to fostering a diverse and inclusive workforce where all individuals are respected and valued. We are an equal opportunity employer and make all employment decisions without regard to race, color, religion, sex, gender identity, sexual orientation, age, national origin, disability, protected veteran status, or any other protected characteristic. We encourage applications from candidates of all backgrounds and are dedicated to building teams that reflect the diversity of our communities.
