
Principal Data Engineer

Salesforce, Inc.

Mexico City

On-site

MXN 1,200,000 - 1,600,000

Full-time

Posted 2 days ago

Job description

A leading tech company in Mexico City is seeking a Principal Data Engineer to design scalable data systems that power intelligent decisions across various platforms. You will build resilient batch and streaming pipelines, work with advanced data tools and cloud infrastructure, and collaborate with diverse teams to transform product signals into actionable insights. This role requires a deep understanding of data engineering and a passion for innovation.

Qualifications

  • 8+ years of experience in data engineering.
  • Strong software engineering fundamentals.
  • Expertise with big data frameworks like Spark and Trino.

Responsibilities

  • Build and scale batch and streaming data pipelines.
  • Design consumption layers for product signals.
  • Collaborate with telemetry engineers and product leaders.

Skills

Data engineering
Software engineering fundamentals
Big data frameworks
Streaming systems
Cloud infrastructure
Communication skills

Tools

Spark
Trino
DBT
Kafka
AWS

Full job description

Overview

PRINCIPAL DATA ENGINEER, Mexico City

About the Role: We’re building the product data platform that will power Salesforce’s next era of agentic intelligence — delivering smarter, adaptive, and self-optimizing product experiences.

As a Full-Stack Data Engineer, you’ll design and build scalable systems that process hundreds of thousands of context-rich product signals. These signals fuel analytics, customer-facing products, ML models, and autonomous agents. You’ll work on:

  • Near real-time and batch telemetry pipelines for trusted signal capture
  • Semantic layers and data products for reusable insights
  • Programmatic discovery via metadata, MCP, and knowledge graphs

This isn’t a typical data engineering role. We’re looking for creative, systems-minded engineers who work outside the “data engineer” box, are fluent in both data and AI, and are excited to navigate ambiguity, cross boundaries, and drive real impact.

What You’ll Do
  • Build and scale fault-tolerant batch and streaming data pipelines using Spark, Trino, Flink, Kafka, and DBT (see the pipeline sketch after this list)
  • Design programmatic consumption layers to make product signals easy to define, discover, and reuse
  • Apply software engineering best practices to data systems: testing, CI/CD, observability
  • Evolve systems to support not just human analysis, but autonomous agent reasoning
  • Contribute to a trusted data foundation powering decisions, AI agents, and adaptive products
  • Collaborate across orgs with telemetry engineers, product leaders, data scientists, and AI builders
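
As a rough illustration of the pipeline work named in the first bullet above, here is a minimal PySpark Structured Streaming sketch that reads product signals from Kafka and lands them in a queryable table. It assumes the Spark Kafka connector is on the classpath; the broker address, topic, schema fields, and S3 paths are hypothetical placeholders, not details from this posting.

    # Minimal sketch (assumptions noted above): fault-tolerant streaming ingest of product signals.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("product-signal-ingest").getOrCreate()

    # Hypothetical schema for a context-rich product signal.
    signal_schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("occurred_at", TimestampType()),
    ])

    # Read the raw signal stream from a placeholder Kafka topic.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
        .option("subscribe", "product-signals")             # placeholder topic
        .load()
    )

    # Parse the JSON payload into typed columns.
    signals = (
        raw.select(from_json(col("value").cast("string"), signal_schema).alias("s"))
        .select("s.*")
    )

    # Checkpointing gives recoverable, at-least-once delivery after failures.
    query = (
        signals.writeStream.format("parquet")
        .option("path", "s3://example-bucket/signals/")      # placeholder output path
        .option("checkpointLocation", "s3://example-bucket/checkpoints/signals/")
        .trigger(processingTime="1 minute")
        .start()
    )
    query.awaitTermination()
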
What We’re Looking For
  • 8+ years of experience in data engineering, with strong software engineering fundamentals
  • Expertise with big data frameworks: Spark, Trino/Presto, DBT, Snowflake
  • Experience with streaming systems like Flink and Kafka, including distribution strategy (topics and partitions; see the keying sketch after this list)
  • Solid understanding of semantic layers, data modeling, and metrics systems
  • Experience with cloud infrastructure, particularly AWS (e.g., S3, EMR, ECS, IAM), and containerization
  • Bonus: fluency in AI data engineering patterns and tools like MCP
  • Bonus: experience with knowledge graphs and modern metadata systems
  • Strong communicator and collaborator — comfortable working across teams and domains
  • Curious, pragmatic, and impact-driven mindset
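
As a rough illustration of the distribution-strategy point above: keying Kafka messages so that all signals for one tenant hash to the same partition preserves per-tenant ordering. The sketch below uses kafka-python; the broker address, topic, and field names are hypothetical, not taken from this posting.

    import json
    from kafka import KafkaProducer

    # Producer with string keys and JSON values; the broker address is a placeholder.
    producer = KafkaProducer(
        bootstrap_servers="broker:9092",
        key_serializer=lambda k: k.encode("utf-8"),
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    signal = {"tenant_id": "acme", "event_type": "page_view"}

    # The default partitioner hashes the key, so a fixed key always maps to the
    # same partition, keeping events for one tenant in order.
    producer.send("product-signals", key=signal["tenant_id"], value=signal)
    producer.flush()
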
Why Join Us

Help shape how Salesforce leads the digital labor revolution. You’ll be at the heart of transforming raw product signals into intelligent decisions — for everyone from engineers to sales reps to AI agents. If you’re excited to design resilient, trusted, and intelligent systems at scale, we’d love to hear from you.
