Overview
Are you an experienced, passionate pioneer in technology? A Senior Data Engineer who wants to work in a collaborative environment? As an experienced Senior Data Engineer, you will be able to share new ideas and collaborate on projects as a consultant without the extensive demands of travel. Americas Delivery Mexico (ADMX) leverages scale and talent to provide high-quality, cost-effective service to our clients.
ADMX is a member of the Global Delivery Network, which has a presence across the world, with delivery centers in the United States, Romania, India, Spain, China, and the Philippines. ADMX is based in Querétaro, Mexico. We provide consulting services that help our clients achieve a higher level of operational efficiency and business value. We are a team of professionals passionate about serving clients with distinction and about learning, driven by our purpose: making an impact that matters for our clients, our people, and society.
Team & Role Context
As a Senior Consultant, you will work with diverse global clients across a wide range of industries. You will have a variety of client-facing responsibilities, such as diagnosing issues using advanced analytical techniques, interviewing staff, formulating and making recommendations, and helping clients implement proposed solutions.
Responsibilities
- This cross-functional Data Engineer is a key technical leader responsible for building, optimizing, and governing data pipelines and architectures across Azure, Databricks, and Snowflake—driving enterprise-grade Master Data Management (MDM) and operational excellence within the healthcare industry. The role balances hands-on development, data architecture, compliance awareness, and operational accountability, including the implementation and maintenance of incident management and reporting models to sustain robust, reliable, and secure healthcare data ecosystems.
- MDM & Data Architecture: Architect, implement, and operationalize robust MDM frameworks using Azure, Databricks (including Unity Catalog and Delta Lake), and Snowflake tailored for healthcare master data domains (e.g., patient, provider, payer, clinical, billing).
- End-to-End Data Pipelines: Engineer, optimize, and maintain batch and streaming pipelines that ensure the reliable ingestion, transformation, and delivery of high-quality data from EHRs, claims systems, and third-party sources while supporting analytics, reporting, and regulatory needs.
- Operational Excellence: Develop, run, and continuously improve data operate models; oversee incident management processes for the detection, diagnosis, escalation, and remediation of data pipeline failures and data quality/MDM incidents.
- Lead root-cause analysis and documentation for incident reporting, preventive measures, and regulatory compliance audits (HIPAA, HITECH).
- Establish automated monitoring, alerting, and reporting for production pipeline health and MDM compliance.
- Data Governance & Compliance: Lead data governance, metadata management, data quality, stewardship initiatives, and unified access control while ensuring end-to-end regulatory compliance (HIPAA, 21st Century Cures Act, GDPR).
- Analytics & ML Enablement: Build, maintain, and secure model-ready datasets and feature stores; collaborate with healthcare data scientists to operationalize AI/ML use cases, including population health, claims analytics, and clinical outcomes prediction.
- Leadership & Mentorship: Mentor data engineering teams on best practices, operate model implementation, healthcare data stewardship, and cross-functional data initiatives.
- Stakeholder Collaboration: Liaise with IT, compliance, and business units, translating healthcare requirements into scalable technical solutions aligned with both data architecture and operational support needs.
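To make the pipeline and incident-management responsibilities above concrete, here is a minimal, illustrative sketch of the kind of data-quality gate such pipelines typically apply before loading records into a master data domain. The record shape and field names (`claim_id`, `patient_id`, `amount`) are hypothetical examples, not drawn from any specific EHR or claims system, and real implementations would run on Databricks/Snowflake rather than plain Python.

```python
from dataclasses import dataclass

# Illustrative record shape; the fields are hypothetical placeholders.
@dataclass
class ClaimRecord:
    claim_id: str
    patient_id: str
    amount: float

def validate(records):
    """Split incoming records into valid rows and quarantined rows.

    Quarantined rows carry a list of reasons, which is what downstream
    incident reporting and root-cause analysis would consume.
    """
    valid, quarantined = [], []
    seen_ids = set()
    for r in records:
        reasons = []
        if not r.claim_id:
            reasons.append("missing claim_id")
        elif r.claim_id in seen_ids:
            reasons.append("duplicate claim_id")
        if r.amount < 0:
            reasons.append("negative amount")
        if reasons:
            quarantined.append((r, reasons))
        else:
            seen_ids.add(r.claim_id)
            valid.append(r)
    return valid, quarantined
```

Quarantining with explicit reasons, rather than silently dropping bad rows, is what makes the later incident reporting and compliance-audit steps possible.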
The Team
Join our AI & Engineering team in transforming technology platforms, driving innovation, and helping make a significant impact on our clients' success. You’ll work alongside talented professionals reimagining and re-engineering operations and processes that are critical to businesses. Your contributions can help clients improve financial performance, accelerate new digital ventures, and fuel growth through innovation.
AI & Engineering leverages cutting-edge engineering capabilities to build, deploy, and operate integrated/verticalized sector solutions in software, data, AI, network, and hybrid cloud infrastructure. These solutions are powered by engineering for business advantage, transforming mission-critical operations. We enable clients to stay ahead with the latest advancements by transforming engineering teams and modernizing technology & data platforms. Our delivery models are tailored to meet each client's unique requirements.
Qualifications
Required
- 6-10+ years of consulting and/or industry experience
- Completion of coursework (Egresado) in any pertinent field or industry
- Responsible for supporting and leading project workstreams and/or teams
- Identifies key drivers, defines problems and proposes solutions
- Advanced English level
- 5+ years in data engineering (healthcare data preferred), with significant hands-on expertise in Azure, Databricks, and Snowflake.
- Demonstrated experience delivering and operating enterprise-grade MDM frameworks and incident management processes for data platforms.
- Strong background in Python, SQL, Databricks (Spark/Delta Lake/Unity Catalog), Azure Data Factory/Synapse, Snowflake, dbt, and/or Airflow.
- Direct experience managing and remediating data pipeline or data quality incidents, including root-cause analysis and incident reporting in regulated environments.
- Deep knowledge of HIPAA, HITECH, and relevant healthcare data privacy requirements.
- Excellent collaboration and communication skills; ability to work with compliance, data governance, and clinical/business stakeholders.
Preferred Qualifications
- Experience deploying, running, and maintaining GenAI/LLM models within regulated healthcare data environments.
- Certification or experience with specific MDM platforms (e.g., Informatica, Reltio) or automated incident management solutions.
- Experience with FHIR, HL7, or other industry-specific interoperability/data standards.
- Databricks Certified Data Engineer (Associate or Professional)
- dbt Advanced
- Health IT certifications (e.g., Certified Professional in Healthcare Information and Management Systems (CPHIMS))
- MDM-specific or incident management platform certification
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership roles, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue growing throughout their careers.
Accommodations
We are committed to providing equal opportunity and reasonable accommodation for people with disabilities. To request a reasonable accommodation, contact our Talent Relations team at
As used in this posting, "Deloitte" means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see for a detailed description of the legal structure of Deloitte LLP and its subsidiaries.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
HeyDonto Mission & Role
HeyDonto builds reliable data pipelines that connect fragmented healthcare platforms to modern APIs. We synchronize and standardize data from both on-premise and cloud-based EHR systems into clean, interoperable formats. Our mission is simple: make healthcare data work the way software should — predictably, securely, and without silos.
The Role
As a Distinguished Engineer (L7) in the DevOps Tribe, you’ll define and evolve the infrastructure that powers HeyDonto’s ecosystem—from Kubernetes clusters and Terraform modules to developer tooling and multi-environment automation. You’ll lead through technical depth, setting standards for reproducibility, reliability, and cloud portability across every environment.
What You’ll Do
- Architect and evolve multi-environment infrastructure across GKE, CloudSQL, Confluent, Temporal, and Cloudflare, encoded in reusable Terraform modules and remote state.
- Lead deployment automation strategy — CLI orchestration and Helm releases — to keep clusters deterministically converged across environments.
- Design and enforce the secrets lifecycle integrating Terraform outputs, SOPS, and 1Password for secure, auditable rotation and distribution.
- Define and implement automated drift detection, IAM regression suites, and compliance guardrails for infrastructure reliability.
- Own the CUE-based configuration system that exports Compose stacks, environment templates, secrets, and Helm values through just export-cue.
- Shape environment parity and portability — abstract provider specifics behind clear interfaces (DNS, storage, ingress, identity) to reduce lock-in and enable repeatable deployments across clouds.
- Standardize vendor-neutral telemetry with OpenTelemetry and consistent log/metric conventions to keep observability portable.
- Establish portable identity patterns (OIDC, workload identity, least-privilege IAM mappings) that translate across providers.
- Mentor senior engineers, codify expectations in documentation and tooling, and steward technical decisions across tribes.
- Lead incident response and RCA, strengthening feedback loops between SRE and development teams.
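The drift-detection responsibility above can be sketched in miniature: compare the state an IaC tool declares with the state the live environment reports, and surface every mismatch. This is a hedged, stdlib-only illustration of the idea; in practice the desired state would come from Terraform outputs and the actual state from cloud or Kubernetes APIs, and the keys used here are hypothetical.

```python
def detect_drift(desired: dict, actual: dict) -> dict:
    """Return a key -> (desired, actual) map for every mismatch.

    `desired` stands in for rendered IaC state; `actual` stands in for
    what the live environment currently reports.
    """
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = (want, have)
    # Flag resources present in the environment but absent from the spec.
    for key in actual.keys() - desired.keys():
        drift[key] = (None, actual[key])
    return drift
```

Reporting unmanaged resources (present in `actual` but not in `desired`) alongside changed values is what lets a drift check double as a compliance guardrail.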
Tech You’ll Work With
- Languages: TypeScript, Python, Bash
- Infrastructure: Terraform (multi-provider), Helm, Kubernetes (GKE primary; portable to other managed K8s), Temporal Cloud, Confluent Cloud, Cloudflare
- Cloud-Agnostic Interfaces: OpenTelemetry, OIDC/OAuth2, CSI/Ingress abstractions, external-DNS patterns, OCI registries
- Configuration: CUE, Just, Docker Compose, SOPS, 1Password, env templates
- CI/CD: GitHub Actions, Conventional Commits, automated drift and policy checks
What We Value
- Clarity over cleverness — explicit, predictable systems.
- Idempotency, type safety, and observability in everything we build.
- Portability by design — clean interfaces, minimal provider coupling, documented escape hatches.
- Shared ownership of infrastructure and developer experience.
- Documentation and tooling as part of engineering craft.
- Reliability as the ultimate measure of quality.
Qualifications
Required
- 7+ years building and operating distributed systems or production infrastructure.
- Proven expertise with Terraform module design (multi-provider), Kubernetes/Helm operations, and environment automation.
- Experience designing portable architectures—clear separation of concerns, provider-agnostic interfaces, and migration-ready patterns.
- Advanced knowledge of secure secret distribution with SOPS and 1Password.
- Proficiency in Python, Node.js, and Bash for automation and operational tooling.
- Strong understanding of Kafka, Temporal, and distributed workflow systems.
- Track record of leading through influence—setting technical standards, mentoring seniors, and driving architectural coherence.
Preferred
- Experience designing and implementing solutions across multiple cloud providers (e.g., AWS, GCP, Azure) to ensure resilience and avoid vendor lock-in.
- Hands-on experience with OpenTelemetry rollouts to build a unified observability platform, helping proactively identify and resolve performance bottlenecks.
- Solid understanding of Kubernetes networking, especially configuring Ingress controllers and managing traffic flow.
- Familiarity with CUE or similar declarative configuration frameworks.
- Open-source contributions or published writing that demonstrates passion for systems thinking and quality craftsmanship.
Why HeyDonto
HeyDonto is a place where senior engineers work at depth. We build systems that last—secure, observable, portable, and self-documenting. We believe in small expert teams solving hard problems the right way, with full ownership from concept to delivery. If you value clarity, autonomy, and precision—and you want your work to make a measurable difference in real systems—this is the place for you.
- Work Type: Hybrid
- If you are interested in applying, please send your English Resume through LinkedIn or send it to mentioning the name of the role you are applying for in the subject of the email.
When applying, please include:
- Salary expectations
- Availability for interviews