Overview
This role is based in Mexico and can be fully remote if the selected candidate lives beyond commuting distance of our Mexico City office.
We are seeking an experienced Senior Data Engineer to join our multidisciplinary data product team. This role plays a critical part in architecting, developing, and optimizing scalable data pipelines, enabling advanced analytics and machine learning (ML) solutions across the organization.
As a Senior Data Engineer, you will design and implement robust data infrastructure, enforce best practices in data architecture and governance, and collaborate cross-functionally to deliver reliable, high-quality data products.
You will help shape the technical direction of data initiatives, improve the performance and reliability of existing systems, and support strategic, data-driven decision-making. This is a high-impact role ideal for a proactive engineer who thrives in collaborative, fast-paced environments.
If you are interested in applying, please submit a resume in English.
Responsibilities
Data Pipeline Development & Maintenance
- Design, build, and maintain efficient, scalable, and secure ETL pipelines to support data science and machine learning workloads.
- Optimize data flow and collection processes for both batch and real-time systems.
- Automate data ingestion, transformation, and integration workflows to support advanced analytics and ML pipelines.
- Monitor and troubleshoot pipeline issues to ensure reliability and scalability.
Data Infrastructure & Architecture
- Manage connectivity with Business Intelligence (BI) tools and other platforms.
- Implement best practices in database architecture, performance tuning, security, and cost efficiency.
- Work with various databases, data warehouses, and cloud storage systems.
Data Quality, Governance & Security
- Ensure the accuracy, consistency, and reliability of data across pipelines.
- Collaborate with team members to identify, document, and resolve data issues.
- Implement data validation and quality assurance processes and measures.
- Enforce security and compliance standards, including access controls and encryption for sensitive data.
Collaboration & Support
- Partner closely with Data Scientists and the MLOps team to enable model development, scoring, and deployment.
- Build and support pipelines for model retraining, performance tracking, and feature engineering.
- Translate analytical and product requirements into scalable data architecture solutions.
- Assist in monitoring and maintaining production ML models.
- Contribute to team documentation and knowledge sharing on pipelines and processes.
Learning & Development
- Mentor junior engineers and promote a culture of engineering excellence and peer learning.
- Participate in code reviews, sprint planning, and architectural discussions.
- Stay current on emerging technologies in data engineering, ML infrastructure, and cloud computing.
- Complete all responsibilities as outlined in the annual performance review and/or goal-setting process.
- Complete all special projects and other duties as assigned.
Note: All interviews will be conducted in English.
Qualifications
Education & Experience:
- Typically requires a Bachelor’s degree in data engineering, big data, data analytics/science, computer science, or another quantitative field, and a minimum of 5 years of relevant experience;
- OR a Master’s degree and a minimum of 3 years of relevant experience;
- OR a PhD with no prior experience required.
Technical Proficiency:
- Minimum of 5 years of professional experience in data engineering, with hands-on work supporting machine learning or data science teams.
- Proven experience working with big data tools and platforms such as Spark, Hadoop, Oracle, or AWS S3.
- Advanced proficiency in SQL, with experience in databases like SQL Server, MySQL, or Oracle.
- Strong understanding of data modeling for ML, including feature store management and serving strategies.
- Hands-on experience in building ETL pipelines, data warehousing solutions, and integrating analytic tools.
- Experience with MLOps tools such as MLflow, Airflow, or Kubeflow is a plus.
- Proficient in version control systems, CI/CD pipelines, and data testing frameworks.
- Familiarity with Databricks and/or Snowflake environments is a plus.
- Highly analytical, detail-oriented, and well-organized.
Communication & Collaboration:
- Excellent written and verbal communication skills with the ability to engage both technical and non-technical stakeholders.
- Fully bilingual in English and Spanish (written and verbal).
- Ability to work independently and in a self-organized team environment using agile methods.
- Highly proficient in Microsoft Office (PowerPoint, Excel, Word).
Base compensation ranges from MXN $45,000 to $50,000 per month. Specific offers are determined by various factors, such as experience, education, skills, certifications, and other business needs.
Cotiviti offers team members a competitive benefits package to address a wide range of personal and family needs.