Data Engineer II

Plexus

Región Centro

On-site

MXN 1,393,000 - 1,742,000

Full-time

Posted yesterday

Vacancy Description

A technology solutions company located in Mexico is seeking a Data Engineer II to design and maintain scalable data pipelines and cloud infrastructure. The ideal candidate has over 5 years of experience, strong SQL and Python skills, and familiarity with major cloud platforms such as AWS and Azure. The role involves collaborating on data modeling and ensuring data quality through validation processes. This position also includes mentoring junior team members and enforcing data engineering best practices.

Requirements

  • 5+ years of related experience in data engineering preferred.
  • Proficiency in at least one programming language, such as Python or Java, for data manipulation.
  • Familiarity with data governance and security principles.

Responsibilities

  • Design and optimize ETL/ELT pipelines using cloud technologies.
  • Collaborate on data modeling and implement data quality validation.
  • Mentor junior team members and promote best practices in data engineering.

Skills

Cloud data platforms
SQL
Python
Data modeling and schema design
Big data technologies
Problem-solving

Education

Bachelor’s Degree

Tools

AWS
Azure
Google Cloud Platform
Spark

Job Description
Purpose Statement

The Data Engineer II designs, builds, and maintains robust and scalable data pipelines and infrastructure to support the organization's data needs, particularly related to cloud-based technologies. This includes acquiring, transforming, and optimizing large datasets to ensure high-quality, reliable, and accessible data for analytics, reporting, machine learning and AI initiatives.

Key Job Accountabilities
  • Lead Data Pipeline and Infrastructure Development: Design, develop, and optimize end-to-end ETL/ELT pipelines (on-premises and cloud) using Python, SQL, Spark, etc., to ingest data into scalable cloud infrastructure, including Data Lakes and Data Warehouses (AWS, Azure, or GCP). A minimal illustrative sketch of this pattern follows this list.
  • Collaborate on Data Modeling and Schema Design: Work with stakeholders to design and implement efficient data models and schema structures. Enforce data quality through robust validation, monitoring, and alerting mechanisms to ensure accuracy and reliability.
  • Optimize Performance, Cost, and Scalability: Proactively evaluate, tune, and improve the performance, scalability, and cost-efficiency of all data pipelines and underlying cloud infrastructure.
  • Enforce Best Practices and Documentation: Champion and enforce modern data engineering best practices, including version control, automated testing, and CI/CD. Create and maintain comprehensive technical documentation for all data systems and operational procedures.
  • Mentorship: Provide technical guidance, mentorship, and perform code reviews for junior team members, actively fostering a culture of continuous learning and high standards within the data engineering team.
  • All GT team members are responsible for upholding the organization's cybersecurity posture by adhering to security policies and procedures, actively participating in training, protecting data and systems, actively identifying and mitigating vulnerabilities, and promptly reporting any suspicious activity or potential security incidents.
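
The accountabilities above center on building ETL/ELT pipelines with data-quality validation in Python, SQL, and Spark. The snippet below is a minimal, illustrative PySpark sketch of that pattern, not code from Plexus: the storage paths, column names, and the 1% null threshold are hypothetical placeholders.

  # Minimal illustrative sketch; paths, columns, and thresholds are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

  # Extract: read raw files landed in a (hypothetical) data-lake path.
  raw = spark.read.json("s3://example-data-lake/raw/orders/")

  # Transform: normalize types and derive a partition column.
  orders = (
      raw.withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .withColumn("order_date", F.to_date("order_ts"))
  )

  # Validate: simple data-quality checks before loading; fail fast on a bad batch.
  total = orders.count()
  null_ids = orders.filter(F.col("order_id").isNull()).count()
  if total == 0 or null_ids / total > 0.01:  # 1% null tolerance chosen arbitrarily
      raise ValueError(f"Data quality check failed: {null_ids}/{total} null order_id values")

  # Load: write a partitioned, warehouse-friendly Parquet table.
  (orders.write
         .mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-warehouse/curated/orders/"))

In practice a job like this would be orchestrated, covered by automated tests, and deployed through version control and CI/CD, in line with the best-practices accountability above.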
Education/Experience Qualifications
  • Bachelor’s Degree with 5 or more years of related experience is preferred. An equivalent combination of education and/or experience will be considered.
Other Qualifications
  • Proven experience with cloud data platforms (e.g., AWS S3, Redshift, Glue; Azure Data Lake, Synapse, Data Factory; Google Cloud Storage, BigQuery, Dataflow).
  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java, Scala) for data manipulation and scripting.
  • Experience with big data technologies (e.g., Spark, Hadoop) is a plus.
  • Familiarity with data governance, data security, and compliance principles.
  • Excellent problem-solving, analytical, and communication skills.

This document does not represent a contract of employment and is not intended to capture every possible assignment the incumbent could be asked to perform.
