
Python Data Engineer

Everscale Group

Mexico

On-site

MXN 400,000 - 600,000

Full-time

Posted 18 days ago

Job Description

A leading technology consultancy based in Mexico is seeking an experienced Data Engineer. In this role, you will design and implement data processing pipelines, collaborate with cross-functional teams, and ensure data integrity. The ideal candidate has 5+ years of data engineering experience, strong skills in Python and ETL processes, and familiarity with the AWS Data Analytics stack. This position offers a challenging opportunity to shape how our clients leverage their data for business success.

Requirements

  • 5+ years of data engineering experience required.
  • Strong experience in designing and developing ETL processes.
  • Experience with the AWS Data Analytics stack is required.

Responsibilities

  • Design and implement scalable data processing pipelines using Python.
  • Collaborate with teams to translate data requirements into specs.
  • Ensure data quality and consistency across solutions.

Skills

Data engineering expertise
ETL and data pipeline design with Python
AWS Data Analytics stack
Data modeling knowledge
Consulting skills

Tools

AWS Glue
Amazon Athena
Terraform
PySpark

Full Job Description

We are looking for a Data Engineer to work on projects that help our clients scale their data solutions and make data-driven decisions. As a Data Engineer, you will work closely with the client to understand both their business processes and analytics needs, and you will design and build data pipelines and cloud data solutions. You will have the opportunity to guide your client through best practices in data lake, data processing, and data pipeline design to help them achieve their business goals. You will collaborate with your team, including analysts, dashboard developers, and technical project managers, to design and deliver a world-class solution. The ideal candidate has the balance of technical skills and business acumen to help the client better understand their core needs while recognizing technical limitations.

Responsibilities:
  • Design and implement data processing pipelines using Python, ensuring scalability, efficiency, and reliability.
  • Collaborate with cross-functional teams to understand data requirements and translate them into technical specifications.
  • Develop and maintain data integration solutions, ensuring data quality and consistency.
  • Utilize Python libraries and frameworks for data manipulation, transformation, and analysis.
  • Optimize and troubleshoot existing data pipelines to enhance performance and reliability.
  • Implement and maintain ETL (Extract, Transform, Load) processes for diverse datasets.
  • Work with databases and storage systems to manage and organize large volumes of data effectively.
  • Stay current with industry best practices, emerging technologies, and trends in data engineering.
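To illustrate the kind of ETL work described in the responsibilities above, here is a minimal sketch of an extract-transform-load pipeline in plain Python. The sample data, table name, and transformation rules are hypothetical, standing in for whatever a client's source systems and data model would actually require; in practice this work would typically use tools like AWS Glue or PySpark rather than the standard library.

```python
import csv
import io
import sqlite3

# Hypothetical raw export from a client source system.
RAW_CSV = """order_id,amount,region
1,120.50,north
2,85.00,south
3,42.25,north
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and normalize region names."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["region"].upper())
        for r in rows
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write transformed rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 247.75
```

The same extract/transform/load separation scales up directly: each stage becomes a job step that can be tested, monitored, and optimized independently.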
About you:
  • Collaborative partner who can patiently communicate at the appropriate level to both business and technology teams to understand business needs and pain points
  • Creative in meeting the client’s core needs with their technology
  • Determined and able to manage obstacles while maintaining a positive outlook
  • Self-driven lifelong learner passionate about learning new data tools and best practices
Qualifications:
  • Must-Have:
    • 5+ years of data engineering experience
    • Strong experience designing and developing ETL and data pipelines with Python
    • Experience working with AWS Data Analytics stack: Amazon Athena, AWS Glue, etc.
    • Experience working with businesses to understand the appropriate data model (relational, tabular, transactional) for their data solution
    • Understanding of data modeling (such as Kimball, Inmon, Data Vault design approaches)
    • Excellent foundation of consulting skills: analytical, written and verbal communication, and presentation skills
    • Demonstrated ability to identify business and technical impacts of user requirements and incorporate them into the project schedule
    • Deep experience designing and building ELT jobs to move and transform data from various source types and performing exploratory data analysis, data cleansing, and aggregation
Preferred Qualifications:
  • Experience with Terraform, Star schema, and PySpark
  • Experience working in the utility industry