Python Data Engineer

Everscale Group

Ciudad de México

On-site

USD 60,000 - 100,000

Full-time

Posted 30+ days ago

Vacancy description

An established industry player is seeking a skilled Data Engineer to join their innovative team. In this role, you will design and implement robust data processing pipelines, ensuring scalability and efficiency. You will collaborate with cross-functional teams to understand business needs and translate them into technical solutions. This is an exciting opportunity to work with cutting-edge technologies and guide clients in optimizing their data strategies. If you are a self-driven learner passionate about data engineering and eager to make a significant impact, this role is perfect for you.

Experience

  • 5+ years of data engineering experience with a strong focus on ETL and data pipelines.
  • Deep experience using Python for data manipulation and building data solutions.

Responsibilities

  • Design and implement scalable data processing pipelines using Python.
  • Collaborate with teams to translate data requirements into technical specifications.

Skills

Data Engineering
ETL Processes
Python
Data Modeling
Consulting Skills

Education

Bachelor's Degree in Computer Science or related field

Tools

AWS Data Analytics Stack
Terraform
PySpark

Job description

We are looking for a Data Engineer to work on interesting projects that help our clients scale their data solutions and make data-driven decisions. As a Data Engineer, you will work closely with the client to understand both their business processes and analytics needs, and design and build data pipelines and cloud data solutions. You will have the opportunity to guide your client through best practices in data lake, data processing, and data pipeline design to help them achieve their business goals. You will collaborate with your team, including analysts, dashboard developers, and technical project managers, to design and deliver a world-class solution. The ideal candidate balances technical skills with the business acumen to help the client better understand their core needs while recognizing technical limitations.

Responsibilities:
  • Design and implement data processing pipelines using Python, ensuring scalability, efficiency, and reliability.
  • Collaborate with cross-functional teams to understand data requirements and translate them into technical specifications.
  • Develop and maintain data integration solutions, ensuring data quality and consistency.
  • Utilize Python libraries and frameworks for data manipulation, transformation, and analysis.
  • Optimize and troubleshoot existing data pipelines to enhance performance and reliability.
  • Implement and maintain ETL (Extract, Transform, Load) processes for diverse datasets (a minimal Python sketch follows this list).
  • Work with databases and storage systems to manage and organize large volumes of data effectively.
  • Stay current with industry best practices, emerging technologies, and trends in data engineering.
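
For context only, the kind of Python ETL pipeline referenced in the responsibilities above might look like the minimal sketch below. The file paths and column names are hypothetical, not part of this posting, and it assumes pandas with a Parquet engine is installed.

    # Hypothetical ETL sketch in Python; paths and columns are invented for illustration.
    import pandas as pd

    def extract(path: str) -> pd.DataFrame:
        # Extract: read a raw CSV export from a hypothetical source system.
        return pd.read_csv(path, parse_dates=["order_date"])

    def transform(raw: pd.DataFrame) -> pd.DataFrame:
        # Transform: drop incomplete rows, then aggregate revenue per customer per day.
        clean = raw.dropna(subset=["customer_id", "amount"])
        daily = clean.groupby(["customer_id", clean["order_date"].dt.date])["amount"].sum()
        return daily.reset_index(name="daily_revenue")

    def load(df: pd.DataFrame, path: str) -> None:
        # Load: write the curated result as Parquet for downstream analytics.
        df.to_parquet(path, index=False)

    if __name__ == "__main__":
        load(transform(extract("raw/orders.csv")), "curated/daily_revenue.parquet")

In practice this logic would typically run inside an orchestrated job (for example AWS Glue, which the role mentions) rather than a standalone script.
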
About you:
  • Collaborative partner who can patiently communicate at the appropriate level to both business and technology teams to understand business needs and pain points.
  • Creative in meeting the client’s core needs with the technology they already have.
  • Determined and able to manage obstacles while maintaining a positive outlook.
  • Self-driven lifelong learner passionate about learning new data tools and best practices.
Qualifications:
  • Must-Have:
    • 5+ years of data engineering experience.
    • Strong experience designing and developing ETL and data pipelines with Python.
    • Experience working with AWS Data Analytics stack: Amazon Athena, AWS Glue, etc. (an illustrative Athena query follows this list).
    • Experience working with businesses to understand the appropriate data model (relational, tabular, transactional) for their data solution.
    • Understanding of data modeling (such as Kimball, Inmon, Data Vault design approaches).
    • Excellent foundation of consulting skills: analytical, written and verbal communication, and presentation skills.
    • Demonstrated ability to identify business and technical impacts of user requirements and incorporate them into the project schedule.
    • Deep experience designing and building ELT jobs to move and transform data from various source types and performing exploratory data analysis, data cleansing, and aggregation.
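
As a point of reference for the AWS Data Analytics stack item above, querying a Glue-cataloged table through Amazon Athena from Python can look roughly like the sketch below. The database, table, and S3 output bucket are hypothetical, and it assumes boto3 is installed with AWS credentials already configured.

    # Hypothetical Athena query sketch; database, table, and bucket names are invented.
    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Start an asynchronous query against a Glue-cataloged table.
    run = athena.start_query_execution(
        QueryString=("SELECT customer_id, SUM(daily_revenue) AS revenue "
                     "FROM daily_revenue GROUP BY customer_id"),
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = run["QueryExecutionId"]

    # Poll until the query finishes, then read the first page of results.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        print(rows[:5])
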
Preferred Qualifications:
  • Experience with Terraform, Star schema, and PySpark (a brief PySpark star-schema sketch follows this list).
  • Experience working in the utility industry.
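
Likewise, the PySpark and star-schema items above might translate into something like the join below, where a fact table is enriched with a dimension table. The lake paths and column names are invented for illustration and assume an existing Spark environment.

    # Hypothetical PySpark star-schema sketch; paths and columns are invented.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

    # Fact table: one row per sale. Dimension table: one row per customer.
    fact_sales = spark.read.parquet("s3://example-lake/curated/fact_sales/")
    dim_customer = spark.read.parquet("s3://example-lake/curated/dim_customer/")

    # Join the fact to its dimension on the surrogate key, then aggregate by segment.
    revenue_by_segment = (
        fact_sales.join(dim_customer, on="customer_key", how="left")
                  .groupBy("customer_segment")
                  .agg(F.sum("amount").alias("total_revenue"))
    )
    revenue_by_segment.show()
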