Data Engineer

TN Italy

Pavia

Remote

EUR 40,000 - 60,000

Full time

Yesterday

Job description

A leading company in Italy is seeking a Data Engineer to join their remote team. The role involves designing data pipelines, ensuring data quality, and collaborating with various teams to deliver high-quality data assets. Ideal candidates should have strong software engineering skills, proficiency in Python, and experience with data engineering tools like PySpark and Databricks.

Skills

  • Proficiency in Python and strong software engineering skills.
  • Experience with CI/CD processes and data testing.

Responsibilities

  • Design and implement data pipelines in collaboration with teams.
  • Ensure data quality and optimal performance of data assets.

Knowledge

Software Engineering
Data Engineering
Communication
Python
CI/CD Processes

Tools

PySpark
Databricks
Azure Data Factory
Synapse Analytics

Role: Data Engineer

Herzumis is seeking a Data Engineer to join our team and support one of our clients.

Key responsibilities:
  1. Design and implement data pipelines in collaboration with data users and other engineering teams (a minimal PySpark sketch follows this list).
  2. Ensure reliability, data quality, and optimal performance of data assets.
  3. Translate complex business and analytics requirements into high-quality data assets.
  4. Deliver high-quality code focusing on simplicity, performance, and maintainability.
  5. Redesign and implement existing data pipelines to leverage new technologies and best practices where applicable.
  6. Collaborate with solution engineers, data scientists, and product owners to deliver end-to-end products.
  7. Support partners with proof of concept initiatives and data-related technical questions.
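
As a hedged illustration of the pipeline work described above (not part of the original posting), here is a minimal PySpark sketch of a batch step that reads a raw asset, applies basic quality rules, and writes a curated table; the paths, table names, and columns are invented for the example:

```python
# Minimal, hypothetical batch pipeline step: read raw orders, clean them,
# aggregate daily revenue per customer, and write a curated asset.
# All paths and column names are placeholders, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_revenue_example").getOrCreate()

# Read the raw asset (placeholder path and format).
orders = spark.read.parquet("/data/raw/orders")

# Basic quality rules: drop rows with missing keys, keep positive amounts.
clean = (
    orders
    .dropna(subset=["order_id", "customer_id"])
    .filter(F.col("amount") > 0)
)

# Translate a business requirement ("daily revenue per customer") into a data asset.
daily_revenue = (
    clean
    .groupBy("customer_id", F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Write the curated asset, partitioned for downstream read performance.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_revenue"
)
```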
Required skills:
  1. Strong software/data engineering skills and proficiency in at least one programming language, preferably Python.
  2. Good knowledge of distributed computing frameworks; PySpark is a must-have, others are an advantage.
  3. Familiarity with system design, data structures, algorithms, storage systems, and cloud infrastructure.
  4. Understanding of data modeling and data architecture concepts.
  5. Experience with CI/CD processes, data testing, and monitoring.
  6. Knowledge of Delta Lake protocol and Lakehouse architectures (see the sketch after this list).
  7. Experience with Databricks and Azure services such as Azure Data Factory, Synapse Analytics, or Fabric.
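
To illustrate points 5–7 above (data testing, Delta Lake, and a Databricks-style environment), the sketch below writes a Delta table and runs two lightweight checks; it assumes a Spark session with Delta Lake support configured, and the paths and expectations are invented for the example:

```python
# Hypothetical sketch: persist a curated asset as a Delta table and run
# simple data tests. Assumes Spark with Delta Lake support configured
# (e.g. on Databricks); paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta_quality_example").getOrCreate()

daily_revenue = spark.read.parquet("/data/curated/daily_revenue")

# Writing in Delta format gives ACID transactions and time travel (Lakehouse pattern).
daily_revenue.write.format("delta").mode("overwrite").save(
    "/data/curated/daily_revenue_delta"
)

# Minimal data tests that could run in a CI/CD job: unique keys and no
# negative revenue. In practice these might live in pytest or a dedicated
# data quality framework.
table = spark.read.format("delta").load("/data/curated/daily_revenue_delta")

duplicate_keys = (
    table.groupBy("customer_id", "order_date")
    .count()
    .filter(F.col("count") > 1)
)
assert duplicate_keys.count() == 0, "duplicate (customer_id, order_date) keys"

assert table.filter(F.col("revenue") < 0).count() == 0, "negative revenue values"
```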
Additional skills:
  1. Ability to work effectively with both technical and non-technical team members.
  2. Strong communication skills to convey complex technical concepts clearly to non-technical audiences.
  3. Proficiency in English.
  4. Experience with YAML is a plus (a small example follows this list).
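
Since YAML is mentioned as a plus, here is a small, hypothetical Python example of driving a pipeline from a YAML configuration; the PyYAML dependency and the configuration keys are assumptions made for illustration:

```python
# Hypothetical sketch: parameterize a pipeline run with a YAML config.
# Requires the PyYAML package (pip install pyyaml); all keys are invented.
import yaml

CONFIG_TEXT = """
pipeline:
  name: daily_revenue
  source_path: /data/raw/orders
  target_path: /data/curated/daily_revenue_delta
  partition_by: order_date
"""

config = yaml.safe_load(CONFIG_TEXT)["pipeline"]
print(f"Running {config['name']}: {config['source_path']} -> {config['target_path']}")
```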
Work mode:

Fully remote

Join Us!

Become part of a team driven by innovation, talent, and a commitment to excellence. Your next career step starts here.

This announcement is addressed to all genders, ages, and nationalities, in accordance with applicable laws and legislative decrees.
