
Data Engineer

TN Italy

Bergamo

Remote

EUR 40,000 - 70,000

Full-time

3 days ago

Job description

A leading company is seeking a Data Engineer to design and implement data pipelines, ensuring data quality and performance. The role involves collaboration with various teams to deliver high-quality data solutions. This position is fully remote, offering an innovative work environment and a chance to grow your career.

Skills

  • Proficiency in Python and PySpark required.
  • Experience with CI/CD processes and data testing.

Duties

  • Design and implement data pipelines.
  • Ensure reliability and performance of data assets.
  • Collaborate with engineers and data scientists.

Knowledge

Software Engineering
Data Engineering
Communication
Teamwork
Problem Solving

Tools

Python
PySpark
Databricks
Azure Data Factory
Synapse Analytics

Role: Data Engineer

Herzumis is seeking a Data Engineer to join our team and support one of our clients.

Key responsibilities:
  1. Design and implement data pipelines in collaboration with data users and engineering teams.
  2. Ensure reliability, data quality, and optimal performance of data assets.
  3. Translate complex business and analytics requirements into high-quality data assets.
  4. Deliver high-quality code focusing on simplicity, performance, and maintainability.
  5. Redesign and implement existing data pipelines using the latest technologies and best practices where applicable.
  6. Collaborate with solution engineers, data scientists, and product owners to deliver end-to-end products.
  7. Support partners with proof of concept initiatives and technical questions related to data.
Required skills:
  1. Strong software/data engineering skills with proficiency in at least one programming language, preferably Python.
  2. Good knowledge of distributed computing frameworks, with PySpark being a must-have; additional frameworks are a plus.
  3. Familiarity with system design, data structures, algorithms, storage systems, and cloud infrastructure.
  4. Understanding of data modeling and data architecture concepts.
  5. Experience with CI/CD processes, data testing, and monitoring.
  6. Knowledge of Delta Lake protocol and Lakehouse architectures.
  7. Experience with Databricks and Azure data services such as Azure Data Factory, Synapse Analytics, or Fabric.
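To illustrate the kind of data testing that skill 5 refers to, here is a minimal sketch of a data-quality gate a CI pipeline might run before promoting a data asset. This is a plain-Python stand-in for illustration only; the function name and the checks chosen (completeness, key uniqueness) are assumptions, and a real pipeline in this stack would run equivalent checks with PySpark on Databricks.

```python
# Minimal data-quality gate sketch (plain Python stand-in for illustration;
# a real pipeline would run equivalent checks with PySpark/Databricks).

def check_rows(rows, required_fields):
    """Return a list of human-readable violations found in `rows`."""
    violations = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness: every required field must be present and non-null.
        for field in required_fields:
            if row.get(field) is None:
                violations.append(f"row {i}: missing '{field}'")
        # Uniqueness: the primary key must not repeat.
        if row.get("id") in seen_ids:
            violations.append(f"row {i}: duplicate id {row['id']}")
        seen_ids.add(row.get("id"))
    return violations

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # missing amount
    {"id": 2, "amount": 5.0},    # duplicate id
]
print(check_rows(rows, ["id", "amount"]))
```

A CI job would fail the build whenever the returned list is non-empty, blocking the deploy until the data issue is fixed.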
Additional skills:
  1. Ability to work effectively in teams with both technical and non-technical individuals.
  2. Strong communication skills to explain complex technical concepts clearly to non-technical audiences.
  3. Proficiency in English.
  4. Experience with YAML is a plus.
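In this stack, YAML typically appears in CI/CD pipeline definitions. A hypothetical Azure DevOps-style fragment (file layout and step names are illustrative assumptions, not part of the posting) might look like:

```yaml
# Hypothetical CI fragment for a data project (names illustrative):
# run unit and data tests on every push to main before deploying.
trigger:
  - main

steps:
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/
    displayName: Run unit and data tests
```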
Work mode:

Full remote

Join Us!

Become part of a team driven by innovation, talent, and a commitment to excellence. Your next career step starts here.

This announcement is addressed to all genders, ages, and nationalities, in accordance with applicable laws.
