Senior Data Engineer (Databricks)

Eurofirms Group | People first

Italy

On-site

EUR 40,000 - 80,000

Full-time

7 days ago
Be among the first applicants

Job Description

Join a forward-thinking company that is redefining the agriculture and animal health industry. As a Senior Data Engineer, you will play a critical role in building and optimizing data pipelines within a cutting-edge Databricks environment. This position offers the chance to work with advanced analytics and collaborate with a talented team dedicated to transforming data into actionable insights. If you are passionate about data engineering and eager to innovate, this role provides an exciting opportunity to make a significant impact while working with emerging technologies and mentoring junior engineers. Embrace the challenge and help shape the future of our industry!

Skills

  • 5+ years of experience in data engineering with strong ETL skills.
  • Proficient in SQL, Python, and big data platforms.

Duties

  • Build and maintain scalable data pipelines for complex data processing.
  • Collaborate with teams to ensure timely and accurate data delivery.

Knowledge

Data Engineering
ETL Processes
SQL
Python
PySpark
Data Pipeline Creation
Communication Skills

Education

Bachelor's degree in Computer Science
Master's degree

Tools

Databricks
AWS
Azure
Azure Data Factory
Kedro Framework

Job Description

Our Company

At Kynetec, we're proud to be at the forefront of the intersection between agriculture, sustainability, and animal health. We’re redefining our industry with unparalleled insights and leading technology, whilst pursuing an ambitious growth plan to extend our influence from the food on our plates to the health of our livestock and the care of our beloved pets at home.

We owe our success to our industry experts. They are the driving force behind our reputation as a global leader in the industry; their innovative ideas and expertise have helped us achieve new heights. From seasoned insights specialists and client leaders to innovative tech geniuses. What connects us? A shared passion for Agriculture and Animal Health! We don’t settle for “business as usual”.

Each day, we are taking strides towards transforming our industry and improving the lives of people and animals around the world. If you’re looking for a company that challenges the norm and fosters a culture of innovation, Kynetec is the place for you.

The Role

The Senior Data Engineer will play a pivotal role in building, optimizing, and maintaining our data pipeline architecture, ensuring data quality and accessibility for cross-functional teams. This role will be a key member of our Advanced Analytics team and will focus on building and supporting our data pipelines in a Databricks environment.
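
To give a concrete flavour of this kind of work, the sketch below shows the shape a simple ingest-transform-publish pipeline often takes in a Databricks notebook or job using PySpark and Delta Lake. It is illustrative only; the storage path and table names (the landing-zone file location and `analytics.sales_daily`) are hypothetical placeholders, not Kynetec's actual data.

    # Illustrative sketch only: ingest a raw file drop, apply basic cleaning,
    # and publish a Delta table. All paths and table names are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks/jobs

    # Ingest: read a raw landing-zone extract from cloud storage (hypothetical path)
    raw = (
        spark.read
        .option("header", True)
        .option("inferSchema", True)
        .csv("abfss://landing@examplestorage.dfs.core.windows.net/sales/")
    )

    # Transform: normalise column names, drop rows missing a key, stamp the load time
    clean = (
        raw.select([F.col(c).alias(c.strip().lower().replace(" ", "_")) for c in raw.columns])
        .dropna(subset=["order_id"])
        .withColumn("ingested_at", F.current_timestamp())
    )

    # Store: write a managed Delta table that analytics and data science teams can query
    clean.write.format("delta").mode("overwrite").saveAsTable("analytics.sales_daily")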

Responsibilities

  1. Create and maintain large-scale data processing systems and infrastructure.
  2. Build robust, performant, and scalable data pipelines to ingest, transform, and store complex data from multiple sources.
  3. Collaborate with data science and software engineering team members to provide required data in an accessible, timely, and accurate manner.
  4. Optimize and refine processes, algorithms, and systems to enhance data quality and reliability.
  5. Ensure data privacy and security compliance.
  6. Implement monitoring, logging, and alert systems to ensure data pipeline health and performance (a minimal sketch follows this list).
  7. Collaborate with infrastructure and IT teams to ensure optimal data storage and retrieval mechanisms.
  8. Drive the optimization, testing, and tooling to improve data quality.
  9. Mentor junior data engineers, imparting knowledge and promoting best practices.
  10. Stay updated with emerging technologies and introduce them as needed to improve the data engineering ecosystem.
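
As an illustration of point 6, the following is a minimal sketch of a post-load health check that a scheduled job might run after each batch. The table name, the checked column, and the thresholds are hypothetical, and a production setup would typically push failures to an alerting channel rather than only logging them.

    # Illustrative sketch only: a lightweight data quality / freshness check run after a load.
    # Table name, column, and thresholds below are hypothetical examples.
    import logging

    from pyspark.sql import SparkSession, functions as F

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("pipeline_health")

    spark = SparkSession.builder.getOrCreate()

    def check_table_health(table: str, min_rows: int = 1000, max_null_pct: float = 0.05) -> bool:
        """Log basic row-count and null-rate metrics and flag the batch if thresholds are breached."""
        df = spark.table(table)
        total = df.count()
        null_ids = df.filter(F.col("order_id").isNull()).count()
        null_pct = (null_ids / total) if total else 1.0

        log.info("table=%s rows=%d null_order_id_pct=%.3f", table, total, null_pct)

        healthy = total >= min_rows and null_pct <= max_null_pct
        if not healthy:
            # In practice this would trigger an alert (e.g. fail the Databricks job or call a webhook)
            log.warning("Health check failed for %s", table)
        return healthy

    check_table_health("analytics.sales_daily")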

Requirements

  1. Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree is a plus.
  2. 5+ years of experience in data engineering, ETL processes, and database systems.
  3. Experience with Databricks (3+ years).
  4. Proficiency in SQL, Python, and PySpark, and experience with big data platforms.
  5. Experience creating data pipelines and working with workflow management tools.
  6. Strong experience with relational and NoSQL databases.
  7. Working experience in cloud platforms like AWS and Azure.
  8. Excellent communication and collaboration skills.

Preferred Skills

  1. Experience with Azure Data Factory and the Kedro framework.
  2. Experience with machine learning processes.
  3. Relevant industry certifications.