
Data Engineer

AI Futures

Essen

Remote

EUR 60,000 – 75,000

Full-time

Posted 4 days ago

Summary

A unique opportunity for a Data Engineer to work with a leading healthcare company in Germany on a new data lake platform. Join a team focused on data-driven innovation and contribute to shaping a next-generation data infrastructure in a highly regulated environment. Leverage your skills in SQL, Python, and Spark to have a real-world impact.

Qualifications

  • Experience in developing ETL pipelines for large-scale data.
  • Solid experience working with structured and unstructured data.
  • Fluency in German (C1 or native level) is essential.

Responsibilities

  • Design and maintain end-to-end data pipelines.
  • Integrate diverse data sources into the data infrastructure.
  • Apply DevOps practices to keep data processes efficient and reliable.

Skills

SQL
Python
Spark

Job Description

Data Engineer | Berlin (Remote in Germany) | up to €75k

We have partnered with a leading healthcare company in Berlin that posted €10B in revenue last year and is at the forefront of data-driven innovation. In this highly regulated, high-security environment, you'll work on a greenfield data lake platform designed to handle vast volumes of sensitive healthcare data.

Their on-premise solution leverages Hadoop and other big data technologies to ensure robust, scalable performance. You'll play a key role in integrating new data sources and building reliable ETL pipelines, shaping the future of their data infrastructure from the ground up. This is a unique opportunity to work at the core of a critical industry, where your work has real-world impact.

The Role:

As a Data Engineer, you will:

  • Design, develop, and maintain robust end-to-end data pipelines handling vast amounts of data that power mission-critical analysis.
  • Integrate diverse data sources and contribute to shaping a next-generation data infrastructure.
  • Use modern DevOps practices to ensure a robust data process.
  • Drive continuous improvement, bringing fresh ideas and exploring new technologies to enhance efficiency and data quality.

The Candidate:

  • You have a good knowledge of SQL, Python, and Spark.
  • You have experience in developing and implementing ETL pipelines for large-scale data transfer.
  • You have experience handling structured and unstructured data from a variety of sources, as well as developing scripts for rapid data processing.
  • You are fluent in German (C1 or native level) – essential for operating in a highly regulated, German-speaking environment.

They are looking to move fast with this hire, so please apply or send your CV directly to [emailprotected].

AI Futures | Filling the AI skills gap
