
Senior Data Engineer

SR2 | Socially Responsible Recruitment | Certified B Corporation

Frankfurt

Hybrid

EUR 65,000–90,000

Full-time

14 days ago

Summary

A recruitment firm seeks a Senior Data Engineer in Germany to build scalable data pipelines and support AI/ML initiatives. The ideal candidate will have over 3 years of experience and strong knowledge of Databricks and cloud platforms, particularly Azure. This role offers a competitive salary, performance bonuses, and a hybrid work model with an international team.

Benefits

Competitive salary package + performance bonus
Hybrid work model
Professional development budget & certifications
International, diverse, and collaborative work environment

Qualifications

  • 3+ years of professional experience as a Data Engineer, preferably in a cloud environment.
  • Strong knowledge of cloud platforms with preference for Azure.
  • Fluent in English; German proficiency is a strong plus.

Responsibilities

  • Design, develop, and maintain ETL/ELT pipelines on Databricks.
  • Collaborate with Data Scientists to operationalize machine learning models in production.
  • Optimize data workflows for performance, scalability, and cost efficiency.

Skills

Data Engineering
Databricks (PySpark, SQL, Delta Lake)
Cloud Platforms (Azure, AWS, GCP)
Python Programming
Machine Learning Integration
Data Governance
APIs & Automation
CI/CD and DevOps Practices

Education

Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field

Tools

MLflow
Power BI
Tableau
Job Description

Overview

Job Title: Senior Data Engineer – Databricks & AI

Location: Germany (Hybrid: Berlin / Munich / Remote within Germany)

Employment Type: Full-time, Permanent

About the Role

We are looking for an experienced Data Engineer to join our client’s growing Data & AI team. The successful candidate will be responsible for building and optimizing scalable data pipelines, enabling advanced analytics, and supporting AI/ML initiatives. You will play a key role in shaping the data infrastructure and ensuring smooth collaboration with data scientists, analysts, and business stakeholders.

Responsibilities
  • Design, develop, and maintain ETL/ELT pipelines on Databricks (PySpark, Delta Lake, SQL).
  • Collaborate with Data Scientists to operationalize machine learning models in production.
  • Optimize data workflows for performance, scalability, and cost efficiency.
  • Work closely with cross-functional teams to define data governance, quality, and security standards.
  • Integrate multiple structured and unstructured data sources into a unified lakehouse architecture.
  • Support AI/ML model deployment and monitoring.
Required Skills & Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
  • 3+ years of professional experience as a Data Engineer (preferably in a cloud environment).
  • Hands-on experience with Databricks (PySpark, SQL, Delta Lake).
  • Strong knowledge of cloud platforms (Azure, AWS, or GCP; Azure preferred).
  • Proven experience with AI/ML pipelines and integrating ML models into production.
  • Solid programming skills in Python and experience with APIs & automation.
  • Familiarity with data governance, CI/CD, and DevOps practices.
  • Fluent in English; German proficiency is a strong plus.
Nice to Have
  • Experience with MLflow, Feature Stores, and MLOps practices.
  • Knowledge of BI tools (Power BI, Tableau).
  • Prior experience in financial services, manufacturing, or e-commerce domains.
Benefits
  • Competitive salary package + performance bonus.
  • Hybrid work model (office presence in Berlin or Munich, flexible remote options).
  • Cutting-edge projects in Data, AI, and Cloud.
  • Professional development budget & certifications.
  • International, diverse, and collaborative work environment.