

Lead Data Architect

Trust In SODA

München

Hybrid

EUR 80,000 - 110,000

Full-time

Yesterday

Summary

A leading data consultancy in Germany seeks a Lead Data Architect to design and implement cloud-native data platforms on Azure and Databricks. The successful candidate will have 7+ years of experience in data engineering and at least 2 years as a data architect, along with strong skills in Python/PySpark and Java/Scala. Fluent German is mandatory. This hybrid role offers the opportunity to work closely with talented engineering teams across multiple locations in Germany.

Qualifications

  • 7+ years in data or platform engineering, with at least 2 years as a data architect.
  • Experience designing and delivering ETL and ELT pipelines for large-scale systems.
  • Fluent in German.

Responsibilities

  • Architect cloud-native data platforms on Azure and Databricks.
  • Define and implement engineering standards for data processing.
  • Lead technical decisions across compute patterns and storage formats.
  • Work with engineering teams to implement observability and reliability.

Skills

Data platform engineering
Cloud architecture (Azure, Databricks)
Python/PySpark programming
Java/Scala programming
ETL/ELT pipeline design
Data governance and tooling

Job Description

Lead / Principal Data Architect | Germany (Hybrid)

I’m partnered with a firm that has been turning engineering excellence into consistent growth for over five decades, generating strong revenue while maintaining technical integrity and independence. They are now looking to hire a Lead Data Architect.

You’ll architect cloud-native data platforms on Azure and Databricks, including workspace separation, Unity Catalog integration, lakehouse and medallion structures, compute patterns, access models and cost-efficient orchestration. You’ll define engineering standards for PySpark processing, streaming, metadata management and CI/CD for data. You’ll also diagnose complex performance issues, advise on governance and shape the tooling that supports the full data lifecycle. Delivery happens in small teams of 1–10 people, so your decisions have real impact.
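For illustration only, here is a minimal PySpark sketch of the kind of bronze-to-silver medallion transformation this role would standardise. The catalog, table names and columns are hypothetical assumptions, not taken from the posting.

```python
# Minimal bronze-to-silver medallion step on Databricks (illustrative sketch).
# The three-part Unity Catalog table names and the schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

# Read raw ingested records from the bronze layer.
bronze = spark.read.table("main.bronze.orders_raw")

# Conform the data: drop rows without a business key, normalise types,
# and deduplicate on that key before promoting to silver.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount_eur", F.col("amount_eur").cast("decimal(12,2)"))
    .dropDuplicates(["order_id"])
)

# Write the conformed table to the silver layer as a Delta table
# (Delta is the default table format on Databricks).
silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.orders")
```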

You bring 7+ years in data or platform engineering and at least 2 years as a data architect. You’ve designed and delivered ETL and ELT pipelines, worked with large-scale distributed processing and understand how data platforms behave in production. You can code in Python/PySpark or are capable of reviewing, debugging and refactoring code to a high standard. Strong Java or Scala backgrounds are also welcome.

Expect:

  • Architect platforms built on Fabric, Databricks Lakehouse and modern Azure or AWS services
  • Influence ingestion and transformation patterns, schema evolution and Delta optimisation
  • Lead decisions across compute patterns, cluster sizing, storage formats and table design
  • Implement observability and reliability practices that scale under real workloads
  • Work with strong engineering teams who expect technical leadership, not oversight
  • Shape ML/AI infrastructure for model training, deployment and monitoring
  • Solve the challenging edge cases internal teams typically avoid

German fluency is mandatory.

Hybrid in Germany across multiple locations.

If this feels like a good fit, send over your profile and we’ll take it from there.
