
Data Engineer – Limited Contract (ANÜ)

Darwin Recruitment

Berlin

Hybrid

EUR 65,000 - 85,000

Full-time

Today

Summary

A recruitment agency seeks a Senior Data Engineer in Berlin to design and implement a high-performance analytical platform. The role involves modeling data structures for analytical queries and building and maintaining batch data pipelines. Expertise in Spark SQL and Python is a must. The position is hybrid, with in-office presence 1-2 days a week preferred.

Qualifications

  • Expertise in Spark SQL for large-scale data processing.
  • Strong background in batch processing and target definition.
  • Competent in Python programming.

Responsibilities

  • Model and build data structures optimized for analytical queries.
  • Design, develop, and maintain batch data pipelines.
  • Define targets and implement efficient batch processing workflows.

Skills

Spark SQL expertise
Batch processing
Python programming
Clear communication
Understanding of ETL/ELT paradigms

Tools

Kafka
ClickHouse

Job Description

Overview

A platform engineering team is building a new backend data layer to guarantee data availability and power internal analytics for customer account managers. After a period of significant change, the team is expanding to meet fast-growing demand. They are seeking a Senior Data Engineer to partner closely on data modeling and pipeline development.

Why this role

You will help design and implement the foundation for a high-performance analytical platform. Your work will enable fast, reliable queries across large datasets and directly support internal applications used by account managers worldwide.

Key Responsibilities
  • Model and build data structures optimised for fast analytical queries (star-schema / OLAP).
  • Design, develop, and maintain batch data pipelines to move and transform large datasets (an illustrative sketch follows this list).
  • Define targets and implement efficient batch processing workflows.
  • Collaborate with stakeholders to understand requirements and clearly explain technical concepts.
  • Ensure data quality, integrity, and security while optimising performance and scalability.
  • Troubleshoot and resolve pipeline or processing issues.
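
Purely for illustration and not part of the vacancy: a minimal sketch of the kind of Spark SQL batch job the responsibilities above describe, building a star-schema style fact table for analytical queries. All table names, columns, and paths are hypothetical assumptions, not details from the posting.

```python
# Illustrative sketch only -- table names, columns, and paths are hypothetical,
# not taken from the job description.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily_fact_build").getOrCreate()

# Register raw batch inputs (assumed to exist as Parquet extracts).
spark.read.parquet("/data/raw/orders").createOrReplaceTempView("raw_orders")
spark.read.parquet("/data/raw/customers").createOrReplaceTempView("dim_customers")

# Spark SQL transform: aggregate facts joined to a dimension, the shape of a
# star-schema table optimised for fast analytical queries.
fact_orders = spark.sql("""
    SELECT
        o.order_date,
        c.customer_id,
        c.region,
        COUNT(*)      AS order_count,
        SUM(o.amount) AS total_amount
    FROM raw_orders o
    JOIN dim_customers c ON o.customer_id = c.customer_id
    GROUP BY o.order_date, c.customer_id, c.region
""")

# Partition the output by date so downstream analytical queries prune efficiently.
(fact_orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/data/marts/fact_orders"))
```
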
Required Skills
  • Spark SQL expertise for large-scale data processing (primary tool).
  • Strong background in batch processing and target definition.
  • Competent Python programming skills.
  • Clear communicator, comfortable engaging with technical and non-technical stakeholders.
  • Understanding of ETL/ELT paradigms and data modeling principles.
Nice to Have
  • Experience with streaming data and event-driven architectures (Kafka or similar); an illustrative sketch follows this list.
  • Background in software engineering for long-term platform development.
  • Familiarity with OLAP engines or columnar databases such as ClickHouse.
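
Again illustrative only and not from the posting: a minimal PySpark Structured Streaming sketch of the event-driven ingestion the nice-to-have items point at. The Kafka topic, broker address, and output paths are hypothetical assumptions.

```python
# Illustrative sketch only -- topic, broker address, and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("account_events_stream").getOrCreate()

# Read an event stream from Kafka (requires the spark-sql-kafka connector package).
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "account-events")
    .load())

# Kafka delivers key/value as binary; cast the payload to a string for downstream parsing.
parsed = events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Land the raw events in a queryable sink; the checkpoint makes the stream restartable.
query = (parsed.writeStream
    .format("parquet")
    .option("path", "/data/raw/account_events")
    .option("checkpointLocation", "/data/checkpoints/account_events")
    .start())

query.awaitTermination()
```
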
Contract Details
  • Work setup: Hybrid; the ability to be in the Berlin office 1-2 days per week is strongly preferred.
  • Start: As soon as possible.
Interview Process
  1. Panel Interview – broad discussion of background and experience.
  2. Technical Interview – deep dive into data engineering and Spark SQL expertise.
  3. Final Interview (if required) – to address any remaining questions.

Darwin Recruitment is acting as an Employment Business in relation to this vacancy.

Alex Hevey
