
Senior / Staff Data Engineer

WeatherPromise

Berlin

On-site

EUR 70,000 - 90,000

Full-time

Posted today

Summary

A pioneering travel company in Berlin is seeking a Senior / Staff Data Engineer to design and build an end-to-end data platform that connects diverse data sources. You will be instrumental in establishing best-in-class data practices and enabling data-driven decision-making across the organisation. The ideal candidate brings strong expertise in Python, SQL, and cloud infrastructure, and the role offers a unique opportunity for ownership and impact within a rapidly growing company.

Qualifications

  • Strong proficiency in Python and SQL.
  • Experience with Spark (any flavour).
  • Hands-on experience with AWS or similar cloud infrastructure.
  • Familiarity with orchestration platforms like Airflow.
  • Experience building ETL/ELT pipelines in production.

Responsibilities

  • Design and implement ETL/ELT pipelines.
  • Centralise raw data into a reliable data warehouse.
  • Establish data engineering best practices.
  • Design scalable data models.
  • Support end-user experiments and A/B tests.

Skills

Python
SQL
Spark SQL
PySpark
AWS
ETL/ELT pipelines
Data modelling
Tableau

Job Description

Senior / Staff Data Engineer

Location: Berlin, Germany

Team: Data

Reports to: Head of Data

Who we are: WeatherPromise

WeatherPromise is revolutionising travel by doing something no one else dares to: guaranteeing good weather. If the weather doesn’t deliver, customers get their money back.

Behind this bold promise is data. Lots of it. Weather models, travel data, bookings, partner feeds, experiments, customer behaviour, and more — all coming together to power a smarter way to travel.

We’re building the data foundations that make this possible, and we’re looking for a Senior / Staff Data Engineer to help lead that journey.

This is a rare opportunity to join at an early stage, design the data ecosystem from the ground up, and grow into a leadership role as we scale.

Overview

We are looking for a Senior / Staff Data Engineer to scale our analytics capabilities and establish best-in-class data practices across the organisation.

You will design and build the end-to-end data platform that connects multiple internal and external data sources across destinations, powering analytics, reporting, experimentation, and decision-making at every level of the business.

You will collaborate closely with stakeholders across product, commercial, operations, and leadership to turn raw data into a strategic asset and enable true data-driven decision making.

Key Responsibilities
Build and scale our data platform
  • Design and implement robust ETL/ELT pipelines to ingest data from diverse internal and external sources
  • Centralise raw data into a reliable, well‑structured data warehouse/lake
  • Own data quality, reliability, and monitoring across pipelines
  • Establish data engineering best practices, standards, and documentation
  • Select and implement the right tooling and architecture for scale
Data modelling & analytics enablement
  • Design scalable, well‑structured data models that make data easy to use across the organisation
  • Transform raw data into analytics‑ready datasets for business intelligence and experimentation
  • Partner with Business Analytics and stakeholders to define and expose key metrics and KPIs
  • Ensure data is accessible, understandable, and trusted
Reporting & visualisation enablement
  • Own ETL pipelines end‑to‑end, from ingestion to dashboard consumption
  • Collaborate with analysts and stakeholders to power reporting and visualisations (Tableau, Looker, Superset, etc.)
  • Enable self‑serve analytics across teams through clean data structures
Experimentation & insights
  • Support the design and execution of end‑user experiments and A/B tests
  • Build datasets and pipelines required to analyse experimental results
  • Work with product and commercial teams to turn experiment results into actionable insights
Cross‑functional collaboration & leadership
  • Work closely with technical and non‑technical teams to understand data needs
  • Translate business questions into data solutions
  • Act as a key data partner to leadership and stakeholders
  • Help grow the data function from a small team into an organisation‑wide capability
What we’re looking for
Technical expertise
  • Strong proficiency in Python and SQL
  • Experience with Spark in any flavour (Spark SQL, PySpark, Scala, etc.)
  • Hands‑on experience with cloud infrastructure (AWS or similar)
  • Experience with orchestration platforms (Airflow or similar)
  • Experience building and maintaining ETL/ELT pipelines in production
  • Strong understanding of data modelling, warehousing, and best practices
  • Nice to have: Familiarity with BI and visualisation tools (Tableau, Looker, Superset, or similar)
  • Nice to have: Experience with stream processing frameworks (Kafka, Kinesis, Pub/Sub, or similar)
Mindset & ways of working
  • Demonstrated ability to work cross‑functionally with technical and non‑technical teams
  • A builder’s mindset — excited by ambiguity, eager to create structure and systems, biased toward action
  • Exceptional communication skills, including the ability to tell data stories to executives and partners
  • Strong business intuition and the ability to translate complex data into actionable decisions
  • Comfortable owning problems end‑to‑end in a fast‑moving environment
Why this role is exciting
  • You’ll design the data foundations of a category‑defining travel company
  • You’ll work with unique datasets at the intersection of weather, travel, and customer behaviour
  • You’ll move from being a hands‑on builder to shaping the future of the data function
  • You’ll have real ownership, real impact, and real influence from day one