

(Senior) Data Engineer (m/f/d)

GALVANY

Berlin

Hybrid

EUR 70,000 - 90,000

Full-time

Posted yesterday
Be among the first applicants


Summary

A green-tech startup in Berlin is seeking a Data Engineer to architect and maintain the data infrastructure for their AI-based Operating System. The role involves processing real-time energy data and building scalable data pipelines using Kafka and SQL. Ideal candidates have over 4 years of experience in fast-paced environments, strong coding skills in Python, and a commitment to data quality. This position offers competitive compensation and an opportunity to contribute to sustainable energy solutions.

Benefits

Competitive salary
Flexible work perks
In-person collaboration opportunities

Qualifications

  • 4+ years of experience in high-performance environments.
  • Proven end-to-end responsibility in data engineering.
  • Strong coding skills, particularly in Python and SQL.

Responsibilities

  • Design and build scalable data pipelines.
  • Ensure data quality and consistency.
  • Collaborate with ML engineers for data preparation.

Skills

Data pipeline design
Python proficiency
SQL skills
Experience with Kafka
Knowledge of Azure

Education

Relevant degree in Computer Science, Mathematics, or Data Engineering

Tools

Benthos
GoLang
PyTorch

Job Description

At GALVANY, our goal is to make climate-neutral living a reality for everyone. We focus on execution – developing concrete, smart solutions and making heat pumps, battery storage, and smart metering easy to access, reliable, and affordable. GALVANY-Tech drives the energy transition through software. We're building an AI-based Operating System for sales, planning, installation and operation of heat pumps and integrated energy systems – from single-family homes to multi-unit buildings, and across our Energy Community as a Virtual Power Plant.

We are a profitable green-tech startup and believe that sustainable impact and long-term growth are only possible when grounded in a healthy business model. Driven by customer value, clarity, and responsibility, we stand for high-quality heating solutions and an environment where people take responsibility and drive lasting impact.

The Role

As a Data Engineer at GALVANY, you'll architect and maintain the data infrastructure that powers our AI-based Operating System. You'll process real-time energy data from heat pumps and integrated systems, build pipelines that fuel AI models and analytics, and ensure data quality across our entire ecosystem – from individual homes to our Virtual Power Plant.

Responsibilities:
  • Design and build scalable data pipelines using Kafka, Benthos, Clickpipes and related tools.
  • Ensure data quality, consistency, and freshness across the energy ecosystem and internal processes.
  • Collaborate with ML engineers to deliver the right data in the right format for AI models.
  • Build monitoring and validation into pipelines from the start.
  • Translate business questions into data requirements and vice versa.
  • Leverage LLMs and AI tools to innovate data solutions and accelerate development.

Tech Stack: Data tools including Kafka, Benthos (data pipeline), Python, SQL; Backend with GoLang; ML with Python, PyTorch, and LLMs; General tools including Azure, GitHub, Linear, Notion.

Requirements
  • Experience. 4+ years of experience in high-performance environments (e.g. top-tier consulting, fast-scaling startups, or similar).

  • Track Record. Proven end-to-end responsibility in data engineering, from ideation to release.

  • Background. Strong background in relevant fields, e.g. Computer Science, Mathematics, Data Engineering, or similar.

  • Technical Skills. Strong coding skills with proficiency in Python and SQL. Experience with streaming architectures and data pipeline tooling. Fluent in spec-driven, AI-assisted development (e.g. Claude Code).

  • Mindset. Self-driven problem-solving mindset – no need for micromanagement or specific tickets.

  • Technical Aptitude. Technical mindset with a passion for understanding systems, data flows, and integrations, coupled with enthusiasm for continuous learning and problem solving.

  • Language. Fluent in English; German is a plus.

Skills and Qualities That Are Important to Us

  • Outcome-Led. You focus on value over volume. You embrace iteration, adapt quickly to new information, and prioritize what moves the needle for the business and its users.

  • Systems Thinker. You see the bigger picture. You understand how your work connects to the wider ecosystem – ensuring features contribute to a cohesive, scalable whole.

  • Pragmatic. You choose the simplest effective path to solve problems – especially by leveraging AI tools.

  • Customer Champion. You keep the end-user central in all decisions. You seek direct exposure to how customers experience the product.

  • Data Quality Guardian. You obsess over accuracy, freshness, and consistency. You build validation and monitoring into pipelines from the start.

  • Pipeline Architect. You design scalable data flows from source to insight. You balance real-time needs with batch efficiency.

Benefits
  • Strong Growth & High Impact. A unique opportunity to join during a hypergrowth phase and actively contribute to company success.

  • Compensation. Competitive salary and flexible perks (sports, mobility, learning) tailored to your needs.

  • Real-World Impact. Your work drives decarbonization – measurable in CO₂ savings, energy efficiency (kWh), and cost reductions (€).

  • Office. Prime location in Berlin Charlottenburg, regular company events and all-hands. We value in-person collaboration and connection, while partial remote work remains an option.

  • No Corporate Theater. Skip endless alignment meetings, politics and waiting for permission. You talk to the people who matter and ship.
