
A health technology company in Berlin is seeking an experienced Senior Data Engineer to build high-performance data pipelines and create data products for AI consumption. The ideal candidate has extensive experience with ETL processes and cloud data warehousing, and strong proficiency in Python. The role offers flexible work arrangements and a supportive team culture focused on innovative data solutions.
We're seeking an experienced Senior Data Engineer to help shape the future of hearing care through cutting-edge data infrastructure. You will take full ownership of the complete data lifecycle, from ingestion and transformation to powering AI agents and LLMs through our semantic layer. This isn't traditional analytics; you're building data products that drive intelligent automation and decision-making at scale.
Design and build robust, high‑performance data pipelines using our modern stack (Airflow, Snowflake, Pulsar, Kubernetes) that feed directly into our semantic layer and data catalog.
Create data products optimized for consumption by AI agents and LLMs where data quality, context, and semantic richness are crucial.
Structure and transform data to be inherently machine‑readable, with rich metadata and clear lineage that powers intelligent applications.
Take responsibility from raw data ingestion through to semantic modeling, ensuring data is not just accurate but contextually rich and agent‑ready.
Champion best practices in building LLM‑consumable data products, optimize for both human and machine consumers, and help evolve our dbt transformation layer.
Experience building data products for AI/LLM consumption, not just analytics dashboards.
5+ years of hands‑on experience with complex ETL processes, data modeling, and large‑scale data systems.
Production experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift) on AWS, GCP, or Azure.
Proficiency in building and optimizing data transformations and pipelines in Python.
Experience with columnar storage, MPP databases, and distributed data processing architectures.
Ability to translate complex technical concepts for diverse audiences, from engineers to business stakeholders.
Experience with semantic layers, data catalogs, or metadata management systems.
Familiarity with modern analytical databases like Snowflake, BigQuery, ClickHouse, DuckDB, or similar systems.
Experience with streaming technologies like Kafka, Pulsar, Redpanda, or Kinesis.
audibene / hear.com is one of the fastest‑growing health technology companies ever. Our unique digital business model has revolutionised the industry and the way hearing care is provided. Since we started our journey in Berlin in 2012, we have scaled up our team from 2 to over 1,200 people in 8 international locations from Denver to Seoul. Driven by our belief that every person should hear well to live well, we have helped more than 200,000 customers bring back the joy of life.
As an equal‑opportunity employer, we welcome applications from candidates regardless of race, religion, gender identity, sexual orientation, age, disability or background. We are committed to inclusive hiring, and this data will allow us to measure our progress in attracting diverse talent to audibene.
We are aware that there are many different dimensions of diversity; however, we have decided to begin with tracking gender. To support us in our diversity efforts, we would really appreciate your voluntary self‑identification. Any information that you provide will be held in a confidential file and will not be considered in the hiring process. This data will also be deleted after 6 months, in accordance with GDPR.