Senior Data Engineer (m/f/d)

Tentamus Group GmbH

Berlin

Hybrid

EUR 70,000 - 90,000

Full-time

Posted 3 days ago

Summary

A global product and safety firm is looking for a Senior Data Engineer to architect and develop cloud-native data solutions. This role entails collaborating with cross-functional teams to ensure high-quality data is available for various analytics and AI initiatives. Candidates should have extensive experience in data engineering, cloud services, and a strong background in SQL and Python. The position offers a hybrid work model based in Berlin, with long-term career prospects and a family-friendly environment.

Benefits

Flexible working hours
Individual training offers
Corporate Benefits
Team events

Qualifications

  • 5+ years’ experience in data engineering, cloud data architecture, or related field.
  • Deep understanding of data governance, data security, and quality frameworks.
  • Experience with large-scale AI/LLM workflows, including data preparation for machine learning.

Responsibilities

  • Design, develop, and optimize scalable ETL/ELT pipelines for data.
  • Lead and manage data solutions in cloud environments.
  • Support development of AI and LLM-powered applications.

Skills

SQL
Python
PySpark
Distributed data processing
Databricks
Apache Spark
Power BI
Data governance
Cloud data architecture

Education

Bachelor's degree in a relevant field

Tools

Azure
AWS
GCP
MLflow
CI/CD tools
Git

Job Description

Our Company

Tentamus Group is a global product and safety firm with a core presence in Europe, the UK, Israel, Asia, and North America. We test and consult on products involving the human body, with a strong focus on food, pharmaceuticals, nutritional supplements, medical devices, and cosmetics. Headquartered in Berlin, Tentamus is represented in 100 locations across 23 countries with just under 4,000 highly trained staff members.


Your contact

Tentamus Group GmbH
Human Resources
M: +49 1514 4063058
If you have any questions about the application process, please do not hesitate to contact us. We are always happy to answer your questions.


Required skills and qualifications

Technical & Professional

  • 5+ years’ experience in data engineering, cloud data architecture, or related field.
  • Expert in SQL, Python, PySpark, and distributed data processing.
  • Proficient in Databricks, Delta Lake, and Apache Spark in cloud environments (Azure, AWS, or GCP).
  • Solid experience with MLflow, CI/CD, and Git-based workflows.
  • Deep understanding of data governance, data security, and quality frameworks.
  • Hands-on with BI tools, especially Power BI, and DAX-based modeling.
  • Experience working with large-scale AI/LLM workflows, including data preparation for machine learning and prompt engineering environments.

Core Competencies

  • Analytical mindset with a focus on performance, scalability, and problem-solving.
  • Strong communication skills to engage both technical and non-technical stakeholders.
  • Passion for innovation, AI/ML technologies, and continuous improvement.

Agile & Product Delivery

  • Strong experience working in Agile environments (Scrum/Kanban), using tools like Jira or Azure DevOps.
  • Proven track record of delivering data products from concept to production.
  • Familiarity with data mesh, domain-oriented ownership, and data-as-a-product principles.

Languages

  • Written and oral fluency in English and German (C1 level).

Important note: A valid work permit for Germany is required for non-EU citizens. Unfortunately, applications without a valid work permit cannot be considered.

Summary/Objective

Job Title: Senior Data Engineer (m/f/d)
Reports to: Data & Analytics Manager
Department: IT (Data & Analytics)
Location: Berlin Office / Hybrid
Employment Type: Full-Time

As a Senior Data Engineer (m/f/d), you will be instrumental in architecting, developing, and scaling cloud-native data solutions that power our advanced analytics, machine learning, and AI initiatives. You’ll work closely with cross-functional teams, including data scientists, AI/ML engineers, and business analysts, to ensure the availability of high-quality, well-governed data for operational and strategic decision-making. This role provides a unique opportunity to work at the intersection of cloud engineering, modern data platforms like Databricks, and AI infrastructure.


We Offer
  • Challenging tasks in a healthy, owner-managed family business
  • Permanent employment contract with long-term career prospects
  • Flexible working hours and hybrid options, supporting a strong work-life balance
  • A motivated team, flat hierarchies, and a very family-friendly working environment
  • Individual training offers (Tentamus Academy) and the opportunity to help shape the growth of a medium-sized company
  • Joint team events and leisure opportunities
  • Corporate Benefits

Primary Responsibilities

Data Engineering & Architecture

  • Design, develop, and optimize scalable ETL/ELT pipelines for structured and unstructured data using Python, SQL, Spark, and cloud-native technologies (a minimal PySpark sketch follows this list).
  • Architect data solutions leveraging Databricks, Delta Lake, Apache Spark, and cloud-based orchestration frameworks (e.g., Azure Data Factory, Apache Airflow).
  • Implement modular and reusable data models (dimensional/star/snowflake) to support analytics, AI, and LLM workflows.
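
As an illustration of the kind of pipeline work described above, a minimal PySpark/Delta Lake sketch might look like the following; the paths, table, and column names are hypothetical placeholders rather than part of our actual platform.

    # Minimal ELT sketch: read raw files, apply light cleansing, and append
    # the result to a Delta table. Paths and names are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("elt_sketch").getOrCreate()

    # Extract: raw, semi-structured input (JSON in this example).
    raw = spark.read.json("/mnt/raw/lab_results/")

    # Transform: deduplicate, type the timestamp, and stamp the load date.
    cleaned = (
        raw.dropDuplicates(["sample_id"])
           .withColumn("tested_at", F.to_timestamp("tested_at"))
           .withColumn("load_date", F.current_date())
           .filter(F.col("sample_id").isNotNull())
    )

    # Load: append into a Delta table that downstream models consume.
    (
        cleaned.write.format("delta")
               .mode("append")
               .partitionBy("load_date")
               .saveAsTable("analytics.fact_lab_results")
    )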

Cloud & Platform Development

  • Lead and manage data solutions in cloud environments (Azure, AWS, or GCP), with a strong focus on Databricks, Lakehouse architectures, and serverless computing.
  • Integrate MLOps capabilities, including model tracking, versioning, and automated deployment using MLflow and CI/CD tools (e.g., GitHub Actions, Azure DevOps); a minimal MLflow tracking sketch follows this list.
  • Implement and maintain data cataloging, metadata management, and data governance using tools like Unity Catalog.
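
To make the MLflow workflow above concrete, a minimal tracking sketch could look like this; the experiment name, model, and metric are purely illustrative and not tied to any existing project.

    # Minimal MLflow sketch: log parameters, a metric, and the trained model
    # for a toy classifier. Everything here is illustrative only.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("quality-prediction-demo")
    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=100, random_state=42)
        model.fit(X_train, y_train)

        accuracy = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("n_estimators", 100)
        mlflow.log_metric("accuracy", accuracy)
        mlflow.sklearn.log_model(model, artifact_path="model")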

LLM, AI & Advanced Analytics Enablement

  • Support development of AI and LLM-powered applications, ensuring optimized data pipelines for model training, inference, and monitoring.
  • Collaborate with Data Scientists and ML Engineers to deploy and scale ML/AI models, providing performant and explainable data features.

Collaboration & Stakeholder Engagement

  • Act as a technical liaison between data engineering and business domains, helping to shape and translate data needs into robust technical solutions.
  • Provide mentorship to junior engineers and contribute to code reviews, architectural discussions, and knowledge-sharing sessions.

Visualization & BI Integration

  • Support BI teams with semantic modeling and data curation for Power BI, enabling self-service analytics and AI-integrated dashboards.
  • Build reusable and scalable data marts and Power BI datasets, optimizing performance and governance.

Documentation & Best Practices

  • Maintain clear, comprehensive documentation for pipelines, architecture, data models, and operational procedures.
  • Drive engineering best practices, including version control, testing (unit/integration), monitoring, and alerting (a small pytest sketch follows this list).
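
As a small illustration of the testing practice mentioned above, transformation logic can be kept in pure functions and covered with pytest; the business rule shown here (flagging out-of-range pH values) is invented for the example.

    # Hypothetical unit test: a pure transformation function, testable
    # without any Spark or cloud dependency.
    import pytest

    def flag_out_of_range(value: float, low: float = 4.0, high: float = 9.0) -> bool:
        """Return True when a measured value falls outside the accepted range."""
        return value < low or value > high

    @pytest.mark.parametrize(
        "value, expected",
        [(7.0, False), (3.5, True), (9.5, True), (4.0, False)],
    )
    def test_flag_out_of_range(value, expected):
        assert flag_out_of_range(value) == expected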