Job Search and Career Advice Platform


Data & Test Engineer – Robotics & ML (m/f/d)

RobCo GmbH

München

Hybrid

EUR 60,000 - 80,000

Full-time

Today


Summary

A robotics technology company is seeking a Data & Test Engineer to build and maintain evaluation infrastructure for robot learning models. This role involves managing datasets, developing testing tools, and collaborating with cross-functional teams. Ideal candidates will have a strong background in data engineering and proficiency in Python and ML tools. The position offers a hybrid work model with flexible hours, focusing on improving the rigor and scalability of robot evaluations.

Benefits

Hybrid work model
Flexible hours
Modern equipment

Qualifications

  • Academic background in Data Engineering, Data Science, QA Engineering, Technical Test Engineering or related fields.
  • Strong experience managing datasets, data pipelines, versioning, and quality control.
  • Proficiency in Python and common ML/data tooling.

Responsibilities

  • Build evaluation infrastructure for robot learning models.
  • Develop tools for model testing and automate evaluation processes.
  • Manage and version multimodal datasets including logs and sensor data.
  • Define and maintain simulation testing procedures and real-world tests.
  • Establish evaluation metrics and track performance trends.
  • Collaborate cross-functionally with various engineering teams.

Skills

Managing datasets
Data pipelines
Python
ML/data tooling (NumPy, Pandas, PyTorch)
Creating metrics and dashboards
Documentation

Education

Academic background in Data Engineering, Data Science, or QA Engineering

Tools

Python
NumPy
Pandas
PyTorch
Spark
Ray Data
Unity
Unreal
Isaac Sim

Job Description
Your Mission

As a Data & Test Engineer for Robotics & ML Evaluation, you will own the ecosystem that measures how well our robot learning models perform, both in simulation and on real robots. You will build the datasets, metrics, tools, and testing workflows that enable ML researchers and robotics engineers to evaluate models reliably, reproducibly, and at scale.

Your work ensures that every model deployed on our robots is backed by clear, high-quality evaluation signals: robust datasets, well-defined metrics, automated test flows, and consistent test procedures. If you thrive at the intersection of data engineering, QA, simulation, and robotics, this role will give you ownership of a core pillar of our learning stack.

Your Responsibilities
  • Build evaluation infrastructure – Develop and maintain reproducible test frameworks for robot learning models and integrate them into CI/CD and release pipelines.

  • Develop tools for model testing – Enable ML engineers to run evaluations easily and obtain standardized performance metrics (success rates, robustness, generalization, latency, regressions).

  • Manage datasets & test sets – Organize, annotate, and version multimodal datasets including demonstrations, trajectories, logs, and sensor data.

  • Coordinate simulation & real-world tests – Define and maintain scenes, assets, and procedures for simulation testing; align real-world test setups to ensure reproducibility and safety.

  • Define metrics & reporting – Establish evaluation metrics, build dashboards or analytics tools, and track performance trends and regressions over time.

  • Collaborate cross-functionally – Work with ML, robotics, autonomy, simulation, and product teams to align evaluation with real-world requirements and maintain data quality standards.
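To illustrate the "standardized performance metrics" and regression tracking described above, here is a minimal, hypothetical Python sketch (not part of the posting; all names and thresholds are illustrative assumptions, not RobCo's actual tooling):

```python
# Illustrative sketch: a standardized success-rate metric over evaluation
# episodes, plus a simple regression check against a baseline model.
# All names and the 2% tolerance are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class EpisodeResult:
    task: str
    success: bool
    latency_ms: float


def success_rate(results: list[EpisodeResult]) -> float:
    """Fraction of evaluation episodes that succeeded."""
    if not results:
        return 0.0
    return sum(r.success for r in results) / len(results)


def is_regression(baseline: float, candidate: float, tolerance: float = 0.02) -> bool:
    """Flag a regression when the candidate drops more than `tolerance` below baseline."""
    return candidate < baseline - tolerance


results = [
    EpisodeResult("pick_place", True, 120.0),
    EpisodeResult("pick_place", False, 340.0),
    EpisodeResult("pick_place", True, 150.0),
    EpisodeResult("pick_place", True, 180.0),
]
rate = success_rate(results)                      # 3 of 4 episodes -> 0.75
flagged = is_regression(baseline=0.80, candidate=rate)  # 0.75 < 0.78 -> True
```

In practice such a check would run inside a CI/CD pipeline so that each candidate model is gated on its evaluation metrics before release.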

Your Profile
  • Academic background in Data Engineering, Data Science, QA Engineering, Simulation, Technical Test Engineering or related fields

  • Strong experience managing datasets, data pipelines, versioning, and quality control

  • Proficiency in Python and common ML/data tooling (NumPy, Pandas, PyTorch for evaluation; Spark, Ray Data, or similar tools for large-scale evaluation runs)

  • Experience creating metrics, analytics, dashboards, or performance reporting tools

  • Familiarity with simulation frameworks (Unity, Unreal, Isaac Sim, or equivalents)

  • Excellent documentation, organization, and communication skills

  • Comfortable working across multiple engineering disciplines and aligning on evaluation criteria

Why us?
  • Own a central, high-impact component of RobCo’s robot learning pipeline

  • Work closely with ML researchers, robotics engineers, and simulation experts

  • Define best practices for evaluation in a fast-evolving, high-growth robotics environment

  • Shape the reliability, rigor, and scalability of our robot learning stack

  • Hybrid work model, flexible hours, and modern equipment
