Senior Systems Engineer (AI Cloud Infrastructure) Technical · Munich, Germany

Multiverse Computing

Munich

Hybrid

EUR 80.000 - 100.000

Full-time

Posted yesterday


Summary

A fast-growing deep-tech company in Munich seeks a Senior Systems Engineer to lead development of the software layer for its AI Gigafactory. The role requires extensive experience in systems programming and cloud architectures to ensure efficient orchestration of high-performance compute clusters. Candidates should bring strong Kubernetes and systems-debugging skills, along with experience in GPU management. Join a diverse team tackling real-world challenges in AI and quantum computing. The position offers hybrid working arrangements and several appealing perks.

Benefits

Indefinite contract
Equal pay guaranteed
Variable performance bonus
Signing bonus
Relocation package
Private health insurance
Educational budget eligibility
Flexible working hours
Career plan opportunities
Collaborative culture

Qualifications

  • 10+ years of software engineering experience.
  • Proficiency in Go, C++, or Rust.
  • Hands-on experience managing NVIDIA GPU clusters.
  • Deep understanding of Linux kernel, cgroups, namespaces.
  • Mastery of declarative infrastructure tools like Terraform, Ansible.

Responsibilities

  • Designing and developing the software layer for AI Gigafactory.
  • Architecting scheduling solutions for distributed training jobs.
  • Tuning the software-defined networking layer for low latency.
  • Writing custom Kubernetes Operators and CRDs.
  • Investigating and resolving deep systems issues.

Skills

Go (Golang)
C++
Rust
Kubernetes internals
NVIDIA GPU management
Linux kernel
Terraform
Ansible
Debugging complex systems

Tools

NVIDIA drivers
CUDA toolkit
Terraform
Ansible
Job description
Technical • Munich, Germany
Senior Systems Engineer (AI Cloud Infrastructure)

Multiverse Computing

Multiverse is a well-funded, fast-growing deep-tech company founded in 2019. We are the largest quantum software company in the EU and have been recognized by CB Insights (2023 and 2025) as one of the 100 most promising AI companies in the world.

With 180+ employees and growing, our team is fully multicultural and international. We deliver hyper-efficient software for companies seeking a competitive edge through quantum computing and artificial intelligence.

Our flagship products, CompactifAI and Singularity, address critical needs across various industries:

  • CompactifAI is a groundbreaking compression tool for foundational AI models based on Tensor Networks. It enables the compression of large AI systems—such as language models—to make them significantly more efficient and portable.
  • Singularity is a quantum- and quantum‑inspired optimization platform used by blue‑chip companies to solve complex problems in finance, energy, manufacturing, and beyond. It integrates seamlessly with existing systems and delivers immediate performance gains on classical and quantum hardware.

You’ll be working alongside world‑leading experts to develop solutions that tackle real‑world challenges. We’re looking for passionate individuals eager to grow in an ethics‑driven environment that values sustainability and diversity.

We’re committed to building a truly inclusive culture—come and join us.

Role description

We are looking for a Senior Engineer to lead a critical initiative within our Platform Engineering team: building the software layer for AI Gigafactory. In this role, you will move beyond consuming public cloud resources to architecting and building a private “Neo‑cloud” from the ground up. You will design the control planes that manage high‑performance compute clusters, orchestrate thousands of GPUs, and optimize the hardware‑software interface for massive AI workloads.

This role sits at the intersection of High‑Performance Computing (HPC), Kubernetes Internals, and Bare Metal Engineering.

What you will be doing
  • Building the Control Plane: Designing and developing the software layer (APIs, Controllers, Agents) that automates the lifecycle of bare‑metal AI infrastructure.
  • Orchestrating High‑Scale Compute: Architecting scheduling solutions for large‑scale distributed training jobs across massive clusters of GPUs (NVIDIA H200/B200/B300), ensuring efficient bin‑packing and gang scheduling.
  • Optimizing the Fabric: Tuning the software‑defined networking layer to support low‑latency interconnects (InfiniBand/RDMA/RoCEv2) essential for multi‑node training.
  • Developing Kubernetes Extensions: Writing custom Kubernetes Operators and CRDs to abstract complex hardware realities (topology awareness, GPU partitioning) into usable interfaces for our Data Scientists.
  • Hardware‑Level Debugging: Investigating and resolving deep systems issues, ranging from PCIe bus errors and NCCL communication timeouts to kernel panics on bare‑metal nodes.
  • Defining Standards: Creating the “Golden Image” for AI workloads, managing drivers, firmware, and OS optimizations to squeeze maximum performance out of the hardware.
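To illustrate the gang-scheduling constraint mentioned above, here is a minimal, self-contained sketch in Go. The node and job shapes are simplified illustrations, not Multiverse's actual code: a distributed training job is bound only if every replica can be placed at once, with fullest-first packing to reduce fragmentation.

```go
package main

import (
	"fmt"
	"sort"
)

// Node is a simplified view of a GPU server (fields are illustrative).
type Node struct {
	Name     string
	FreeGPUs int
}

// GangSchedule places a job of `replicas` workers, each needing
// `gpusPerReplica` GPUs, only if ALL replicas fit simultaneously
// (gang semantics). Nodes are packed fullest-first to reduce
// fragmentation. Returns the per-node replica counts, or ok=false
// with no partial binding.
func GangSchedule(nodes []Node, replicas, gpusPerReplica int) (map[string]int, bool) {
	// Work on a copy so a failed attempt leaves the inputs untouched.
	free := make([]Node, len(nodes))
	copy(free, nodes)
	sort.Slice(free, func(i, j int) bool { return free[i].FreeGPUs > free[j].FreeGPUs })

	assignment := map[string]int{}
	remaining := replicas
	for i := range free {
		for remaining > 0 && free[i].FreeGPUs >= gpusPerReplica {
			free[i].FreeGPUs -= gpusPerReplica
			assignment[free[i].Name]++
			remaining--
		}
	}
	if remaining > 0 {
		return nil, false // all-or-nothing: never bind a partial gang
	}
	return assignment, true
}

func main() {
	nodes := []Node{{"node-a", 8}, {"node-b", 4}, {"node-c", 2}}
	if placement, ok := GangSchedule(nodes, 3, 4); ok {
		fmt.Println("placed:", placement)
	} else {
		fmt.Println("insufficient capacity, job stays pending")
	}
}
```

A real scheduler would additionally account for NVLink/NUMA topology and interconnect locality; the all-or-nothing check is the core idea.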
Requirements
  • Systems Programming Expertise: 10+ years of software engineering experience with strong proficiency in Go (Golang), C++, or Rust. You must be comfortable building system agents, APIs, and CLI tools.
  • Deep Kubernetes Knowledge: You understand K8s internals beyond simple deployment. Experience with Custom Resource Definitions (CRDs), Operators, and the Kubernetes API server architecture.
  • GPU Ecosystem Experience: Hands‑on experience managing NVIDIA GPU clusters. Familiarity with NVIDIA drivers, CUDA toolkit, and the container runtime (NVIDIA Container Toolkit).
  • Linux Internals: Deep understanding of the Linux kernel, cgroups, namespaces, and system performance tuning.
  • Infrastructure as Code: Mastery of declarative infrastructure tools (Terraform, Ansible) but with a focus on provisioning physical hardware rather than just cloud VMs.
  • Problem Solving: A proven track record of debugging complex distributed systems where the root cause could be code, network, or silicon.
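As a small taste of the Linux-internals plumbing a node agent in this role might do, the sketch below parses a cgroup v2 limit value such as the contents of `memory.max`, where the literal `max` means unlimited. The function name and the path in the comment are illustrative assumptions, not part of any real agent.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ParseCgroupMax parses a cgroup v2 limit file value (e.g. memory.max):
// the literal "max" means no limit; otherwise the value is a byte count.
func ParseCgroupMax(raw string) (uint64, error) {
	s := strings.TrimSpace(raw)
	if s == "max" {
		return ^uint64(0), nil // sentinel for "unlimited"
	}
	return strconv.ParseUint(s, 10, 64)
}

func main() {
	// In a real agent this string would be read from a file like
	// /sys/fs/cgroup/<slice>/memory.max (path is illustrative).
	limit, err := ParseCgroupMax("8589934592\n")
	if err != nil {
		panic(err)
	}
	fmt.Printf("memory limit: %d GiB\n", limit>>30)
}
```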
Preferred qualifications
  • HPC Background: Experience working with traditional supercomputing schedulers (Slurm, PBS) or modern batch schedulers (Volcano, Kueue, Ray).
  • Bare Metal Provisioning: Experience with tools like Cluster API (CAPI), Metal3, Tinkerbell, Canonical MaaS, or OpenStack Ironic.
  • High‑Speed Networking: Knowledge of RDMA, InfiniBand, GPUDirect, and how to expose these technologies to containerized workloads.
  • AI/ML Familiarity: Understanding of how distributed training works (e.g., PyTorch Distributed, Megatron‑LM, DeepSpeed) and the infrastructure requirements of Large Language Models (LLMs).
  • Observability: Experience building monitoring for hardware health (DCGM) and distributed tracing for long‑running jobs.
Location

Applicants must have legal authorization to work in the country where the position is based.

Perks & Benefits
  • Indefinite contract.
  • Equal pay guaranteed.
  • Variable performance bonus.
  • Signing bonus.
  • Relocation package (if applicable).
  • Private health insurance.
  • Eligibility for educational budget according to internal policy.
  • Hybrid opportunity.
  • Flexible working hours.
  • A fast-paced environment, working on cutting-edge technologies.
  • Career plan. Opportunity to learn and teach.
  • A progressive company with a happy-people culture.
Equal Opportunity Employer

As an equal opportunity employer, Multiverse Computing is committed to building an inclusive workplace. The company welcomes people from all different backgrounds, including age, citizenship, ethnic and racial origins, gender identities, individuals with disabilities, marital status, religions and ideologies, and sexual orientations to apply.

