
Senior ML Platform Engineer - Lepton

NVIDIA

Myrtle Point (OR)

Hybrid

USD 184,000 - 288,000

Full time


Job summary

A leading tech company is seeking an ML Platform Engineer to architect and build high-performance ML infrastructure. You will focus on creating automated platforms that empower scientists and engineers to train and deploy advanced ML models. Ideal candidates have extensive software engineering experience, proficiency with Infrastructure-as-Code tools such as Ansible and Terraform, and a solid understanding of ML workflows. The position offers a competitive salary and equity.

Benefits

Equity options
Comprehensive benefits package

Qualifications

  • 8+ years in software/platform engineering or SRE roles.
  • 3+ years focused on ML infrastructure or distributed compute systems.
  • Solid understanding of ML workflows from data preprocessing to deployment.

Responsibilities

  • Design and maintain ML platform infrastructure as code.
  • Apply SRE principles to resolve system issues.
  • Collaborate with ML researchers to understand infrastructure needs.

Skills

Infrastructure-as-Code (IaC)
SRE principles
Automation tooling
Python
Go
Kubernetes
Docker

Education

BS/MS in Computer Science, Engineering, or equivalent experience

Tools

Ansible
Terraform

Job description

NVIDIA is at the forefront of innovations in Artificial Intelligence, High-Performance Computing, and Visualization. Our invention—the GPU—functions as the visual cortex of modern computing and is central to groundbreaking applications from generative AI to autonomous vehicles. We are now looking for an ML Platform Engineer to help accelerate the next era of machine learning innovation.

In this role, you will architect, build, and scale our high-performance ML infrastructure using modern Infrastructure-as-Code practices. Your primary focus will be on creating reliable, automated platforms that empower scientists and engineers to train and deploy the most advanced ML models on some of the world’s most powerful GPU systems. Join our top team and apply your SRE and software engineering skills to craft robust, user-friendly platforms for seamless ML development.

What You'll Be Doing:
  • Design, build, and maintain our core ML platform infrastructure as code, primarily using Ansible and Terraform, ensuring reproducibility and scalability across large-scale, distributed GPU clusters.
  • Apply SRE principles to diagnose, troubleshoot, and resolve complex system issues across the entire stack, ensuring high availability and performance for critical AI workloads.
  • Develop robust internal automation and tooling for ML workflow orchestration, resource scheduling, and platform operations, with a strong focus on software engineering best practices.
  • Collaborate with ML researchers and applied scientists to understand infrastructure needs and build solutions that streamline their end-to-end experimentation.
  • Evolve and operate our multi-cloud and hybrid (on-prem + cloud) environments, implementing monitoring, alerting, and incident response protocols.
  • Participate in on-call rotation to provide support for platform services and infrastructure running critical ML jobs, driving root cause analysis and implementing preventative measures.
  • Write high-quality, maintainable code (Python, Go) to contribute to the core orchestration platform and automate manual processes.
  • Drive the adoption of modern GPU technologies and ensure smooth integration of next-generation hardware into ML pipelines (e.g., GB200, NVLink).
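The resource-scheduling tooling mentioned above can be illustrated with a minimal sketch. This is a hypothetical first-fit placement helper, not NVIDIA's actual scheduler; the node and job names are made up:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A GPU node with a fixed capacity (hypothetical model)."""
    name: str
    gpus_total: int
    gpus_free: int = field(init=False)

    def __post_init__(self):
        self.gpus_free = self.gpus_total

def schedule(jobs, nodes):
    """First-fit placement: assign each (name, gpus_needed) job to the
    first node with enough free GPUs; unplaceable jobs map to None."""
    placement = {}
    for job_name, gpus_needed in jobs:
        placement[job_name] = None
        for node in nodes:
            if node.gpus_free >= gpus_needed:
                node.gpus_free -= gpus_needed
                placement[job_name] = node.name
                break
    return placement

nodes = [Node("gpu-node-a", 8), Node("gpu-node-b", 8)]
jobs = [("pretrain", 8), ("finetune", 4), ("eval", 8)]
print(schedule(jobs, nodes))
# {'pretrain': 'gpu-node-a', 'finetune': 'gpu-node-b', 'eval': None}
```

Production schedulers (Slurm, Kubernetes) add preemption, priorities, and topology awareness on top of this basic bin-packing idea.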
What We Need To See:
  • BS/MS in Computer Science, Engineering, or equivalent experience.
  • 8+ years in software/platform engineering or SRE roles, including 3+ years focused on ML infrastructure or distributed compute systems.
  • Strong proficiency in Infrastructure-as-Code (IaC) tools, specifically Ansible and Terraform, with a proven track record of building and managing production infrastructure.
  • SRE-oriented mindset with extensive experience in diagnosing system-level issues, performance tuning, and ensuring platform reliability.
  • Solid understanding of ML workflows and lifecycle—from data preprocessing to deployment.
  • Proficiency in operating containerized workloads with Kubernetes and Docker.
  • Strong software engineering skills in languages such as Python or Go, with a focus on automation, tooling, and writing production-grade code.
  • Experience with Linux systems internals, networking, and performance tuning at scale.
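The Kubernetes and Python requirements above typically meet in platform tooling that generates workload manifests programmatically. A small sketch, with a hypothetical job name and image (JSON output is used because JSON is a strict subset of YAML); the `nvidia.com/gpu` resource key is the standard device-plugin name for requesting GPUs:

```python
import json

def training_job_manifest(name, image, gpus, command):
    """Build a Kubernetes batch/v1 Job spec as a plain dict."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 0,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": name,
                        "image": image,
                        "command": command,
                        # Request GPUs via the standard device-plugin resource.
                        "resources": {"limits": {"nvidia.com/gpu": gpus}},
                    }],
                }
            },
        },
    }

manifest = training_job_manifest(
    "resnet-train", "pytorch/pytorch:latest", 4, ["python", "train.py"]
)
print(json.dumps(manifest, indent=2))
```

Generating manifests from code rather than hand-editing YAML keeps GPU requests, images, and commands consistent and reviewable, in the same spirit as the IaC requirement.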
Ways To Stand Out From The Crowd:
  • Experience building or operating ML platforms supporting frameworks like PyTorch or TensorFlow at scale.
  • Deep understanding of distributed training techniques (e.g., data/model parallelism, Horovod, NCCL).
  • Expertise with modern CI/CD methodologies and GitOps practices.
  • Passion for building developer-centric platforms with great UX and strong operational reliability.
  • Proven ability to contribute code to complex orchestration or automation platforms.
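The data-parallelism point above rests on a simple idea: each worker computes gradients on its own data shard, then an all-reduce averages them so every replica applies the same update. A toy simulation of that averaging step (no real framework or NCCL; the gradient values are made-up numbers):

```python
def allreduce_mean(per_worker_grads):
    """Simulate an all-reduce: average each gradient component across
    workers and return the same averaged vector to every worker."""
    n_workers = len(per_worker_grads)
    dim = len(per_worker_grads[0])
    avg = [sum(g[i] for g in per_worker_grads) / n_workers for i in range(dim)]
    return [avg[:] for _ in range(n_workers)]  # each replica gets a copy

# Three workers, each with gradients from its own data shard.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
synced = allreduce_mean(grads)
print(synced[0])  # [3.0, 4.0] — identical on every worker
```

Libraries like NCCL and Horovod implement this collective efficiently over NVLink and the network (e.g., ring or tree all-reduce) rather than gathering everything to one place.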

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until November 8, 2025. NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
