ML Infrastructure Engineer - Distributed Training, AWS Neuron, Annapurna Labs

Amazon

Cupertino (CA)

On-site

USD 99,500 - 200,000

Full time

6 days ago

Job summary

An innovative company is seeking a Senior Machine Learning Engineer to join their Distributed Training team. This role focuses on developing and optimizing large-scale ML models using cutting-edge technology. You'll collaborate with experts to enhance distributed training capabilities in popular frameworks like PyTorch and JAX, leveraging specialized AI hardware. This position offers a unique opportunity to work at the intersection of machine learning and hardware acceleration, providing an exciting pathway for growth in the rapidly evolving field of ML infrastructure. If you're passionate about technology and eager to tackle complex challenges, this role is perfect for you.

Qualifications

  • Earned or will earn a Bachelor's or Master's degree between Dec 2022 and Sep 2025.
  • Experience with ML frameworks, particularly PyTorch and/or JAX.

Responsibilities

  • Develop and improve distributed training capabilities in ML frameworks.
  • Optimize ML models to run efficiently on AWS's custom AI chips.

Skills

C++
Python
Machine Learning
Parallel Computing
CUDA Programming

Education

Bachelor's Degree
Master's Degree

Tools

PyTorch
JAX
CUDA
TensorRT
vLLM

Job description

By applying to this position, your application will be considered for all locations we hire for in the United States.

Annapurna Labs designs silicon and software that accelerates innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago—even yesterday. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world.

AWS Neuron is the complete software stack for AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2), our cloud-scale machine learning accelerators. This role is for a Senior Machine Learning Engineer on the Distributed Training team for AWS Neuron, responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale Large Language Models (LLMs) such as GPT and Llama, as well as Stable Diffusion, Vision Transformers (ViT), and many more.

The ML Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trainium instances. Experience training these large models using Python is a must. Distributed training libraries such as FSDP (Fully-Sharded Data Parallel), DeepSpeed, and NeMo are central to this work, and extending them to the Neuron-based system is key.
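The core idea behind FSDP mentioned above — each worker persistently holding only a 1/world_size shard of the parameters, with the full tensor reassembled (all-gathered) just before it is needed — can be sketched in plain NumPy. This is a toy, single-process illustration of the sharding arithmetic, not the real PyTorch FSDP API:

```python
import numpy as np

def shard_params(flat_params: np.ndarray, world_size: int) -> list:
    """Split a flat parameter vector into equal-length per-rank shards,
    zero-padding the tail the way FSDP pads its flat parameter."""
    shard_len = -(-flat_params.size // world_size)  # ceil division
    padded = np.zeros(shard_len * world_size, dtype=flat_params.dtype)
    padded[: flat_params.size] = flat_params
    return [padded[r * shard_len : (r + 1) * shard_len] for r in range(world_size)]

def all_gather(shards: list, orig_size: int) -> np.ndarray:
    """Reassemble the full parameter vector from the per-rank shards
    (a stand-in for the all-gather collective that FSDP issues just
    before each layer's forward/backward compute)."""
    return np.concatenate(shards)[:orig_size]

params = np.arange(10, dtype=np.float32)     # 10 toy parameters
shards = shard_params(params, world_size=4)  # each of 4 ranks holds 3 values
restored = all_gather(shards, params.size)
```

With 10 parameters and 4 ranks, each shard holds ceil(10/4) = 3 values (two of the twelve slots are padding), so each rank stores roughly 1/world_size of the model; the full tensor exists only transiently around each layer's compute.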

Key job responsibilities
You'll help develop and improve distributed training capabilities in popular machine learning frameworks (PyTorch and JAX) using AWS's specialized AI hardware. Working with our compiler and runtime teams, you'll learn how to optimize ML models to run efficiently on AWS's custom AI chips (Trainium and Inferentia). This is a great opportunity to bridge the gap between ML frameworks and hardware acceleration, while building strong foundations in distributed systems.
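The synchronous data-parallel loop at the heart of the distributed training work described above — each worker computing gradients on its own slice of the batch, then averaging them — can be sketched as a toy, single-process NumPy example. Real frameworks perform the averaging with an all-reduce collective across devices; the linear model and squared-error loss here are illustrative assumptions:

```python
import numpy as np

def local_gradient(w, x, y):
    """Gradient of mean-squared error for a linear model y_hat = x @ w,
    computed on one worker's shard of the batch."""
    err = x @ w - y
    return 2.0 * x.T @ err / len(y)

def all_reduce_mean(grads):
    """Average the per-worker gradients (stand-in for an all-reduce)."""
    return sum(grads) / len(grads)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
x = rng.normal(size=(8, 2))
y = x @ w_true

world_size = 4
x_shards = np.split(x, world_size)   # each worker gets 2 examples
y_shards = np.split(y, world_size)

w = np.zeros(2)
for _ in range(300):                 # synchronous SGD steps
    grads = [local_gradient(w, xs, ys)
             for xs, ys in zip(x_shards, y_shards)]
    w -= 0.1 * all_reduce_mean(grads)
```

Because every shard has the same size, the average of the per-shard gradients equals the full-batch gradient, so the sharded loop takes exactly the steps single-device training would; scaling comes from each worker touching only its own slice of the data.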

We're looking for someone with solid programming skills, enthusiasm for learning complex systems, and basic understanding of machine learning concepts. This role offers excellent growth opportunities in the rapidly evolving field of ML infrastructure.

About the team
Annapurna Labs was a startup acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, then think of Annapurna Labs as the infrastructure provider of AWS. Our organization covers multiple disciplines, including silicon engineering, hardware design and verification, software, and operations. AWS Nitro, ENA, EFA, Graviton and F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage are some of the products we have delivered over the last few years.

BASIC QUALIFICATIONS


- To qualify, applicants should have earned (or will earn) a Bachelor's or Master's degree between December 2022 and September 2025.
- Working knowledge of C++ and Python
- Experience with ML frameworks, particularly PyTorch and/or JAX
- Understanding of parallel computing concepts and CUDA programming

PREFERRED QUALIFICATIONS

- Open source contributions to ML frameworks or tools
- Experience optimizing ML workloads for performance
- Direct experience with PyTorch internals or CUDA optimization
- Hands-on experience with LLM infrastructure tools (e.g., vLLM, TensorRT)

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $99,500/year in our lowest geographic market up to $200,000/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits . This position will remain posted until filled. Applicants should apply via our internal or external career site.

Posted: May 6, 2025 (Updated about 5 hours ago)
