Join a forward-thinking company at the forefront of the AI revolution! This exciting role as a software engineer in the Compiler team for a cutting-edge SDK focuses on optimizing deep learning models for advanced AWS hardware. You will tackle complex compiler optimization challenges, collaborate with diverse teams, and contribute to open-source projects. With a supportive environment that emphasizes mentorship and knowledge-sharing, this position offers a unique opportunity to grow your engineering expertise while making a significant impact in the AI landscape. If you're passionate about technology and eager to innovate, this is the perfect opportunity for you.
Job ID: 2933964 | Amazon Web Services, Inc. - A97
Do you want to be part of the AI revolution? At AWS our vision is to make deep learning pervasive for everyday developers and to democratize access to AI hardware and software infrastructure. To deliver on that vision, we’ve created innovative software and hardware solutions that make it possible. AWS Neuron is the SDK that optimizes the performance of complex ML models executed on AWS Inferentia and Trainium, our custom chips designed to accelerate deep-learning workloads.
This role is for a software engineer in the Compiler team for AWS Neuron. In this role, you will be responsible for building the next-generation Neuron compiler, which transforms ML models written in ML frameworks (e.g., PyTorch, TensorFlow, and JAX) so they can be deployed on AWS Inferentia and Trainium based servers in the Amazon cloud. You will be responsible for solving hard compiler optimization problems to achieve optimum performance for a variety of ML model families, including massive-scale large language models like Llama and DeepSeek, as well as stable diffusion, vision transformers, and multi-modal models. You will be required to understand how these models work inside-out to make informed decisions on how best to coax the compiler into generating optimal implementation instructions. You will leverage your technical communication skills to partner with internal and external customers and stakeholders, and you will be involved in pre-silicon design and in bringing new products and features to market, ultimately making the Neuron compiler highly performant and easy to use.
Experience in object-oriented languages like C++ or Java is a must. Experience with compilers, or with building ML models using ML frameworks on accelerators (e.g., GPUs), is preferred but not required. Experience with technologies like OpenXLA, StableHLO, and MLIR is an added bonus!
Key job responsibilities:
A day in the life:
As you design and code solutions to help our team drive efficiencies in compiler architecture, you’ll create compiler optimization and verification passes, build features that surface the capabilities and peculiarities of AWS accelerators to developers, implement tools to analyze numerical errors, and resolve the root cause of compiler defects. You’ll also participate in design discussions and code reviews, and communicate with internal stakeholders (other Neuron SDK and Amazon-wide teams) and external stakeholders (open-source communities). Lastly, you’ll work in a startup-like development environment, where you’re always working on the most important things.
About the team:
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise, so they feel empowered to take on more complex tasks in the future.
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Posted: March 20, 2025