A leading cloud services provider is seeking an experienced software developer in Italy to join its team focused on machine learning optimization and high-performance computing. The role involves designing and implementing ML kernels, optimizing performance on AWS accelerators, and mentoring engineers. Ideal candidates will have a robust background in software development, a degree in computer science, and experience with GPU optimization. The company offers a collaborative, inclusive environment with a strong emphasis on work-life balance and career development.
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon’s custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team crafts high-performance kernels for ML functions at the hardware-software boundary, maximizing the performance of these accelerators on demanding workloads. The AWS Neuron SDK includes an ML compiler, runtime, and application framework that integrates with popular ML frameworks such as PyTorch to deliver accelerated inference and training. The broader Neuron Compiler organization works across frameworks, compilers, runtime, and collectives, optimizing current performance and contributing to future architecture designs, while engaging directly with customers to enable their models and ensure they run at peak performance. This role offers an opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, helping shape the future of AI acceleration technology.
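As a small illustration of the PyTorch integration mentioned above, the sketch below compiles a toy model with the torch-neuronx package. The model, shapes, and file name are arbitrary placeholders, and exact APIs can differ between Neuron SDK releases; treat this as a hedged sketch of the customer-facing workflow rather than a definitive recipe.

```python
import torch
import torch_neuronx  # AWS Neuron integration for PyTorch (available on Inf2/Trn1 instances)

# A small placeholder model; any traceable torch.nn.Module follows the same flow.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

# Example input used to trace and compile the model ahead of time.
example_input = torch.rand(1, 128)

# Compile the model for the Neuron accelerator via the Neuron compiler.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled artifact is a TorchScript module that can be saved and reloaded.
torch.jit.save(neuron_model, "model_neuron.pt")

# Run accelerated inference on the Neuron device.
output = neuron_model(example_input)
print(output.shape)
```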
Working on these cutting-edge products, you will architect and implement business-critical features, publish research, and mentor a team of engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We are inventing. We are experimenting. The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.
Explore the product and our history!
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://aws.amazon.com/machine-learning/neuron/
https://github.com/aws/aws-neuron-sdk
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success
Our kernel engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the intersection of software, hardware, and machine learning systems, you’ll bring expertise in low-level optimization, system architecture, and ML model acceleration. In this role, you will design and code solutions that help the team drive efficiencies in software architecture, create metrics, implement automation and other improvements, and resolve the root causes of software defects.
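To give a concrete flavor of kernel work at the hardware-software boundary, the sketch below shows a minimal element-wise addition kernel written with the Neuron Kernel Interface (NKI), the publicly documented kernel language in the Neuron SDK. It is an illustration drawn from the public NKI documentation rather than a statement about this team's internal tooling; it assumes both inputs fit in a single on-chip tile, and API details may vary across SDK releases.

```python
import neuronxcc.nki as nki
import neuronxcc.nki.language as nl


@nki.jit
def nki_tensor_add(a_input, b_input):
    """Element-wise addition of two tensors on a NeuronCore.

    Assumes both inputs fit within a single on-chip tile.
    """
    # Allocate the output tensor in device memory (HBM).
    c_output = nl.ndarray(a_input.shape, dtype=a_input.dtype, buffer=nl.shared_hbm)

    # Load the inputs from HBM into on-chip memory (SBUF).
    a_tile = nl.load(a_input)
    b_tile = nl.load(b_input)

    # Compute the element-wise sum on-chip.
    c_tile = a_tile + b_tile

    # Store the result back to device memory and return it to the caller.
    nl.store(c_output, value=c_tile)
    return c_output
```

Production kernels layer tiling, data layout, and scheduling decisions on top of this basic load-compute-store pattern to keep the accelerator's compute engines fed.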
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.