Join a forward-thinking company as a key player in developing cutting-edge AI compiler strategies. This role offers the chance to work on the industry-leading PyTorch AI framework, optimizing performance and deployment for advanced deep learning models. Collaborate closely with AI researchers and hardware design teams to create innovative solutions that push the boundaries of machine learning on next-generation hardware. If you are passionate about AI and eager to contribute to groundbreaking projects, this opportunity is perfect for you.
Meta
Toronto
CAD 80,000 - 140,000
In this role, you will be a member of the MTIA (Meta Training & Inference Accelerator) Software team, part of the broader, industry-leading PyTorch AI framework organization. The MTIA Software team has been developing a comprehensive AI compiler strategy that delivers a highly flexible platform for training and serving new DL/ML model architectures, combined with auto-tuned high performance in production environments across specialized hardware architectures. The compiler stack, DL graph optimizations, and hardware-specific kernel authoring directly impact the performance and deployment velocity of both AI training and inference platforms at Meta.

You will work on one of the core areas, such as PyTorch framework components, the AI compiler and runtime, high-performance kernels, or tooling to accelerate machine learning workloads on current and next-generation MTIA AI hardware platforms. You will work closely with AI researchers to analyze deep learning models and lower them efficiently onto MTIA hardware. You will also partner with hardware design teams to develop compiler optimizations for high performance. You will apply software development best practices to design features, optimizations, and performance-tuning techniques. You will gain valuable experience in developing machine learning compiler frameworks and will help drive next-generation hardware/software co-design for AI domain-specific problems.