Principal Architect – Hardware Efficient AI Foundation Model Training

Huawei Technologies Canada Co., Ltd.

Markham

On-site

CAD 125,000 - 150,000

Full time

7 days ago
Job summary

A leading global technology company in Markham seeks a Principal Architect to lead the design of foundational model architecture for AI applications. The successful candidate will have experience in training AI models at a large scale. This role involves collaboration with internal and external teams to ensure cutting-edge performance and hardware efficiency in AI systems.

Qualifications

  • Experience training and deploying AI models at a scale of 10B+ parameters.
  • Deep understanding of AI algorithm mechanisms.
  • Solid publication records in AI systems or chip design.

Responsibilities

  • Lead design of foundational model architecture for AI subfields.
  • Propose technical requirements for distributed training infrastructures.

Skills

Training and optimizing AI models
Proficiency in AI architecture
Solid command of AI frameworks
Familiarity with AI chip architecture

Education

PhD in AI architecture or related fields

Tools

PyTorch
vLLM
SGLang

Job description

Overview

Huawei Canada has an immediate permanent opening for a Principal Architect.

About the team

The Computing Data Application Acceleration Lab aims to create a leading global data analytics platform and is organized into three specialized teams using innovative programming technologies. This team focuses on full-stack innovations, including software-hardware co-design and optimizing data efficiency at both the storage and runtime layers. The team also develops next-generation GPU architecture for gaming, cloud rendering, VR/AR, and Metaverse applications.

One of the goals of this lab is to enhance algorithm performance and training efficiency across industries, fostering long-term competitiveness.

About the job

  • Collaborate with internal and external organizations to lead the design of foundational model architecture for LLM/Code/Multimodal subfields through breakthroughs in post-training and continual training. Develop a foundational model with state-of-the-art performance and hardware efficiency, and establish industry impact.
  • Propose the technical requirements for large-scale distributed training and inference infrastructures, such as parallelization and operator fusion; analyze the computational characteristics of typical architectures; and ensure the accuracy and advancement of AI hardware and infrastructure evolution.

About the ideal candidate

  • Experience in training and optimizing cutting-edge AI models/applications, especially in training and deploying AI models at a scale of 10B+ parameters.
  • Proficiency in the latest AI architecture (such as long-sequence, reinforcement learning, multimodal, and agents). Deep understanding of AI algorithm mechanisms.
  • Solid command of the underlying implementation of AI frameworks (such as PyTorch, vLLM, and SGLang), and mainstream distributed training and inference techniques.
  • Familiarity with AI chip architecture (such as GPU, NPU, and TPU). Understanding of memory hierarchy and interconnect technologies is an asset.
  • PhD preferred in AI architecture, computer architecture, or related fields.
  • Solid publication records in the field of AI systems or chip design are an asset.