Distinguished Engineer - AI Computing System

Huawei Technologies Canada Co., Ltd.

Burnaby

On-site

CAD 172,000 - 230,000

Full time

2 days ago

Job summary

A leading communication technology firm in Metro Vancouver is looking for a Distinguished Engineer - AI Computing System. This role involves leading AI framework development for large model training. Candidates should have over 5 years of expertise in R&D for large models, proficiency in model structures, and experience with GPU/NPU optimization. The position offers a base salary ranging from $172,000 to $230,000 depending on qualifications and experience.

Qualifications

  • Over 5 years of R&D experience in large model training and optimization.
  • Proficient in model structures like Deepseek and Llama.
  • Experience optimizing AI systems with software-hardware collaboration.

Responsibilities

  • Plan and layout AI frameworks and software features for large model training.
  • Lead team in building key technologies for model training optimization.
  • Collaborate with experts on standards and patents in AI training.

Skills

Large model training optimization
AI frameworks
Cluster computing
Software optimization
Team leadership

Education

Degree in artificial intelligence, computer science, or related field

Tools

GPU
NPU

Job description

Huawei Canada has an immediate permanent opening for a Distinguished Engineer - AI Computing System.

About the team:

The Advanced Computing and Storage Lab, part of the Vancouver Research Centre, explores adaptive computing system architectures to address the challenges posed by flexible and variable application loads. The lab helps ensure the stability and quality of training clusters, builds dynamic cluster configuration strategy solvers, and establishes precision control systems to create stable, efficient computing clusters. It also focuses on key industry AI application scenarios such as large model training and inference, applying key technologies like low-precision training, multi-modal training, and reinforcement learning to analyze bottlenecks and design and develop optimization solutions that improve training and inference performance and usability.

About the job:

  • As an industry-leading expert in training cluster software frameworks and technologies, gain insight into the evolution of industry AI large model training frameworks and their key features. Plan and lay out AI frameworks and software features for scenarios such as large model pre-training, post-training, and integrated training and inference, building key capabilities for the company's training cluster software framework.

  • Focusing on the company's large model training optimization field, lead the team in building key technologies such as low-precision training, parallel strategy tuning, and training resource optimization, and promote the commercial implementation of related large model optimization technologies.

  • For the company's training servers, super nodes, and other products, lead the team in building large model AI training frameworks, operator libraries, acceleration libraries, and other software frameworks and acceleration features, fully leveraging system engineering and software-hardware collaboration to enhance AI cluster computing efficiency.

  • Identify high-quality academic resources in large model training, collaborate with domain experts and scholars on projects, lay out related standards and patents, support the company's continuous innovation in the training cluster field, and build long-term competitiveness in AI training clusters.

  • Cultivate a team of technical experts and key technical contributors in AI training cluster frameworks and software optimization.

The base salary for this position ranges from $172,000 to $230,000, depending on education, experience, and demonstrated expertise.


About the ideal candidate:

  • Major in artificial intelligence, computer science, software, automation, physics, mathematics, electronics, microelectronics, information technology, or related fields, with more than 5 years of R&D experience in large model training and optimization.

  • Proficient in common large model architectures such as Deepseek and Llama, with deep technical expertise in large model training and inference optimization across fields like LLM, MoE, and multimodal learning.

  • Familiar with the hardware architecture and programming systems of AI accelerators such as GPU and NPU, with experience optimizing AI systems through software-hardware collaboration.

  • Familiar with cluster computing and cloud computing, with experience in software architecture design for cluster scheduling.

  • Enjoys research, with strong learning ability, good communication skills, and the ability to work in a team.
