
Member of Technical Staff, Training and Inference

Boson AI

Toronto

On-site

CAD 150,000 - 600,000

Full time

Today

Job summary

An innovative AI startup in Toronto is looking for research scientists and engineers to optimize advanced AI models. Candidates should have expertise in CUDA and distributed optimization, as well as experience with deep learning frameworks. The role covers improving model architectures and implementing efficient training techniques, with a competitive salary ranging from $150,000 to $600,000 annually.

Qualifications

  • Experience in writing clean and efficient code.
  • Proficiency in at least one deep learning framework.
  • Participation in at least one research project related to distributed training.

Responsibilities

  • Optimize model architectures for various data types.
  • Implement and optimize kernels for efficient training.
  • Conduct performance optimizations on the system level.
  • Engage in distributed optimization and training.

Skills

Clean and efficient coding
Deep learning frameworks (PyTorch, JAX)
Distributed optimization

Education

Master's or Doctoral degree in computer science or equivalent

Tools

CUDA
Triton

Job description

Overview

Boson AI is an early-stage startup building large audio models for everyone to enjoy and use. Our founders (Alex Smola and Mu Li) and a team of Deep Learning, Optimization, NLP, and Statistics scientists and engineers are working on high-quality generative AI models for language and beyond.

We are seeking research scientists and engineers to join our team full-time in our Santa Clara office. In this role, you will implement and improve distributed optimization algorithms, performance-tune architectures, improve inference, and help us make deep networks run efficiently on our cluster. The ideal candidate will have a strong background in CUDA, Triton, PyTorch, distributed optimization, and deep learning architectures.

We encourage you to apply even if you do not believe you meet every single qualification. As long as you are motivated to learn and join the development of foundation models, we’d love to chat.

Responsibilities
  • Optimize model architectures and loss objectives to handle combinations of images, video, text, speech, and audio data.
  • Implement and optimize kernels for efficient training on Hopper and Blackwell GPUs (see the sketch after this list).
  • Performance optimization (floating point formats, sparsity, systems level optimization).
  • Distributed optimization and training.
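
As a rough illustration of the kernel work above, here is a minimal Triton example of a fused elementwise kernel. This is not Boson AI code; the kernel, block size, and tensor shapes are arbitrary choices made purely for the sketch.

```python
# Illustrative sketch only: a minimal Triton kernel (fused elementwise add).
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # one program instance per block of elements
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the tail of the tensor
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                    # enough programs to cover every element
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```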
Qualifications
  • Experience in writing clean and efficient code.
  • Master's or Doctoral degree in computer science or equivalent.
  • Proficiency in at least one deep learning framework, such as PyTorch or JAX.
  • Participation in at least one research project related to distributed training or inference.
Strong candidates may also have
  • Experience in implementing your own kernels in CUDA or another compiler/toolkit (Triton, ThunderKittens, PTX, etc.).
  • Experience in distributed optimization (e.g. using DeepSpeed or FSDP), ideally designing performance optimizations (see the sketch after this list).
  • Experience in computer networking (e.g. InfiniBand, SHARP).
  • Experience in handling billion-scale datasets.
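
For the distributed-optimization experience mentioned above, a minimal PyTorch FSDP sketch is shown below. The toy model, hyperparameters, and torchrun-based launch are assumptions made for illustration, not a description of Boson AI's training stack.

```python
# Illustrative sketch only: wrapping a toy model in FSDP for sharded training.
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")                   # expects torchrun to set rank/world size env vars
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    model = nn.Sequential(nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 4096)).cuda()
    model = FSDP(model)                               # shard parameters, gradients, and optimizer state across ranks

    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)  # construct the optimizer after wrapping
    x = torch.randn(8, 4096, device="cuda")
    loss = model(x).pow(2).mean()                     # dummy objective for the sketch
    loss.backward()
    optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nproc_per_node=8 train_sketch.py`, each rank holds only a shard of the parameters and gathers them on demand during forward and backward passes.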

$150,000 - $600,000 a year

Total compensation includes base pay, equity, and benefits.
