
A cutting-edge AI technology company based in Toronto is looking for an experienced engineer to develop high-performance software for their innovative distributed system architecture. The role requires expertise in low-level programming (C/C++), multithreading, and performance optimization. You'll tackle challenges related to processing large volumes of model and training data, leveraging hardware resources to scale neural networks with over 100 trillion parameters. Join us to be at the forefront of AI advancements while enjoying a supportive work culture.
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
You will develop high-performance code running on the distributed system consisting of the CS-2 and multiple heterogeneous servers. Feeding model and training data to the CS-2 in the Weight Streaming regime, as well as CPU-side processing of this data, is a huge challenge that requires optimized data structures and algorithms that take full advantage of the available hardware resources, including CPU, memory, storage, and network bandwidth.
The software must be built with a high degree of concurrency across threads, processes, cores, and systems. It is central to the Cerebras Weight Streaming architecture which scales to brain-sized neural networks with over 100T parameters.
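To give a flavor of the kind of CPU-side, multithreaded data-feeding work this role describes, here is a minimal illustrative sketch (not Cerebras code): producer threads prepare training-data batches while a streaming thread drains a bounded queue, a common pattern for keeping an accelerator fed without exhausting host memory. All names here (Batch, BoundedQueue, the fake preprocessing and streaming steps) are hypothetical placeholders.

```cpp
// Hypothetical producer/consumer pipeline sketch: CPU workers prepare batches,
// one streaming thread drains them. Illustrative only.
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

struct Batch {
    std::vector<float> data;  // placeholder for preprocessed tensor data
    std::uint64_t id = 0;
};

// Bounded queue: producers block when full (backpressure), consumer blocks when empty.
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(Batch b) {
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [&] { return q_.size() < capacity_ || closed_; });
        if (closed_) return;
        q_.push(std::move(b));
        not_empty_.notify_one();
    }

    std::optional<Batch> pop() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;  // closed and fully drained
        Batch b = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return b;
    }

    void close() {
        std::lock_guard<std::mutex> lock(m_);
        closed_ = true;
        not_empty_.notify_all();
        not_full_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
    std::queue<Batch> q_;
    std::size_t capacity_;
    bool closed_ = false;
};

int main() {
    BoundedQueue queue(/*capacity=*/8);

    // Producer threads: stand-ins for CPU-side preprocessing of training data.
    std::vector<std::thread> producers;
    for (int p = 0; p < 4; ++p) {
        producers.emplace_back([&queue, p] {
            for (std::uint64_t i = 0; i < 16; ++i) {
                Batch b;
                b.id = static_cast<std::uint64_t>(p) * 100 + i;
                b.data.assign(1024, static_cast<float>(i));  // fake preprocessing work
                queue.push(std::move(b));
            }
        });
    }

    // Consumer thread: stand-in for streaming batches over the network to the accelerator.
    std::thread consumer([&queue] {
        std::size_t streamed = 0;
        while (auto b = queue.pop()) {
            ++streamed;  // in real code: serialize and send over the wire
        }
        std::cout << "streamed " << streamed << " batches\n";
    });

    for (auto& t : producers) t.join();
    queue.close();
    consumer.join();
    return 0;
}
```

The bounded queue provides backpressure so preprocessing threads cannot outrun the streaming side and exhaust host memory; a production pipeline would additionally overlap I/O, pin memory, and balance work across NUMA domains and network links.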
Read our blog: Five Reasons to Join Cerebras in 2025.
Apply today and become part of the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.