Staff LLMOps Engineer - Cloud / AI Infrastructure

TEEMA Solutions Group

Toronto

Hybrid

CAD 120,000 - 160,000

Full time

Job summary

A fast-growing technology firm in Toronto is seeking a Staff LLMOps Engineer to lead the design and optimization of large language model infrastructure in the cloud. The ideal candidate has 6+ years of DevOps experience and expertise in deploying LLMs in cloud environments. Responsibilities include architecting deployment pipelines and scaling high-performance AI inference. Competitive salary and equity are included in the offer.

Benefits

Competitive salary
Meaningful equity
Innovative work culture

Qualifications

  • 6+ years in DevOps or cloud platform engineering.
  • 2+ years of experience deploying LLMs.
  • Expertise with GPU-accelerated inference.

Responsibilities

  • Architect and operationalize LLM deployment pipelines on AWS.
  • Build and scale multi-GPU inference infrastructure.
  • Optimize inference performance using various frameworks.

Skills

DevOps expertise
ML infrastructure knowledge
Cloud platform experience
Python proficiency
Monitoring tools integration

Tools

AWS
Kubernetes
Terraform
Prometheus
Grafana

Job description

Location: Downtown Toronto
Hybrid: 4 days in office

Ready to build what powers the next generation of AI?

We’re looking for a Staff LLMOps Engineer to lead the design, deployment, and optimization of large language model (LLM) infrastructure in the cloud.
You’ll be the driving force behind taking trained models from lab to production—scaling efficiently across multi-GPU clusters and pushing the boundaries of inference performance for enterprise-grade AI applications.

If you thrive at the intersection of AI, cloud engineering, and systems optimization, this is your chance to shape the future of large-scale model serving in a high-impact environment.

What You’ll Do

Architect and operationalize LLM deployment pipelines on AWS and Kubernetes/EKS.

Build and scale multi-GPU inference infrastructure for low latency, high availability, and cost efficiency.

Optimize inference using frameworks like vLLM, SGLang, and DeepSpeed-Inference.
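
As a concrete illustration of this kind of work, here is a minimal serving sketch using vLLM's offline inference API; the model name and tensor-parallel degree are illustrative assumptions, not requirements of the role:

    from vllm import LLM, SamplingParams

    # Shard one model across four GPUs with tensor parallelism (illustrative size).
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=4)

    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate(["Summarize LLMOps in one sentence."], params)
    print(outputs[0].outputs[0].text)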

Implement advanced serving techniques: continuous batching, speculative decoding, KV-cache management, and distributed scheduling.
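
For readers unfamiliar with one of these techniques, the accept/reject loop at the heart of speculative decoding looks roughly like the toy sketch below; the draft and target models are stubs, and a production system would verify proposals against real model logits:

    import random

    def draft_model(prefix, k=4):
        # A cheap draft model proposes k candidate tokens (stubbed as random picks).
        return [random.choice("abcd") for _ in range(k)]

    def target_accepts(prefix, token):
        # The expensive target model verifies each proposal (stubbed as a coin flip).
        return random.random() < 0.8

    def speculative_step(prefix):
        accepted = []
        for token in draft_model(prefix):
            if target_accepts(prefix + "".join(accepted), token):
                accepted.append(token)   # keep verified tokens
            else:
                break                    # first rejection ends the speculative run
        return accepted

    print(speculative_step("hello "))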

Collaborate with AI researchers to convert model training outputs into production-grade APIs and services.
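
A hypothetical sketch of what "production-grade API" means here, using FastAPI; the endpoint path and the generate() stub are placeholders for a real model call:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class GenerateRequest(BaseModel):
        prompt: str

    def generate(prompt: str) -> str:
        # Placeholder for a real call into the model-serving layer.
        return prompt.upper()

    @app.post("/v1/generate")
    def generate_endpoint(req: GenerateRequest) -> dict:
        return {"completion": generate(req.prompt)}

    # Run with: uvicorn main:app --host 0.0.0.0 --port 8000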

Establish observability and monitoring for latency, throughput, GPU utilization, and failure recovery.
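
A minimal sketch of what such observability might look like with the Python prometheus_client library; the metric names and the nvidia-smi query are assumptions for illustration:

    import subprocess
    import time
    from prometheus_client import Gauge, Histogram, start_http_server

    LATENCY = Histogram("inference_latency_seconds", "End-to-end request latency")
    GPU_UTIL = Gauge("gpu_utilization_percent", "GPU utilization", ["gpu"])

    def sample_gpu_utilization():
        # Read per-GPU utilization via nvidia-smi (assumes the tool is installed).
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"], text=True)
        for idx, value in enumerate(out.strip().splitlines()):
            GPU_UTIL.labels(gpu=str(idx)).set(float(value))

    if __name__ == "__main__":
        start_http_server(9100)      # endpoint for Prometheus to scrape
        while True:
            with LATENCY.time():     # records one latency observation
                time.sleep(0.05)     # placeholder for a real inference call
            sample_gpu_utilization()
            time.sleep(5)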

Automate provisioning, scaling, and upgrades using Terraform and CI/CD pipelines.
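
Terraform itself is written in HCL; as a language-consistent stand-in, this sketch shows the same kind of automation driven from a CI/CD job using boto3, with hypothetical cluster and nodegroup names:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    def scale_gpu_nodegroup(desired: int) -> None:
        # Resize the managed nodegroup backing the inference fleet.
        eks.update_nodegroup_config(
            clusterName="llm-inference",    # hypothetical cluster name
            nodegroupName="gpu-workers",    # hypothetical nodegroup name
            scalingConfig={"minSize": 1, "maxSize": 16, "desiredSize": desired},
        )

    scale_gpu_nodegroup(desired=8)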

Ensure compliance, security, and efficiency in multi-tenant LLM hosting for enterprise clients.

What We’re Looking For

6+ years in DevOps, ML infrastructure, or cloud platform engineering.

2+ years of direct experience deploying and optimizing LLMs or large-scale ML models.

Expertise with GPU-accelerated inference and distributed serving environments.

Deep familiarity with cloud-native architectures (AWS, GCP, Azure) and Kubernetes.

Strong foundation in Python, Bash, and IaC (Terraform).

Experience integrating monitoring tools (Prometheus, Grafana, Datadog) for performance visibility.

Passion for building robust, scalable, and secure AI systems.

Why Join

Lead and own mission-critical AI infrastructure at a fast-scaling startup.

Work alongside world-class engineers, data scientists, and innovators.

Competitive salary + meaningful equity in a company redefining applied AI.

A culture built on innovation, technical depth, and impact—your work truly matters.
