Cohere is seeking a Senior Tech Lead Manager to drive the optimization of its advanced AI models. In this pivotal role, you will lead a specialized team focused on enhancing model efficiency through architectural innovations and compute optimizations. Your expertise in ML frameworks and accelerator architectures will be crucial to maximizing throughput and minimizing latency. This position offers the chance to work at the forefront of AI technology, collaborating with diverse teams to shape the future of intelligent systems. If you are passionate about making a significant impact in the AI landscape, this is the opportunity for you.
Who are we?
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
The Model Efficiency team drives the optimization of Cohere’s base models through architectural innovations, framework optimizations, and compute efficiency improvements. As a Senior Tech Lead Manager of the Model Efficiency team, you will spearhead technical strategy while leading a team of specialized engineers focused on maximizing inference throughput, minimizing latency, and optimizing model deployment across accelerator architectures.
Architect comprehensive technical roadmaps with quantifiable performance metrics (TFLOP/s utilization, memory bandwidth efficiency, latency/throughput KPIs) aligned with product requirements and computational constraints
Identify and address technical competency gaps through strategic hiring of ML systems engineers with specialized expertise in model optimization, quantization, pruning, and accelerator-specific kernels
Demonstrate expert-level understanding of ML accelerator architectures (GPUs, TPUs, custom ASICs), memory hierarchies, and hardware-aware optimizations including tensor core utilization and parallel computation patterns
Lead critical technical decisions regarding inference frameworks (TensorRT, ONNX Runtime, PyTorch JIT, etc.), quantization strategies (INT8/INT4/mixed precision), and operator fusion techniques based on rigorous benchmarking (an illustrative benchmarking sketch follows this list)
Collaborate with MLOps, infrastructure teams, and ML researchers to define optimization requirements, establish performance baselines, and integrate optimizations into production systems
Implement agile methodologies for workload prioritization with emphasis on critical path optimization and resource allocation across concurrent optimization workstreams
Provide technical mentorship on systems-level optimization approaches, profiling techniques, and hardware-software co-design principles
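For a concrete flavor of the benchmarking work referenced above, here is a minimal, illustrative PyTorch sketch comparing an FP32 block against a dynamically quantized INT8 copy on CPU. The layer sizes, iteration counts, and module names are hypothetical and are not drawn from Cohere's actual stack:

import time
import torch
import torch.nn as nn

def bench(model, x, iters=50):
    """Average forward-pass latency in milliseconds (after warm-up)."""
    with torch.inference_mode():
        for _ in range(5):
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        return (time.perf_counter() - start) / iters * 1e3

# A hypothetical FP32 feed-forward block standing in for a real model.
fp32_model = nn.Sequential(
    nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096)
).eval()

# Swap the Linear layers for dynamically quantized INT8 equivalents (CPU-only path).
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(8, 4096)
print(f"fp32: {bench(fp32_model, x):.2f} ms/iter")
print(f"int8: {bench(int8_model, x):.2f} ms/iter")

In practice the team would profile real serving workloads rather than a toy MLP, but the loop structure (warm-up, timed iterations, per-variant comparison) is the same.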
What we're looking for:
3+ years managing engineering teams with demonstrable impact on system performance metrics and team growth
Extensive experience with transformer architecture optimizations, attention mechanism enhancements, and KV-cache optimization techniques (a toy KV-cache sketch follows this list)
Proven track record implementing LLM inference optimizations such as continuous batching, speculative decoding, or decoder-specific memory optimizations
Deep technical expertise in at least one ML framework's execution pipeline (PyTorch, JAX, TensorFlow) and corresponding compiler stack
Strong communication skills for translating complex technical tradeoffs into business impact metrics and explaining optimization strategies across technical domains
Demonstrated ability to build high-performance technical teams through targeted recruitment and growth-oriented management practices
Proficiency managing dynamic technical priorities under shifting requirements while maintaining optimization momentum and team velocity
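As a toy illustration of the KV-cache techniques mentioned above, the sketch below caches keys and values per decoded token so that each step attends over the stored prefix rather than recomputing it. All names, shapes, and sizes are made up for illustration and do not describe Cohere's implementation:

import torch
import torch.nn.functional as F

class ToyKVCache:
    """Append-only key/value cache for one attention layer (batch size 1)."""
    def __init__(self, num_heads, head_dim, max_len):
        self.k = torch.zeros(1, num_heads, max_len, head_dim)
        self.v = torch.zeros(1, num_heads, max_len, head_dim)
        self.len = 0

    def append(self, k_new, v_new):
        # k_new, v_new: (1, num_heads, t, head_dim) for the newest t token(s).
        t = k_new.shape[2]
        self.k[:, :, self.len:self.len + t] = k_new
        self.v[:, :, self.len:self.len + t] = v_new
        self.len += t

    def attend(self, q_new):
        # The newest query attends over every cached position so far, so
        # per-token cost grows linearly with sequence length instead of
        # re-running attention over the whole prefix each step.
        return F.scaled_dot_product_attention(
            q_new, self.k[:, :, :self.len], self.v[:, :, :self.len]
        )

# One hypothetical decoding step.
heads, dim = 8, 64
cache = ToyKVCache(heads, dim, max_len=2048)
q = torch.randn(1, heads, 1, dim)
k = torch.randn(1, heads, 1, dim)
v = torch.randn(1, heads, 1, dim)
cache.append(k, v)
out = cache.attend(q)
print(out.shape, "cached tokens:", cache.len)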
If some of the above doesn't line up perfectly with your experience, we still encourage you to apply! If you want to work really hard on a glorious mission with teammates who want the same thing, Cohere is the place for you.
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Full-Time Employees at Cohere enjoy these Perks:
An open and inclusive culture and work environment
Work closely with a team on the cutting edge of AI research
Weekly lunch stipend, in-office lunches & snacks
Full health and dental benefits, including a separate budget to take care of your mental health
100% Parental Leave top-up for 6 months for employees based in Canada, the US, and the UK
Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
Remote-flexible, with offices in Toronto, New York, San Francisco, and London, plus a co-working stipend
6 weeks of vacation
Note: This post is co-authored by both Cohere humans and Cohere technology.