Research Engineer, Large Scale Pre-Training Performance Engineering

DeepMind Technologies Limited

London

On-site

GBP 60,000 - 100,000

Full time

Job summary

Google DeepMind is seeking a Research Engineer to redefine the efficient training of frontier LLMs. This role offers a unique opportunity to influence the design and training of state-of-the-art models on cutting-edge hardware. Join a diverse team committed to advancing artificial intelligence for public benefit, where your contributions will drive significant impact in the field. If you're passionate about optimizing performance and collaborating with experts, this position is perfect for you.

Qualifications

  • Proven track record in distributed training of LLMs at massive scale.
  • Experience with GPU/TPU programming and performance optimization.

Responsibilities

  • Optimize performance of LLM models on hardware accelerators.
  • Collaborate with teams to ensure efficient training at scale.

Skills

Distributed Training of LLMs
GPU/TPU Programming
ML Frameworks (JAX, PyTorch)
Low-Level Programming (CUDA, OpenCL)
Python

Job description

London, UK

Snapshot

We are seeking a Research Engineer to define, drive, and critically contribute to the next generation of state-of-the-art ML models on TPU. As part of the Pre-Training team, you will co-design the model and implement critical components across model architecture, ML frameworks, custom kernels, and the platform to deliver frontier models with maximum efficiency.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

We’re looking for a Research Engineer to redefine efficient training of frontier LLMs at massive scale. This role offers an opportunity to influence the design of frontier LLMs and to drive the effort to ensure efficient training and inference.

Key responsibilities:
  • Take responsibility for Pre-Training efficiency and optimize the performance of the latest models on Google’s fleet of hardware accelerators throughout the entire LLM research, training and deployment lifecycle.
  • Substantially improve the performance of LLMs on hardware accelerators by optimizing at all levels, including developing custom kernels when necessary.
  • Collaborate with the compiler, framework, and platform teams to ensure efficient training at the industry's largest scale.
  • Profile models to identify performance bottlenecks and opportunities for optimization.
  • Develop low-level custom kernels to maximize the performance of the most critical operators.
  • Collaborate with research teams to enable new critical operators ahead of their availability in frameworks and compilers.

About You

To set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • A proven track record of critical contributions to the distributed training of LLMs at 1e25 FLOPs scale on modern GPU/TPU clusters
  • Experience programming hardware accelerators (GPUs/TPUs) via ML frameworks (e.g. JAX, PyTorch) and low-level programming models (e.g. CUDA, OpenCL)
  • Experience in leveraging custom kernels and compiler infrastructure to improve performance on hardware
  • Experience with Python and neural network training (publications, open-source projects, relevant work experience, etc.)

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

