Member of Engineering (Pre-training)

poolside

United Kingdom

Remote

GBP 80,000 - 100,000

Full time

8 days ago

Job summary

A leading AI company in the UK is looking for a skilled engineer to join its pre-training team, which focuses on Large Language Models. The successful candidate will implement LLM architectures and distributed training code, and will benefit from a fully remote work environment, flexible hours, and a strong commitment to quality. This role offers the opportunity to work with cutting-edge technology in a diverse and inclusive culture.

Job description

ABOUT POOLSIDE

In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.

poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.

ABOUT OUR TEAM

We are a remote-first team that sits across Europe and North America and comes together in person once a month for 3 days, and for longer offsites twice a year.

Our R&D and production teams are a combination of more research- and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

ABOUT THE ROLE

You will be working on our pre-training team, focused on building out our distributed training of Large Language Models and on major architecture changes. This is a hands-on role where you'll be implementing LLM architectures (dense & sparse) and distributed training code all the way from data to tensor parallelism, while researching potential optimizations (from basic operations to communication) and new architectures & distributed training strategies. You will have access to thousands of GPUs in this team.
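To make the "data to tensor parallelism" spectrum concrete, here is a toy sketch of the tensor-parallel end: a linear layer's weight matrix is split column-wise across shards (one per device), each shard computes its partial matmul independently, and the outputs are concatenated as an all-gather would do on real hardware. This is plain NumPy standing in for sharded GPU kernels, and is an illustration of the general technique, not code from poolside's stack.

```python
import numpy as np

def column_parallel_linear(x, weight, n_shards):
    """Toy column-wise tensor parallelism: split `weight` into `n_shards`
    column blocks (one per "device"), compute each partial matmul
    independently, then concatenate the partial outputs (the all-gather)."""
    shards = np.array_split(weight, n_shards, axis=1)
    partial_outputs = [x @ w_shard for w_shard in shards]
    return np.concatenate(partial_outputs, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))   # batch of activations
w = rng.standard_normal((16, 32))  # full (unsharded) weight matrix

sharded = column_parallel_linear(x, w, n_shards=4)
# The sharded computation matches the unsharded matmul.
assert np.allclose(sharded, x @ w)
```

In a real training stack the shards would live on different GPUs and the concatenation would be a collective communication op; the point of the sketch is only that the math partitions cleanly.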

YOUR MISSION

To train the best foundational models for source code generation in the world in minimum time and with maximum hardware utilization.

RESPONSIBILITIES
  • Follow the latest research on LLMs and source code generation. Propose and evaluate innovations, in both the quality and the efficiency of training.

  • Do LLM-Ops: babysitting and analyzing the experiments, iterating

  • Write high-quality Python, Cython, C/C++, Triton, CUDA code

  • Work in the team: plan future steps, discuss, and always stay in touch

SKILLS & EXPERIENCE
  • Experience with Large Language Models (LLM)

    • Deep knowledge of Transformers is a must

    • Knowledge/Experience with cutting-edge training tricks

    • Knowledge/Experience of distributed training

    • Trained LLMs from scratch

    • Coded LLMs from scratch

    • Knowledge of deep learning fundamentals

  • Strong machine learning and engineering background

  • Research experience

    • Authorship of scientific papers on any of these topics (applied deep learning, LLMs, source code generation, etc.) is a nice-to-have

    • Can freely discuss the latest papers and descend into the fine details

    • Is reasonably opinionated

  • Programming experience

    • Linux

    • Strong algorithmic skills

    • Python with PyTorch or Jax

    • C/C++, CUDA, Triton

    • Uses modern tools and is always looking to improve

    • Strong critical thinking and ability to question code quality policies when applicable

    • Prior experience in non-ML programming, especially outside Python, is a nice-to-have

PROCESS
  • Intro call with one of our Founding Engineers

  • Technical Interview(s) with one of our Founding Engineers

  • Team fit call with the People team

  • Final interview with Eiso, our CTO & Co-Founder

BENEFITS
  • Fully remote work & flexible hours

  • 37 days/year of vacation & holidays

  • Health insurance allowance for you and dependents

  • Company-provided equipment

  • Wellbeing, always-be-learning and home office allowances

  • Frequent team get-togethers

  • Great diverse & inclusive people-first culture
