
A leading AI solutions provider is seeking an experienced AI Engineer to join their team. This role involves training large language models from scratch, optimizing GPU systems, and working on innovative architectures like Mixture-of-Experts. The ideal candidate should have solid experience with GPU training, PyTorch, and distributed frameworks. The offer includes a competitive salary of up to £200k plus equity, with a 100% remote working model.
Do you want to build frontier-level LLMs from scratch?
Have you worked on large-scale GPU training, Triton/CUDA, or MoE systems?
Are you ready to join one of Europe’s most technical deep-learning teams?
A Europe-based deep learning company is building the next generation of foundation models. Think of a smaller, faster, highly technical version of the major frontier labs – focused on LLM/VLM training, GPU efficiency, safety layers, and advanced architectures. They are preparing for their next funding milestone and operate with an extremely high technical bar.
They are hiring an AI Engineer to focus on training, scaling, and optimising large models. This role is hands-on, research-driven, and sits at the core of model creation. The AI Engineer will train LLMs and VLMs from scratch, optimise distributed GPU systems, and contribute to new architectures including Mixture-of-Experts and multimodal pipelines. You’ll work closely with a small team of world-class engineers on one of the most technical problems in AI.
Interested? Please apply below.