
A leading AI cloud infrastructure company is seeking a senior ML Engineer specializing in large language models (LLMs). The role involves architecting distributed training and inference pipelines, implementing CUDA kernels, and optimizing performance for multi-GPU setups. Ideal candidates will have a strong grasp of machine learning theory and modern frameworks like JAX and PyTorch, along with software engineering skills. This position offers competitive salary, professional growth opportunities, and a collaborative work environment in Greater London.
Amsterdam, Netherlands; London, United Kingdom; Remote - Europe
Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
This role is for Nebius AI R&D, a team focused on applied research and the development of AI-heavy products. Examples of applied research that we have recently published include:
One example of an AI product that we are deeply involved in is Nebius AI Studio — an inference and fine-tuning platform for AI models.
We are currently searching for senior and staff-level ML engineers to work on optimizing training and inference performance in large-scale multi-GPU, multi-node setups. This role requires expertise in distributed systems and high-performance computing to build, optimize, and maintain robust pipelines for training and inference.
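To illustrate the kind of problem this role centers on: the core step of data-parallel multi-GPU training is averaging gradients across workers (an "all-reduce"). Production pipelines do this with NCCL via torch.distributed or JAX's collective operations; the sketch below only simulates the communication pattern with plain Python lists, and all worker counts and gradient values are hypothetical.

```python
# Minimal sketch of the gradient-averaging ("all-reduce") step at the heart
# of data-parallel training. Real multi-GPU pipelines use NCCL collectives;
# here we simulate workers with plain Python lists (values are hypothetical).

def all_reduce_mean(per_worker_grads):
    """Average gradients elementwise across workers; every worker
    then receives an identical copy of the averaged gradient."""
    n_workers = len(per_worker_grads)
    n_params = len(per_worker_grads[0])
    avg = [sum(g[i] for g in per_worker_grads) / n_workers
           for i in range(n_params)]
    # Each worker ends up with the same result, as after a real all-reduce.
    return [list(avg) for _ in range(n_workers)]

if __name__ == "__main__":
    # Four simulated workers, each holding a local gradient of two parameters.
    grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
    reduced = all_reduce_mean(grads)
    print(reduced[0])  # [4.0, 5.0]
```

In a real system this exchange is bandwidth-bound, which is why ring and tree all-reduce algorithms, overlap of communication with backward computation, and custom CUDA kernels matter at scale.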
Your responsibilities will include:

We’re growing and expanding our products every day. If you’re up to the challenge and are as excited about AI and ML as we are, join us!