Machine Learning Performance Engineer

Jane Street

City of London

On-site

GBP 125,000 - 150,000

Full time

5 days ago

Job summary

A leading financial technology firm in London is seeking an engineer with expertise in low-level systems programming to join their ML team. The ideal candidate will optimise model performance and system integration, focusing on efficient real-time processing. Candidates with knowledge of modern ML techniques, CUDA, and distributed GPU training are encouraged to apply. This role offers a unique opportunity to combine engineering and finance.

Qualifications

  • Experience debugging training run performance end to end.
  • Intuition about CUDA graph launch and tensor core arithmetic.
  • Inventive approach to problem-solving.

Responsibilities

  • Optimise model performance across training and inference.
  • Integrate systems beyond CUDA, including storage and networking.

Skills

Understanding of modern ML techniques and toolsets
Low-level GPU knowledge of PTX, SASS, and memory hierarchy
Debugging and optimisation experience with CUDA tools
Knowledge of Triton, cuDNN, cuBLAS
Familiarity with InfiniBand and NVLink

Tools

CUDA-GDB
Nsight Systems
Nsight Compute

Job description
Overview

We are looking for an engineer with experience in low-level systems programming and optimisation to join our growing ML team.

Machine learning is a critical pillar of Jane Street's global business. Our ever-evolving trading environment serves as a unique, rapid-feedback platform for ML experimentation, allowing us to incorporate new ideas with relatively little friction.

Your part here is optimising the performance of our models – both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting part needs a whole-systems approach, including storage systems, networking and host- and GPU-level considerations. Zooming in, we also want to ensure our platform makes sense even at the lowest level – is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long?

If you’ve never thought about a career in finance, you’re in good company. If you have a curious mind and a passion for solving interesting problems, we have a feeling you’ll fit right in.

Responsibilities

Optimise model performance and system integration across training and inference, taking a whole-systems approach that extends beyond CUDA to storage, networking, and host- and GPU-level considerations.

Qualifications
  • An understanding of modern ML techniques and toolsets
  • The experience and systems knowledge required to debug a training run’s performance end to end
  • Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores and the memory hierarchy
  • Debugging and optimisation experience using tools like CUDA-GDB, Nsight Systems and Nsight Compute
  • Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN and cuBLAS
  • Intuition about the latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronization and asynchronous memory loads
  • Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimisation and NVLink, and how to use these networking technologies to link up GPU clusters
  • An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI
  • An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools

