A leading financial technology firm in London is seeking an engineer with expertise in low-level systems programming to join their ML team. The ideal candidate will optimise model performance and system integration, with a focus on efficient real-time processing. Candidates with knowledge of modern ML techniques, CUDA, and distributed GPU training are encouraged to apply. This role offers a unique opportunity to combine engineering and finance.
We are looking for an engineer with experience in low-level systems programming and optimisation to join our growing ML team.
Machine learning is a critical pillar of Jane Street's global business. Our ever-evolving trading environment serves as a unique, rapid-feedback platform for ML experimentation, allowing us to incorporate new ideas with relatively little friction.
Your part here is optimising the performance of our models, across both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems, and high-throughput inference in research. Part of this is improving straightforward CUDA, but the more interesting work requires a whole-systems approach, taking in storage systems, networking, and host- and GPU-level considerations. Zooming in, we also want to ensure our platform makes sense even at the lowest level: is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long?
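To give a flavour of that lowest-level question, here is a minimal, purely illustrative sketch of the kind of microbenchmark that probes it: a single-threaded pointer chase that estimates the latency of dependent loads served from L2, using __ldcg to bypass L1. The buffer size, stride, and iteration counts are arbitrary assumptions and not a description of any particular system at Jane Street.

```cuda
// Illustrative sketch: estimate cycles per dependent load from L2.
// Assumes the ~2-4 MiB working set fits in L2 on the target GPU.
#include <cstdio>
#include <cuda_runtime.h>

constexpr unsigned kElems  = 1u << 20;   // 4 MiB of uint32_t indices
constexpr unsigned kStride = 16;         // 64-byte hops: one hop per cache line
constexpr int      kIters  = 1 << 16;    // dependent loads to time

__global__ void chase(const unsigned* __restrict__ next,
                      long long* cycles, unsigned* sink) {
    unsigned idx = 0;
    // Warm-up pass so the whole chain is resident in L2 before timing.
    for (int i = 0; i < kIters; ++i) idx = __ldcg(&next[idx]);

    long long start = clock64();
    for (int i = 0; i < kIters; ++i) idx = __ldcg(&next[idx]);  // serially dependent loads
    long long stop = clock64();

    *cycles = stop - start;
    *sink   = idx;  // keep the chain from being optimised away
}

int main() {
    unsigned* h_next = new unsigned[kElems];
    for (unsigned i = 0; i < kElems; ++i)
        h_next[i] = (i + kStride) % kElems;  // simple strided pointer chain

    unsigned *d_next, *d_sink;
    long long* d_cycles;
    cudaMalloc(&d_next, kElems * sizeof(unsigned));
    cudaMalloc(&d_sink, sizeof(unsigned));
    cudaMalloc(&d_cycles, sizeof(long long));
    cudaMemcpy(d_next, h_next, kElems * sizeof(unsigned), cudaMemcpyHostToDevice);

    chase<<<1, 1>>>(d_next, d_cycles, d_sink);  // single thread: latency, not throughput
    cudaDeviceSynchronize();

    long long cycles = 0;
    cudaMemcpy(&cycles, d_cycles, sizeof(long long), cudaMemcpyDeviceToHost);
    printf("~%.1f cycles per dependent load (L1 bypassed via __ldcg)\n",
           double(cycles) / kIters);

    cudaFree(d_next); cudaFree(d_sink); cudaFree(d_cycles);
    delete[] h_next;
    return 0;
}
```

The single-thread, serially dependent chain isolates latency rather than throughput; answering the goodput question above would instead saturate many warps and compare achieved bandwidth against the hardware's peak.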
If you’ve never thought about a career in finance, you’re in good company. If you have a curious mind and a passion for solving interesting problems, we have a feeling you’ll fit right in.
Responsibilities centre on optimising model performance and system integration across training and inference, with a focus on whole-systems approaches that go beyond CUDA to storage, networking, and host- and GPU-level considerations.