
AI Inference Engineer (London)

Pantera Capital

City Of London

On-site

GBP 80,000 - 100,000

Full time

10 days ago


Job description

Location

London

Employment Type

Full time

Department

AI

We are looking for an AI Inference Engineer to join our growing team. Our current stack includes Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.

Responsibilities

  • Develop APIs for AI inference that will be used by both internal and external customers
  • Benchmark and address bottlenecks throughout our inference stack
  • Improve the reliability and observability of our systems and respond to system outages
  • Explore novel research and implement LLM inference optimizations

Qualifications

  • Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)
  • Familiarity with common LLM architectures and inference optimization techniques (e.g. continuous batching, quantization)
  • Understanding of GPU architectures or experience with GPU kernel programming using CUDA
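For context on one technique the qualifications mention: continuous batching lets new requests join a running decode batch as soon as a sequence finishes, rather than waiting for the whole batch to drain as static batching does. A toy, framework-free sketch of the scheduling idea (illustrative only; the `Request` type and function name are hypothetical, not from this posting):

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Request:
    rid: int            # request id
    tokens_needed: int  # tokens this request wants generated
    generated: int = 0  # tokens produced so far


def continuous_batching(requests, max_batch=2):
    """Each decode step, finished sequences leave the batch and queued
    requests immediately take the freed slots."""
    queue = deque(requests)
    active, finished, steps = [], [], 0
    while queue or active:
        # Fill free batch slots from the waiting queue.
        while queue and len(active) < max_batch:
            active.append(queue.popleft())
        # One decode step: every active sequence emits one token.
        steps += 1
        for r in active:
            r.generated += 1
        # Retire finished sequences so new ones can join next step.
        for r in [r for r in active if r.generated >= r.tokens_needed]:
            active.remove(r)
            finished.append(r.rid)
    return steps, finished


# Three requests needing 1, 3, and 2 tokens, batch size 2:
# → (3, [0, 1, 2]); static batching of the same requests takes 5 steps.
print(continuous_batching(
    [Request(0, 1), Request(1, 3), Request(2, 2)], max_batch=2))
```

The point of the sketch is the slot-refill inside the step loop: throughput is bounded by total tokens, not by the longest sequence in each batch.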

Final offer amounts are determined by multiple factors, including experience and expertise.

Equity: In addition to the base salary, equity may be part of the total compensation package.
