Inference Systems Engineer

Noxx

Remote

BRL 80,000 - 120,000

Full-time

Posted yesterday

Job summary

A leading technology company is seeking an Inference Systems Engineer to enhance the serving runtime for production LLM inference. This deeply technical role involves optimizing system performance, collaborating with platform teams, and driving improvements in performance stability and efficiency. Candidates should have over 5 years of experience in building high-performance systems, particularly in model serving or low-latency environments. Excellent communication and engineering hygiene are essential for success in this role.

Qualifications

  • 5+ years building high-performance systems, preferably involving model serving, GPU systems, or low-latency distributed systems.
  • Strong understanding of LLM inference and memory behavior.
  • Experience with production profiling and debugging.

Responsibilities

  • Own the end-to-end serving runtime behavior and optimization.
  • Collaborate with teams to enhance system performance and stability.
  • Establish performance measurement disciplines and ensure production readiness.

Skills

  • Building high-performance systems
  • Understanding of LLM inference tradeoffs
  • Comfort across Python/C++ stacks
  • Shipping performance improvements
  • Clear communication

Job description

Inference Systems Engineer

Remote

Infrastructure / Serving Systems

$5,651 - $6,469/month (USD)

Role Overview

As an Inference Systems Engineer, you will own the serving runtime that powers production LLM inference. This is a deeply technical role focused on system performance and stability: optimizing request lifecycle behavior, streaming correctness, batching/scheduling strategy, cache and memory behavior, and runtime execution efficiency. You will ship changes that improve TTFT, p95/p99 latency, throughput, and cost efficiency while preserving correctness and reliability under multi-tenant load.
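
For a concrete sense of what measuring TTFT and tail latency means in practice, here is a minimal Python sketch that computes these statistics from per-request traces. The record layout (arrival, first_token, done, tokens) is assumed for illustration and does not describe the actual serving stack.

    # Minimal sketch, assuming each event records arrival, first-token, and
    # completion timestamps (seconds) plus the number of tokens generated.
    def summarize(events):
        def pct(xs, p):
            xs = sorted(xs)
            return xs[min(len(xs) - 1, int(p / 100 * len(xs)))]
        ttft = [e["first_token"] - e["arrival"] for e in events]
        e2e = [e["done"] - e["arrival"] for e in events]
        window = max(e["done"] for e in events) - min(e["arrival"] for e in events)
        return {
            "ttft_p50": pct(ttft, 50), "ttft_p95": pct(ttft, 95),
            "e2e_p95": pct(e2e, 95), "e2e_p99": pct(e2e, 99),
            "tokens_per_sec": sum(e["tokens"] for e in events) / window,
        }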

You will collaborate closely with platform/infrastructure operations, networking, and API/control-plane teams to ensure the serving system behaves predictably in production and can be debugged quickly when incidents occur. This role is for engineers who can reason about the entire inference pipeline, validate improvements with rigorous measurement, and operate with production‑grade discipline.

Responsibilities
  • Own the end‑to‑end serving runtime behavior: request lifecycle, streaming semantics, cancellation, retries interaction, timeouts, and consistent failure modes.
  • Design and implement batching and scheduling strategy: dynamic batching, admission control, fairness under mixed tenants, priority lanes, and backpressure mechanisms to prevent cascading failures (a minimal batching sketch follows this list).
  • Optimize performance at the systems level: reduce time‑to‑first‑token, improve tail latency stability, increase tokens/sec throughput, and improve accelerator utilization under realistic workloads.
  • Improve memory behavior and cache efficiency: KV‑cache policies, fragmentation control, eviction strategies, and safeguards against OOM cliffs and performance thrash (see the cache-eviction sketch below).
  • Drive runtime execution optimizations: operator‑level improvements, quantization integration, compilation/tuning paths where appropriate, and parameterization that produces stable performance across deployments.
  • Establish a performance measurement discipline: reproducible benchmarks, realistic traffic traces, profiling workflows, regression detection gates, and dashboards tied to production outcomes (see the regression-gate sketch below).
  • Build production readiness into the system: feature‑flagged rollouts, canarying, safe configuration changes, and incident playbooks that reduce MTTR.
  • Partner with networking and infrastructure operations to align deployment topology, failure domains, and capacity constraints to performance and reliability goals.
  • Collaborate with product and API teams to ensure the serving layer's guarantees are reflected accurately in external interfaces and customer expectations.
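
To make the batching and backpressure responsibility concrete, here is a hedged Python sketch of a dynamic batcher: a bounded queue provides admission control (requests are rejected when the queue is full rather than buffered without limit), and a batch closes when it reaches max_batch or when max_wait_s elapses. All names and policies here are illustrative assumptions, not the production design.

    import asyncio

    class DynamicBatcher:
        def __init__(self, run_batch, max_batch=8, max_wait_s=0.005, max_queue=256):
            self.run_batch = run_batch           # async: list[request] -> list[result]
            self.max_batch, self.max_wait_s = max_batch, max_wait_s
            self.queue = asyncio.Queue(maxsize=max_queue)

        async def submit(self, request):
            fut = asyncio.get_running_loop().create_future()
            try:
                self.queue.put_nowait((request, fut))   # full queue => shed load
            except asyncio.QueueFull:
                raise RuntimeError("overloaded; reject or retry upstream")
            return await fut

        async def run(self):                     # start once: asyncio.create_task(batcher.run())
            loop = asyncio.get_running_loop()
            while True:
                batch = [await self.queue.get()]
                deadline = loop.time() + self.max_wait_s
                while len(batch) < self.max_batch and loop.time() < deadline:
                    try:
                        batch.append(await asyncio.wait_for(
                            self.queue.get(), deadline - loop.time()))
                    except asyncio.TimeoutError:
                        break
                results = await self.run_batch([req for req, _ in batch])
                for (_, fut), out in zip(batch, results):
                    fut.set_result(out)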
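
Similarly, the cache-eviction sketch below shows only the shape of the memory-safety problem: a byte budget with least-recently-used eviction so the runtime degrades gracefully instead of hitting an OOM cliff. Real policies would also weigh prefix reuse and sequence priority; everything here is an illustrative assumption.

    from collections import OrderedDict

    class KVCache:
        def __init__(self, budget_bytes):
            self.budget, self.used = budget_bytes, 0
            self.blocks = OrderedDict()              # seq_id -> size_bytes, in LRU order

        def touch(self, seq_id):                     # mark a sequence as recently used
            if seq_id in self.blocks:
                self.blocks.move_to_end(seq_id)

        def insert(self, seq_id, size_bytes):
            while self.used + size_bytes > self.budget and self.blocks:
                _, freed = self.blocks.popitem(last=False)   # evict LRU entries first
                self.used -= freed
            if self.used + size_bytes > self.budget:
                raise MemoryError("single request exceeds total cache budget")
            self.blocks[seq_id] = size_bytes
            self.used += size_bytes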
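
Finally, the regression-gate sketch: a candidate build is blocked when any tracked metric regresses beyond its tolerance relative to a baseline benchmark run. The metric names match the summarize() sketch above; the tolerance values are made-up placeholders.

    # Positive tolerance: lower is better (latency). Negative tolerance:
    # higher is better (throughput), allowed to drop by at most that fraction.
    TOLERANCES = {"ttft_p95": 0.05, "e2e_p99": 0.05, "tokens_per_sec": -0.03}

    def gate(baseline, candidate):
        failures = []
        for metric, tol in TOLERANCES.items():
            base, cand = baseline[metric], candidate[metric]
            if tol >= 0 and cand > base * (1 + tol):
                failures.append(metric)
            elif tol < 0 and cand < base * (1 + tol):
                failures.append(metric)
        return failures                              # empty list means the gate passes
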
Requirements
  • 5+ years building high‑performance systems (model serving, GPU systems, performance engineering, or low‑latency distributed systems).
  • Strong understanding of LLM inference tradeoffs: batching vs latency, prefill vs decode dynamics, cache behavior, memory pressure, and tail latency causes.
  • Comfort working across Python/C++ stacks with production profiling and debugging tools.
  • Track record of shipping performance improvements that hold up under production variance and operational constraints.
  • Strong engineering hygiene: tests, instrumentation, documentation, and careful rollout discipline.
  • Ability to communicate clearly across teams and operate calmly during incidents.