Job Search and Career Advice Platform

Forward Deployed ML Engineer

Rockstar

Remote

EUR 60.000 - 80.000

Full-time

Posted yesterday

Job Description

A leading AI infrastructure company is seeking a Forward Deployed Machine Learning Engineer to work directly with customers on ML systems. This role emphasizes high execution and ownership, requiring hands-on deployment and adaptation of models in production environments. The ideal candidate will have 1-3 years of production ML experience and a strong foundation in machine learning engineering. This globally remote position offers significant learning opportunities and the chance to build impactful AI infrastructure.

Requirements

  • 1–3 years of production ML engineering experience.
  • Demonstrated ability to deploy models serving real users.
  • Strong fundamentals in ML engineering and coding.

Responsibilities

  • Deploy, fine-tune, and serve ML models in production.
  • Adapt platform for customer-specific workflows.
  • Act as voice of the customer to internal teams.

Skills

  • Production ML engineering experience
  • Model deployment
  • Data pipelines
  • Model training
  • Debugging complex systems
  • Clear communication

Full Job Description

Rockstar is recruiting for a forward-deployed machine learning engineer role at a leading AI infrastructure company. The client is building the AI backbone for the next generation of intelligent products, helping fast-growing AI startups design, fine-tune, evaluate, deploy, and maintain specialized models across text, vision, and embeddings. Think of it as a full-stack backend for training, RL, inference, evaluation, and long-term model maintenance. Their customers are Series A–C AI companies building enterprise-grade products, and their promise is simple: they make AI systems better.

The Role

(Remote, open globally)

The company is hiring a Forward Deployed Machine Learning Engineer (FD-MLE) to work directly with customers to deploy, adapt, and operate production ML systems on top of its platform.

This is a high-execution, high-ownership role. The engineer will be embedded in customer problems, shipping real models into real production environments—often under tight timelines and ambiguous requirements. If you enjoy being close to users, moving fast, and doing the unglamorous work required to make ML systems actually work, this role is for you.

Why This Role Matters

AI infrastructure often breaks down at the last mile—between a promising model and a reliable, scalable production system. As a Forward Deployed MLE, you are the connective tissue between the platform and customer success.

You’ll:
  • Turn cutting-edge ML workflows into production-ready systems
  • Unblock customers facing data, training, inference, or deployment challenges
  • Feed real-world learnings back into product and platform design

This role is ideal for early-career ML engineers who want maximum learning velocity, deep exposure to real systems, and accelerated responsibility.

What You’ll Do
Customer‑Facing Execution
  • Deploy, fine‑tune, and serve ML models in production environments (text, vision, embeddings, RL‑adjacent workflows).
  • Work hands‑on with customer data, model architectures, training loops, and inference stacks.
  • Debug performance issues across training, evaluation, latency, cost, and reliability.
  • Adapt the platform to customer‑specific workflows and constraints.
Systems & Infrastructure
  • Build and maintain model‑serving pipelines (batch and real‑time).
  • Optimize inference performance (throughput, latency, cost).
  • Help productionize evaluation, monitoring, and retraining workflows.
  • Work across cloud infrastructure, GPUs, and ML tooling stacks.
Feedback & Iteration
  • Act as the “voice of the customer” to internal product and engineering teams.
  • Identify recurring patterns, edge cases, and gaps in the platform.
  • Contribute to internal tooling, templates, and best practices.
Who You Are
Required
  • 1–3 years of production ML engineering experience
  • You have deployed models that serve real users in production
  • You’ve worked on training, inference, or ML systems end‑to‑end
  • Strong fundamentals in ML engineering: data pipelines, model training, evaluation, and serving
  • Comfortable writing production‑quality code and debugging complex systems
  • Extremely diligent and hardworking: this is an execution‑heavy role where effort and follow‑through matter, and you’re comfortable putting in the hours when needed to get things working
  • Clear communicator who can work directly with customers and internal teams
Nice to Have
  • Experience with LLMs, fine‑tuning, embeddings, or RL‑style workflows.
  • Exposure to GPU workloads, distributed training, or high‑throughput inference.
  • Background in infra‑heavy environments (ML platforms, data systems, dev tools).
  • Interest in customer‑facing or forward‑deployed roles.
Work Environment
  • Globally remote
  • High trust, high autonomy
  • Fast‑moving, early‑stage company with direct access to founders
  • Outcomes > process
Why Join
  • Work on real ML systems—not demos or research projects
  • Rapid skill growth through exposure to diverse customer problems
  • Ownership and responsibility early in your career
  • Build infrastructure that powers the next generation of AI products