
Machine Learning Engineer (Distributed Training)

CloudWalk

Brazil

Remote

BRL 369,000 - 529,000

Full-time

Today

Job summary

A leading tech firm in Brazil is seeking a Research Engineer to advance its distributed training stack for large language models. The engineer will design and maintain the training pipeline, optimize performance, and collaborate with cross-functional teams. The role demands a strong background in PyTorch and distributed training. Competitive salary and equity are offered.

Benefits

Competitive salary
Equity
Opportunity to shape AI infrastructure

Qualifications

  • Strong background in PyTorch and distributed training frameworks.
  • Hands-on experience with large-scale multi-GPU or multi-node training.
  • Familiarity with mixed-precision techniques.
  • Understanding of GPUs and scheduling systems.

Responsibilities

  • Design and maintain the distributed LLM training pipeline.
  • Orchestrate multi-node runs across internal clusters.
  • Optimize performance and cost for training workloads.
  • Integrate cutting-edge frameworks into workflows.
  • Build internal tools for research-to-production transitions.

Skills

PyTorch
Distributed training
Multi-GPU training
Transformers
Kubernetes

Tools

DeepSpeed
FSDP
Ray
MLflow
W&B

Job description

About CloudWalk:

CloudWalk is building the intelligent infrastructure for the future of financial services. Powered by AI, blockchain, and thoughtful design, our systems serve millions of entrepreneurs across Brazil and the US every day.

Our AI team trains large-scale language models that power real products - from payment intelligence and credit scoring to on-device assistants for merchants.

About the Role

We’re looking for a Research Engineer to design, scale, and evolve CloudWalk’s distributed training stack for large language models. You’ll work at the intersection of research and infrastructure - running experiments across DeepSpeed, FSDP, Hugging Face Accelerate, and emerging frameworks like Unsloth, TorchTitan, and Axolotl.

You’ll own the full training lifecycle: from cluster orchestration and data streaming to throughput optimization and checkpointing at scale. If you enjoy pushing the limits of GPUs, distributed systems, and next-generation training frameworks, this role is for you.
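
To give a concrete flavor of the stack described above, here is a minimal sketch of a single multi-GPU FSDP run in PyTorch, launched with torchrun. The model, data, and hyperparameters are illustrative placeholders, not CloudWalk's actual pipeline.

  # Launch with: torchrun --nproc_per_node=8 train_fsdp.py
  import os
  import torch
  import torch.distributed as dist
  from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

  def main():
      dist.init_process_group("nccl")             # one process per GPU, set up by torchrun
      local_rank = int(os.environ["LOCAL_RANK"])
      torch.cuda.set_device(local_rank)

      # Placeholder model; a real run would wrap a transformer LM.
      model = torch.nn.Sequential(
          torch.nn.Linear(1024, 4096),
          torch.nn.GELU(),
          torch.nn.Linear(4096, 1024),
      ).cuda()

      # Shard parameters across ranks and train in bf16 mixed precision.
      model = FSDP(
          model,
          mixed_precision=MixedPrecision(param_dtype=torch.bfloat16,
                                         reduce_dtype=torch.bfloat16),
      )
      optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

      for step in range(10):
          batch = torch.randn(8, 1024, device="cuda")  # stand-in for a data loader
          loss = model(batch).pow(2).mean()            # dummy objective
          loss.backward()
          optimizer.step()
          optimizer.zero_grad()
          if dist.get_rank() == 0:
              print(f"step {step}: loss {loss.item():.4f}")

      dist.destroy_process_group()

  if __name__ == "__main__":
      main()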

Responsibilities
  • Design, implement, and maintain CloudWalk’s distributed LLM training pipeline.
  • Orchestrate multi-node, multi-GPU runs across Kubernetes and internal clusters.
  • Optimize performance, memory, and cost across large training workloads.
  • Integrate cutting-edge frameworks (Unsloth, TorchTitan, Axolotl) into production workflows.
  • Build internal tools and templates that accelerate research-to-production transitions.
  • Collaborate with infra, research, and MLOps teams to ensure reliability and reproducibility.

Requirements
  • Strong background in PyTorch and distributed training (DeepSpeed, FSDP, Accelerate).
  • Hands‑on experience with large‑scale multi‑GPU or multi‑node training.
  • Familiarity with Transformers, Datasets, and mixed‑precision techniques.
  • Understanding of GPUs, containers, and schedulers (Kubernetes, Slurm).
  • Mindset for reliability, performance, and clean engineering.

Bonus
  • Experience with Ray, MLflow, or W&B.
  • Knowledge of ZeRO, model parallelism, or pipeline parallelism (see the configuration sketch after this list).
  • Curiosity for emerging open‑source stacks like Unsloth, TorchTitan, and Axolotl.
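
As a hedged illustration of the ZeRO staging mentioned above, the fragment below shows a DeepSpeed configuration expressed as a Python dict; the values are example assumptions, not CloudWalk's actual settings.

  # Illustrative DeepSpeed ZeRO stage 3 config (example values only).
  ds_config = {
      "train_micro_batch_size_per_gpu": 4,
      "gradient_accumulation_steps": 8,
      "bf16": {"enabled": True},
      "zero_optimization": {
          "stage": 3,                  # shard parameters, gradients, and optimizer state
          "overlap_comm": True,        # overlap communication with backward compute
          "contiguous_gradients": True,
      },
  }
  # Typically handed to deepspeed.initialize(model=model, config=ds_config, ...)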

Process

Our process is simple: a deep conversation on distributed systems and LLM training, and a cultural interview.

Benefits

Competitive salary, equity, and the opportunity to shape the next generation of large-scale AI infrastructure at CloudWalk.
