
A financial technology company in Brazil seeks a Research Engineer to design and evolve its distributed training stack for large language models. The role includes optimizing performance on multi-GPU systems and integrating cutting-edge frameworks into production. Ideal candidates will have a strong background in PyTorch and distributed training techniques. The position offers a competitive salary and equity in a leading AI infrastructure firm.
About CloudWalk:
CloudWalk is building the intelligent infrastructure for the future of financial services. Powered by AI, blockchain, and thoughtful design, our systems serve millions of entrepreneurs across Brazil and the US every day.
Our AI team trains large-scale language models that power real products - from payment intelligence and credit scoring to on-device assistants for merchants.
We’re looking for a Research Engineer to design, scale, and evolve CloudWalk’s distributed training stack for large language models. You’ll work at the intersection of research and infrastructure - running experiments across DeepSpeed, FSDP, Hugging Face Accelerate, and emerging frameworks like Unsloth, TorchTitan, and Axolotl.
You’ll own the full training lifecycle: from cluster orchestration and data streaming to throughput optimization and checkpointing at scale. If you enjoy pushing the limits of GPUs, distributed systems, and next-generation training frameworks, this role is for you.
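To give a flavor of the day-to-day stack (purely illustrative, not part of the role description): the frameworks named above share the pattern sketched below with Hugging Face Accelerate, where the same training loop runs under DDP, FSDP, or DeepSpeed depending on the launch configuration. The toy model, synthetic data, and checkpoint path are placeholders standing in for an LLM and a streamed dataset.

```python
# Minimal sketch of the Accelerate training-loop pattern; the toy model and
# random data are placeholders, not CloudWalk's actual training code.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # backend (DDP, FSDP, DeepSpeed) is chosen via `accelerate config`

model = nn.Linear(128, 1)                      # placeholder for a causal LM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data = TensorDataset(torch.randn(1024, 128), torch.randn(1024, 1))
loader = DataLoader(data, batch_size=32, shuffle=True)

# prepare() wraps model, optimizer, and dataloader for the selected backend,
# so the loop below is identical on one GPU or many.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for inputs, targets in loader:
    loss = nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)   # replaces loss.backward(); handles scaling/sharding
    optimizer.step()
    optimizer.zero_grad()

accelerator.wait_for_everyone()
accelerator.save_state("checkpoints/demo")  # distributed checkpoint save across ranks
```

In practice the work in this role sits around and beneath this loop: cluster orchestration, streaming data into it, squeezing out throughput, and making checkpointing robust at scale.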
Our process is simple: a deep conversation on distributed systems and LLM training, and a cultural interview.
Competitive salary, equity, and the opportunity to shape the next generation of large-scale AI infrastructure at CloudWalk.