Senior Software Engineer

RAKUTEN ASIA PTE. LTD.

Singapore

On-site

SGD 80,000 - 120,000

Full time


Job summary

A leading technology company based in Singapore is seeking a GPU Training & Inference Optimization Engineer to enhance AI model efficiency on GPU clusters. The ideal candidate will have over 3 years of experience in GPU-accelerated machine learning, specializing in optimizing LLM workloads across various frameworks. This position offers tremendous opportunities to impact cutting-edge AI technologies and work collaboratively with global teams.

Benefits

Work on cutting-edge technology
Collaboration with global teams
Opportunity for impactful research

Qualifications

  • 3+ years of experience in GPU-accelerated ML training & inference optimization.
  • Expertise in PyTorch, DeepSpeed, or related frameworks.
  • Strong knowledge of LLM inference optimizations.

Responsibilities

  • Optimize LLM training frameworks to maximize GPU utilization.
  • Profile and optimize distributed training bottlenecks.
  • Implement inference optimizations for low-latency serving.

Skills

GPU-accelerated ML frameworks
Distributed training optimization
Inference optimization

Education

Bachelor's degree in Computer Science or Engineering

Tools

PyTorch
DeepSpeed
CUDA
Triton

Job description

Situated in the heart of Singapore's Central Business District, Rakuten Asia Pte. Ltd. is Rakuten's Asia Regional headquarters. Established in August 2012 as part of Rakuten's global expansion strategy, Rakuten Asia comprises various businesses that provide essential value-added services to Rakuten's global ecosystem. Through advertisement product development, product strategy, and data management, among others, Rakuten Asia is strengthening Rakuten Group's core competencies to take the lead in an increasingly digitalized world.

AI & Data Division (AIDD) spearheads data science & AI initiatives by leveraging data from across Rakuten Group. We build a platform for large-scale field experimentation using cutting-edge technologies to provide critical insights that enable faster and better contributions to our business. Our division boasts an international culture created by talented employees from around the world. Following the strategic vision "Rakuten as a data-driven membership company", AIDD is expanding its data & AI related activities across multiple Rakuten Group companies.

As a GPU Training & Inference Optimization Engineer, you will focus on maximizing the performance, efficiency, and scalability of LLM training and inference workloads on Rakuten’s GPU clusters. You will deeply optimize training frameworks (e.g., PyTorch, DeepSpeed, FSDP) and inference engines (e.g., vLLM, TensorRT-LLM, Triton, SGLang), ensuring Rakuten’s AI models run at peak efficiency. This role requires strong expertise in GPU-accelerated ML frameworks, distributed training, and inference optimization, with a focus on reducing training time, improving GPU utilization, and minimizing inference latency.

Key Responsibilities
  • Optimize LLM training frameworks (e.g., PyTorch, DeepSpeed, Megatron-LM, FSDP) to maximize GPU utilization and reduce training time.
  • Profile and optimize distributed training bottlenecks (e.g., NCCL issues, CUDA kernel efficiency, communication overhead).
  • Implement and tune inference optimizations (e.g., quantization, dynamic batching, KV caching) for low-latency, high-throughput LLM serving (vLLM, TensorRT-LLM, Triton, SGLang).
  • Collaborate with infrastructure teams to improve GPU cluster scheduling, resource allocation, and fault tolerance for large-scale training jobs.
  • Develop benchmarking tools to measure and improve training throughput, memory efficiency, and inference latency.
  • Research and apply cutting-edge techniques (e.g., mixture-of-experts, speculative decoding) to optimize LLM performance.
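To illustrate one of the inference optimizations named above, here is a minimal, hypothetical sketch of why KV caching cuts per-token work in autoregressive decoding. Real serving engines such as vLLM or TensorRT-LLM cache per-layer key/value tensors on the GPU; this toy version simply counts attention operations to show the asymptotic difference:

```python
# Toy sketch (illustrative only, not production code): compare the amount of
# attention work done per generated token with and without a KV cache.

def decode_without_cache(prompt_len: int, new_tokens: int) -> int:
    """Recompute full self-attention over the whole sequence at every
    decode step: roughly O(n^2) work per token."""
    ops = 0
    seq_len = prompt_len
    for _ in range(new_tokens):
        seq_len += 1
        ops += seq_len * seq_len  # every token attends to every token
    return ops

def decode_with_kv_cache(prompt_len: int, new_tokens: int) -> int:
    """Reuse cached keys/values: after a one-time prefill, each step only
    computes attention for the newly generated token (O(n) per token)."""
    ops = prompt_len * prompt_len  # one-time prefill over the prompt
    seq_len = prompt_len
    for _ in range(new_tokens):
        seq_len += 1
        ops += seq_len  # only the new token's query attends to the cache
    return ops

if __name__ == "__main__":
    cached = decode_with_kv_cache(128, 64)
    uncached = decode_without_cache(128, 64)
    print(f"with cache: {cached} ops, without: {uncached} ops")
```

The same idea underlies continuous batching and PagedAttention, which manage this cache efficiently across many concurrent requests.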
Mandatory Qualifications
  • 3+ years of hands-on experience in GPU-accelerated ML training & inference optimization, preferably for LLMs or large-scale deep learning models.
  • Deep expertise in PyTorch, DeepSpeed, FSDP, or Megatron-LM, with experience in distributed training optimizations.
  • Strong knowledge of LLM inference optimizations (e.g., quantization, pruning, KV caching, continuous batching).
  • Bachelor’s or higher degree in Computer Science, Engineering, or related field.
Nice-to-Have Skills
  • Proficiency in CUDA, Triton kernel development, NVIDIA tools (Nsight, NCCL), and performance profiling (e.g., PyTorch Profiler, TensorBoard).
  • Experience with LLM-specific optimizations (e.g., FlashAttention, PagedAttention, LoRA, speculative decoding).
  • Familiarity with Kubernetes (K8s) for GPU workloads (e.g., KubeFlow, Volcano).
  • Contributions to open-source ML frameworks (e.g., PyTorch, DeepSpeed, vLLM).
  • Experience with inference serving frameworks (e.g., vLLM, TensorRT-LLM, Triton, Hugging Face TGI).
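One of the responsibilities listed above is developing benchmarking tools to measure training throughput and inference latency. A minimal, framework-agnostic sketch of such a harness might look like the following (the `step_fn` and `tokens_per_step` parameters are placeholders for a real training or decoding step):

```python
# Hypothetical benchmarking sketch: time a workload step function and
# report throughput in tokens/sec. Warmup iterations are discarded so
# one-time costs (JIT compilation, allocator warmup) don't skew results.
import time

def measure_throughput(step_fn, tokens_per_step: int,
                       warmup: int = 2, iters: int = 10) -> float:
    """Run `step_fn` repeatedly and return tokens processed per second."""
    for _ in range(warmup):
        step_fn()  # warmup passes, excluded from timing
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    elapsed = time.perf_counter() - start
    return tokens_per_step * iters / elapsed

if __name__ == "__main__":
    # Stand-in workload: a 1 ms sleep in place of a real forward pass.
    tps = measure_throughput(lambda: time.sleep(0.001), tokens_per_step=2048)
    print(f"{tps:.0f} tokens/sec")
```

In practice a GPU benchmark would also need to synchronize the device (e.g., `torch.cuda.synchronize()`) before reading the clock, since kernel launches are asynchronous.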
Why Join Us?
  • Work on cutting-edge LLM training & inference optimization at scale.
  • Directly impact Rakuten’s AI infrastructure by improving efficiency and reducing costs.
  • Collaborate with global AI/ML teams on high-impact challenges.
  • Opportunity to research and implement state-of-the-art GPU optimizations.

Rakuten is an equal opportunities employer and welcomes applications regardless of sex, marital status, ethnic origin, sexual orientation, religious belief, or age.
