
Senior AI Research Engineer, Model Inference (Remote)

Tether.io

Spain

Remote

EUR 50,000 - 70,000

Full-time

Today

Job description

A leading technology firm is seeking a Senior AI Research Engineer focused on model inference. This role involves implementing custom inference kernels and optimizing models for performance across various hardware, particularly in mobile and embedded environments. The ideal candidate will have extensive experience with C++, GPU programming, and model optimization and quantization techniques. Join us to push the boundaries of language model performance in mobile applications.

Qualifications

  • Hands-on experience with quantization techniques, LoRA architectures, and Vulkan backend.
  • Strong background in GPU debugging and optimization.
  • Experience with large language model architectures such as Qwen, Gemma, LLaMA.

Responsibilities

  • Implement and optimize custom inference and fine-tuning kernels for language models across hardware.
  • Design and extend datatype and precision support for model optimization.
  • Architect advanced quantization techniques to improve efficiency and memory usage.

Skills

  • C++
  • GPU kernel programming
  • GPU acceleration with Vulkan
  • Quantization
  • Mixed-precision model optimization
  • Mobile GPU acceleration
  • LoRA fine-tuning
  • Custom dataset creation

Job description


About the job

We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine-tuning, and GPU acceleration. The engineer will extend our inference framework to support inference and fine-tuning for language models, with a strong focus on mobile and integrated GPU acceleration (Vulkan).

This role requires hands-on experience with quantization techniques, LoRA architectures, the Vulkan backend, and mobile GPU debugging. You will play a critical role in pushing the boundaries of desktop and on-device inference and fine-tuning performance for next-generation SLMs and LLMs.

Responsibilities
  • Implement and optimize custom inference and fine-tuning kernels for small and large language models across multiple hardware backends.
  • Implement and optimize full and LoRA fine-tuning for small and large language models across multiple hardware backends.
  • Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).
  • Design, customize, and optimize Vulkan compute shaders for quantized operators and fine-tuning workflows.
  • Investigate and resolve GPU acceleration issues on Vulkan and integrated / mobile GPUs.
  • Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.
  • Debug and optimize GPU operators (e.g., int8, fp16, fp4, ternary).
  • Integrate and validate quantization workflows for training and inference.
  • Conduct evaluation and benchmarking (e.g., perplexity testing, fine-tuned adapter performance).
  • Conduct GPU testing across desktop and mobile devices.
  • Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.
  • Deliver production-grade, efficient language model deployment for mobile and edge use cases.
  • Work closely with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications.
  • Define clear success metrics, such as improved real-world performance, low error rates, robust scalability, and memory efficiency, with continuous monitoring and iterative refinement.

Requirements
  • Proficiency in C++ and GPU kernel programming.
  • Proven expertise in GPU acceleration with Vulkan framework.
  • Strong background in quantization and mixed-precision model optimization.
  • Experience and expertise in Vulkan compute shader development and customization.
  • Familiarity with LoRA fine-tuning and parameter-efficient training methods.
  • Ability to debug GPU-specific performance and stability issues on desktop and mobile devices.
  • Hands-on experience with mobile GPU acceleration and model inference.
  • Familiarity with large language model architectures (e.g., Qwen, Gemma, LLaMA, Falcon, etc.).
  • Experience implementing custom backward operators for fine-tuning.
  • Experience creating and curating custom datasets for style transfer and domain-specific fine-tuning.
  • Demonstrated ability to apply empirical research to overcome challenges in model optimization.
Important information for candidates
  • Apply only through official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page.
  • Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, confirm their identity via their profile or our official website.
  • Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is through official company emails and platforms.
  • Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io.
  • We will never request payment or financial details. If someone asks for personal financial information or payment during the hiring process, it is a scam. Please report it immediately.

Seniority level: Not Applicable

Employment type: Full-time

Job function: Information Technology

Industries: Technology, Information and Internet
