
AI Research Engineer (Model Serving & Inference)

Tether Operations Limited

São Paulo

Remote

BRL 120,000 - 180,000

Full-time

Posted 4 days ago

Job summary

Tether Operations Limited is seeking a member of its AI model team to drive innovation in model serving and inference architectures. The role involves designing efficient architectures for AI systems, optimizing performance across diverse environments, and developing robust inference pipelines. Candidates should have a strong background in Machine Learning, with an emphasis on practical experience optimizing inference on mobile devices and edge platforms.

Qualifications

  • Proven experience in kernel optimizations and inference optimization on mobile devices.
  • Strong understanding of model serving architectures.
  • Experience in developing and deploying end-to-end inference pipelines.

Responsibilities

  • Design and deploy model serving architectures with high throughput.
  • Establish clear performance targets and monitor key metrics.
  • Optimize serving infrastructure for scalability on resource-constrained systems.

Skills

Machine Learning
AI R&D
Inference Optimization
CPU and GPU Kernels

Education

PhD in NLP or Machine Learning
Degree in Computer Science or related field

Job description

Join Tether and Shape the Future of Digital Finance

At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.

Innovate with Tether

Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.

But that’s just the beginning:

Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.

Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.

Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.

Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.

Why Join Us?

Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry.

If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you.

Are you ready to be part of the future?

About the job:

As a member of our AI model team, you will drive innovation in model serving and inference architectures for advanced AI systems. Your work will focus on optimizing model deployment and inference strategies to deliver highly responsive, efficient, and scalable performance across real-world applications. You will work on a wide spectrum of systems, ranging from resource-efficient models designed for limited hardware environments to complex, multi-modal architectures that integrate data such as text, images, and audio.

We expect you to have deep expertise in designing and optimizing model serving pipelines and inference frameworks, as well as a strong background in advanced model architectures. You will adopt a hands-on, research-driven approach to develop, test, and implement novel serving strategies and inference algorithms. Your responsibilities include engineering robust inference pipelines, establishing comprehensive performance metrics, and identifying and resolving bottlenecks in production environments. The ultimate goal is to enable high-throughput, low-latency, memory-efficient, and scalable AI performance that delivers tangible value in dynamic, real-world scenarios.

Responsibilities:

  • Design and deploy state-of-the-art model serving architectures that deliver high throughput and low latency while optimizing memory usage. Ensure these pipelines run efficiently across diverse environments, including resource-constrained devices and edge platforms.

  • Establish clear performance targets, such as reduced latency, improved token response times, and a minimized memory footprint.

  • Build, run, and monitor controlled inference tests in both simulated and live production environments. Track key performance indicators such as response latency, throughput, memory consumption, and error rates, with special attention to metrics specific to resource-constrained devices. Document iterative results and compare outcomes against established benchmarks to validate performance across platforms (a minimal measurement sketch follows this list).

  • Identify and prepare high-quality test datasets and simulation scenarios tailored to real-world deployment challenges, specifically those encountered on low-resource devices. Set measurable criteria to ensure that these resources effectively evaluate model performance, latency, and memory utilization under various operational conditions.

  • Analyze computational efficiency and diagnose bottlenecks in the serving pipeline by monitoring both processing and memory metrics. Address issues such as suboptimal batch processing, network delays, and high memory usage to optimize the serving infrastructure for scalability and reliability on resource-constrained systems (see the micro-batching sketch after this list).

  • Work closely with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications. Define clear success metrics, such as improved real-world performance, low error rates, robust scalability, and optimal memory usage, and ensure continuous monitoring and iterative refinement for sustained improvements.
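
The measurement bullet above is concrete enough to sketch. Below is a minimal, hedged Python example of the kind of harness it describes: it measures per-request latency percentiles, overall throughput, and peak memory. The run_inference function is a hypothetical stand-in for the real serving call, and tracemalloc only sees Python-side allocations, so native model memory would need platform-specific tooling.

    import statistics
    import time
    import tracemalloc

    def run_inference(prompt: str) -> str:
        # Hypothetical placeholder for the actual model/serving call.
        time.sleep(0.01)
        return "output"

    def benchmark(prompts, warmup=5):
        for p in prompts[:warmup]:
            run_inference(p)  # warm caches/JITs so steady-state numbers aren't skewed

        tracemalloc.start()
        latencies = []
        start = time.perf_counter()
        for p in prompts:
            t0 = time.perf_counter()
            run_inference(p)
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()  # returns (current, peak) in bytes
        tracemalloc.stop()

        latencies.sort()
        return {
            "throughput_rps": len(prompts) / elapsed,
            "p50_ms": statistics.median(latencies) * 1e3,
            "p99_ms": latencies[int(0.99 * (len(latencies) - 1))] * 1e3,
            "peak_py_mem_kb": peak / 1024,
        }

    if __name__ == "__main__":
        print(benchmark(["hello"] * 100))

On-device runs would swap run_inference for the real pipeline and report the same fields, keeping results comparable against established benchmarks.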
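
One common fix for the "suboptimal batch processing" bottleneck named above is dynamic micro-batching: incoming requests are queued and flushed to the model either when a batch fills or when a short deadline expires. A minimal sketch, assuming a hypothetical batched model call (model_batch_infer) and stdlib queues:

    import queue
    import threading
    import time

    MAX_BATCH = 8
    MAX_WAIT_S = 0.005  # flush a partial batch after 5 ms

    def model_batch_infer(inputs):
        # Hypothetical batched model call; batching amortizes per-call overhead.
        return [f"out:{x}" for x in inputs]

    request_q = queue.Queue()  # items: (input, reply_queue)

    def batcher():
        while True:
            batch = [request_q.get()]  # block until the first request arrives
            deadline = time.monotonic() + MAX_WAIT_S
            while len(batch) < MAX_BATCH:
                timeout = deadline - time.monotonic()
                if timeout <= 0:
                    break
                try:
                    batch.append(request_q.get(timeout=timeout))
                except queue.Empty:
                    break
            outputs = model_batch_infer([x for x, _ in batch])
            for (_, reply_q), out in zip(batch, outputs):
                reply_q.put(out)

    threading.Thread(target=batcher, daemon=True).start()

    def infer(x):
        reply_q = queue.Queue(maxsize=1)
        request_q.put((x, reply_q))
        return reply_q.get()

    if __name__ == "__main__":
        print([infer(str(i)) for i in range(3)])

The batch size and deadline trade latency against throughput; on resource-constrained devices the memory ceiling usually caps MAX_BATCH well before throughput saturates.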


Requirements:

  • A degree in Computer Science or a related field. Ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A* conferences).

  • Proven experience in low-level kernel optimizations and inference optimization on mobile devices is essential. Your contributions should have led to measurable improvements in inference latency, throughput, and memory footprint for domain-specific applications, particularly on resource-constrained devices and edge platforms.

  • A deep understanding of modern model serving architectures and inference optimization techniques is required. This includes state-of-the-art methods for achieving low-latency, high-throughput performance, and efficient memory management in diverse, resource-constrained deployment scenarios.

  • Must have strong expertise in writing CPU and GPU kernels for mobile devices (e.g., smartphones), as well as a deep understanding of model serving frameworks and engines. Practical experience in developing and deploying end-to-end inference pipelines, from optimizing models for efficient serving to integrating these solutions on resource-constrained devices, is required.

  • Demonstrated ability to apply empirical research to overcome challenges in model serving, such as latency optimization, computational bottlenecks, and memory constraints. You should be proficient in designing robust evaluation frameworks and iterating on optimization strategies to continuously push the boundaries of inference performance and system efficiency (a small regression-gate sketch follows below).
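
To make the "robust evaluation frameworks" point concrete, here is a small, hedged sketch of a regression gate: measured metrics are compared against stored baseline budgets, and a change fails the gate if it regresses past a tolerance. Metric names and thresholds here are illustrative, not from the posting.

    BASELINE = {"p99_ms": 120.0, "throughput_rps": 90.0, "peak_mem_mb": 512.0}
    TOLERANCE = 0.05  # allow a 5% regression before failing

    def check_regressions(measured):
        failures = []
        # Latency and memory must not grow past budget * (1 + tolerance).
        for key in ("p99_ms", "peak_mem_mb"):
            if measured[key] > BASELINE[key] * (1 + TOLERANCE):
                failures.append(f"{key}: {measured[key]:.1f} over budget {BASELINE[key]:.1f}")
        # Throughput must not drop below budget * (1 - tolerance).
        if measured["throughput_rps"] < BASELINE["throughput_rps"] * (1 - TOLERANCE):
            failures.append("throughput_rps below budget")
        return failures

    if __name__ == "__main__":
        print(check_regressions(
            {"p99_ms": 130.0, "throughput_rps": 95.0, "peak_mem_mb": 500.0}))

Run as part of CI or a pre-deploy check, this turns the documented benchmark comparisons into an automatic pass/fail signal.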

