A leading company in digital finance is seeking an AI Research Engineer to innovate in model serving and inference architectures. The role offers the opportunity to work on cutting-edge AI projects remotely, focusing on optimizing responsiveness, efficiency, and scalability for advanced AI systems.
AI Research Engineer (Model Serving & Inference - 100% Remote Spain)
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, at a fraction of the cost. Transparency is the foundation of trust in every transaction.
Innovate with Tether
What we do:
Tether Finance: Our product suite features the trusted stablecoin USDT, used worldwide, and digital asset tokenization services.
Tether Power: Sustainable energy solutions for Bitcoin mining using eco-friendly practices.
Tether Data: AI and P2P technology solutions like KEET for secure data sharing.
Tether Education: Digital learning platforms for global access.
Tether Evolution: Merging technology and human potential for innovative futures.
Why Join Us?
Our global, remote team is passionate about fintech innovation. Collaborate with top talent, push boundaries, and set industry standards. If you excel in English and want to contribute to cutting-edge platforms, Tether is your place.
About The Job
As a member of the AI model team, you will innovate in model serving and inference architectures for advanced AI systems. You will focus on optimizing deployment and inference for responsiveness, efficiency, and scalability across diverse applications, from resource-limited devices to complex multi-modal systems.
Your expertise should include designing and optimizing model serving pipelines, developing novel serving strategies, and resolving bottlenecks in production to achieve high throughput, low latency, and minimal memory usage.
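To make the serving focus concrete, the sketch below shows a simple dynamic-batching inference loop, the basic trade-off behind throughput and latency tuning in a serving pipeline. It is an illustrative example only; run_model, MAX_BATCH, and MAX_WAIT_MS are hypothetical placeholders and do not describe Tether's actual stack.

```python
# Illustrative sketch of a dynamic-batching serving loop.
# run_model stands in for a real inference backend; the constants
# below are assumptions chosen only to show the throughput/latency trade-off.
import queue
import threading
import time
from dataclasses import dataclass, field


@dataclass
class Request:
    payload: str
    done: threading.Event = field(default_factory=threading.Event)
    result: str | None = None


def run_model(payloads: list[str]) -> list[str]:
    # Placeholder for a batched forward pass; a real server would call
    # an inference runtime here.
    return [f"echo:{p}" for p in payloads]


MAX_BATCH = 8      # cap batch size to bound latency and memory
MAX_WAIT_MS = 10   # how long to wait for more requests before flushing

requests: "queue.Queue[Request]" = queue.Queue()


def serving_loop() -> None:
    """Collect requests into batches: larger batches raise throughput,
    while the wait cap keeps tail latency low."""
    while True:
        batch = [requests.get()]  # block until at least one request arrives
        deadline = time.monotonic() + MAX_WAIT_MS / 1000
        while len(batch) < MAX_BATCH:
            timeout = deadline - time.monotonic()
            if timeout <= 0:
                break
            try:
                batch.append(requests.get(timeout=timeout))
            except queue.Empty:
                break
        outputs = run_model([r.payload for r in batch])
        for req, out in zip(batch, outputs):
            req.result = out
            req.done.set()  # wake the waiting caller


if __name__ == "__main__":
    threading.Thread(target=serving_loop, daemon=True).start()
    reqs = [Request(payload=f"prompt-{i}") for i in range(20)]
    for r in reqs:
        requests.put(r)
    for r in reqs:
        r.done.wait()
    print([r.result for r in reqs[:3]])
```

In this toy setup, raising MAX_BATCH improves throughput at the cost of per-request wait time, while lowering MAX_WAIT_MS does the opposite; production serving work tunes exactly these kinds of knobs alongside memory use.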
Responsibilities: