Senior AI Inference Engineer (llama.cpp specialist) - 100% Remote
Tether Operations Limited
Remote
PLN 120 000 - 180 000
Full-time
5 days ago
Job summary
A technology company in Warsaw is seeking a C++ Engineer to deploy machine learning models to edge devices and enhance inference engines. The role involves collaborating with researchers to transition models from research to production and integrating AI features into existing products. Candidates should have strong C++ programming skills, experience with inference engines such as llama.cpp, and a degree in a related field.
Qualifications
Excellent programming skills in C++; experience in JavaScript is a bonus.
Strong experience with the llama.cpp and ggml inference engines.
Good understanding of deep learning concepts and model architectures.
Experience with LLMs.
Demonstrated ability to assimilate new technologies rapidly.
Responsibilities
Deploy machine learning models to edge devices using llama.cpp, ggml, and ONNX.
Collaborate with researchers on coding, training, and transitioning models to production.
Integrate AI features into existing products with the latest advancements.
Skills
C++ programming
JavaScript
Deep learning concepts
Model architectures
Inference engines (llama.cpp, ggml)
Education
Degree in Computer Science, AI, Machine Learning, or related field
Job description
You’ll work on the C++ layer that powers local AI, porting and enhancing inference engines such as llama.cpp and ONNX to run efficiently on edge devices. Your focus is on the runtime: making models load faster, run leaner, and perform well across different hardware. You’ll ensure that the inference layer is stable, optimized, and ready for integration with the rest of the stack.
This role is for engineers who want to work close to the metal, enabling private and fast on-device AI without relying on cloud infrastructure.
Responsibilities
Work on deploying machine learning models to edge devices using frameworks such as llama.cpp, ggml, and ONNX
Collaborate closely with researchers to assist in coding, training, and transitioning models from research to production environments
Integrate AI features into existing products, enriching them with the latest advancements in machine learning
Qualifications
Excellent programming skills in C++; experience in JavaScript is a bonus
Strong experience with the llama.cpp and ggml inference engines, which facilitate deploying models to specific GPU architectures
Good understanding of deep learning concepts and model architectures
Experience with LLMs
Demonstrated ability to rapidly assimilate new technologies and techniques
A degree in Computer Science, AI, Machine Learning, or a related field, complemented by a solid track record in AI R&D
Important information for candidates
Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page: https://tether.recruitee.com/
Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io
We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately.
* The reference salary is based on the target salaries of market leaders in their respective sectors. It is intended as a guide to help Premium members evaluate open positions and support salary negotiations. The reference salary is not provided directly by the company and may be significantly higher or lower.