A pioneering financial firm in Dubai seeks an AI model evaluation specialist to develop and deploy evaluation frameworks across the AI lifecycle. The ideal candidate will have a PhD in a relevant field and proven experience in creating benchmarking pipelines. This role requires strong programming and communication skills, along with a collaborative spirit to work with cross-functional teams. Join us to be part of an industry-leading technology project and work remotely with a global team.
Join Tether and Shape the Future of Digital Finance
At Tether, we’re pioneering a global financial revolution with innovative solutions that enable seamless integration of reserve-backed tokens across blockchains. Our technology allows secure, instant, and cost-effective digital transactions worldwide, built on transparency and trust.
Innovate with Tether
Tether Finance: Offers the trusted USDT stablecoin and digital asset tokenization services.
Additional Initiatives:
Tether Power: Focuses on eco-friendly Bitcoin mining solutions.
Tether Data: Develops AI and data-sharing technologies like KEET.
Tether Education: Provides digital learning opportunities.
Tether Evolution: Merges technology and human potential for future innovations.
Why Join Us?
Work remotely with a global team, collaborate on cutting-edge fintech projects, and contribute to industry leadership. Excellent English communication skills are essential.
About the job:
As part of our AI model team, you will develop evaluation frameworks and benchmarks for AI models across various stages, from pre-training to inference. Your focus will be on designing metrics that ensure models are responsive, efficient, and reliable in real-world applications, working with diverse architectures including resource-efficient and multi-modal models.
You should have expertise in advanced AI architectures, evaluation practices, and benchmarking. Your role involves creating, testing, and implementing evaluation strategies that measure accuracy, latency, throughput, and memory use, providing actionable insights to improve model performance throughout its lifecycle.
Collaboration with cross-functional teams is key to sharing findings and integrating feedback. You will build evaluation pipelines and dashboards that support continuous improvement and industry-leading standards in AI model quality and reliability.
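For illustration only, the sketch below shows the kind of lightweight benchmarking harness this role describes, measuring per-batch latency, throughput, and peak memory during inference. The run_inference stub is a hypothetical stand-in for a real model call and is not part of Tether's actual tooling.

```python
# Illustrative sketch: measure inference latency, throughput, and peak memory.
# run_inference is a hypothetical placeholder for a real model forward pass.
import statistics
import time
import tracemalloc

def run_inference(batch):
    # Placeholder "model": returns a trivial result per input item.
    return [len(item) for item in batch]

def benchmark(batches, warmup=2):
    latencies = []
    total_items = 0
    for batch in batches[:warmup]:          # warm-up runs, not timed
        run_inference(batch)
    tracemalloc.start()                     # track Python-level allocations
    for batch in batches[warmup:]:
        start = time.perf_counter()
        run_inference(batch)
        latencies.append(time.perf_counter() - start)
        total_items += len(batch)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "p50_latency_s": statistics.median(latencies),
        "throughput_items_per_s": total_items / sum(latencies),
        "peak_memory_mb": peak_bytes / 1e6,
    }

if __name__ == "__main__":
    fake_batches = [["example input"] * 8 for _ in range(10)]
    print(benchmark(fake_batches))
```

A production pipeline would typically wrap metrics like these in automated dashboards and run them continuously across model versions; the sketch only shows the measurement core.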
Requirements:
A degree in Computer Science or a related field is required; a PhD in NLP, Machine Learning, or a similar area with a strong publication record is preferred. Proven experience designing evaluation frameworks, developing benchmarking pipelines, and working across the AI lifecycle is essential, as are excellent programming skills and the ability to communicate findings effectively.