
Staff Engineer - ML Platform

Delivery Hero SE

Dubai

On-site

AED 150,000 - 200,000

Full time

Job summary

A leading delivery platform in Dubai seeks an ML Platform Engineer to design and enhance the infrastructure for machine learning and generative AI initiatives. This role requires proficiency in Python and experience with cloud services and MLOps practices. The candidate will collaborate with data scientists and engineers to optimize ML workflows and ensure high operational standards.

Qualifications

  • Strong software engineering background with experience in building distributed systems.
  • Expert-level proficiency in Python and familiarity with ML frameworks.
  • 2+ years in a tech lead role and 5+ years of experience in ML platform engineering.

Responsibilities

  • Design, build, and maintain scalable ML platforms.
  • Develop standardized ML workflows using MLflow.
  • Implement robust CI/CD pipelines for ML and genAI.

Skills

Python programming
Experience with ML frameworks (TensorFlow, PyTorch)
Cloud infrastructure (AWS, GCP)
MLOps practices
Real-time inference pipeline design

Education

Bachelor’s degree in Computer Science or related field

Tools

MLflow
Docker
Kubernetes
TensorFlow Serving

Company Description

Since launching in Kuwait in 2004, talabat, the leading on-demand food and Q-commerce app for everyday deliveries, has been offering convenience and reliability to its customers. talabat’s local roots run deep, offering a real understanding of the needs of the communities we serve in eight countries across the region.

We harness innovative technology and knowledge to simplify everyday life for our customers, optimize operations for our restaurants and local shops, and provide our riders with reliable earning opportunities daily.

Here at talabat, we are building a high-performance culture through an engaged workforce and growing talent density. We're all about keeping it real and making a difference. Our 6,000+ strong talabaty are on an awesome mission to spread positive vibes. We are proud to be a multiple Great Place to Work award winner.

Job Description

Summary

As the leading delivery platform in the region, we have a unique responsibility and opportunity to positively impact millions of customers, restaurant partners, and riders. To achieve our mission, we must scale and continuously evolve our machine learning capabilities, including cutting-edge Generative AI (genAI) initiatives. This demands robust, efficient, and scalable ML platforms that empower our teams to rapidly develop, deploy, and operate intelligent systems.

As an ML Platform Engineer, your mission is to design, build, and enhance the infrastructure and tooling that accelerates the development, deployment, and monitoring of traditional ML and genAI models at scale. You’ll collaborate closely with data scientists, ML engineers, genAI specialists, and product teams to deliver seamless ML workflows—from experimentation to production serving—ensuring operational excellence across our ML and genAI systems.

Responsibilities
  • Design, build, and maintain scalable, reusable, and reliable ML platforms and tooling that support the entire ML lifecycle, including data ingestion, model training, evaluation, deployment, and monitoring for both traditional and generative AI models.
  • Develop standardized ML workflows and templates using MLflow and other platforms, enabling rapid experimentation and deployment cycles.
  • Implement robust CI/CD pipelines, Docker containerization, model registries, and experiment tracking to support reproducibility, scalability, and governance in ML and genAI.
  • Collaborate closely with genAI experts to integrate and optimize genAI technologies, including transformers, embeddings, vector databases (e.g., Pinecone, Redis, Weaviate), and real-time retrieval-augmented generation (RAG) systems.
  • Automate and streamline ML and genAI model training, inference, deployment, and versioning workflows, ensuring consistency, reliability, and adherence to industry best practices.
  • Ensure reliability, observability, and scalability of production ML and genAI workloads by implementing comprehensive monitoring, alerting, and continuous performance evaluation.
  • Integrate infrastructure components such as real-time model serving frameworks (e.g., TensorFlow Serving, NVIDIA Triton, Seldon), Kubernetes orchestration, and cloud solutions (AWS/GCP) for robust production environments.
  • Drive infrastructure optimization for generative AI use-cases, including efficient inference techniques (batching, caching, quantization), fine-tuning, prompt management, and model updates at scale.
  • Partner with data engineering, product, infrastructure, and genAI teams to align ML platform initiatives with broader company goals, infrastructure strategy, and innovation roadmap.
  • Contribute actively to internal documentation, onboarding, and training programs, promoting platform adoption and continuous improvement.
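To make the retrieval-augmented generation (RAG) responsibility above more concrete, here is a minimal sketch of the retrieval step. This is an illustration only, not talabat's implementation: a production system would use a vector database such as Pinecone, Redis, or Weaviate and embeddings from a real model, whereas here hypothetical 3-dimensional vectors and a brute-force cosine-similarity search stand in for both.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=2):
    """Return the top_k documents most similar to the query embedding.

    `index` maps document text -> embedding. A production system would
    delegate this search to a vector database rather than scanning.
    """
    scored = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:top_k]]

# Toy "embeddings" (hypothetical vectors, not from a real model).
index = {
    "Riders earn per completed delivery.": [0.9, 0.1, 0.0],
    "Restaurants manage menus in the partner app.": [0.1, 0.9, 0.0],
    "Customers can track orders in real time.": [0.2, 0.2, 0.9],
}

# Pretend embedding of a question like "How do riders get paid?"
query = [0.85, 0.2, 0.05]
context = retrieve(query, index, top_k=1)
prompt = f"Answer using this context: {context[0]}"
```

The retrieved passage is then stitched into the generation prompt, which is the core of any RAG pipeline regardless of which vector store or model sits behind it.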
Requirements

Technical Experience
  • Strong software engineering background with experience in building distributed systems or platforms designed for machine learning and AI workloads.
  • Expert‑level proficiency in Python and familiarity with ML frameworks (TensorFlow, PyTorch), infrastructure tooling (MLflow, Kubeflow, Ray), and popular APIs (Hugging Face, OpenAI, LangChain).
  • Experience implementing modern MLOps practices, including model lifecycle management, CI/CD, Docker, Kubernetes, model registries, and infrastructure‑as‑code tools (Terraform, Helm).
  • Demonstrated experience working with cloud infrastructure, ideally AWS or GCP, including Kubernetes clusters (GKE/EKS), serverless architectures, and managed ML services (e.g., Vertex AI, SageMaker).
  • Proven experience with generative AI technologies: transformers, embeddings, prompt engineering strategies, fine‑tuning vs. prompt‑tuning, vector databases, and retrieval‑augmented generation (RAG) systems.
  • Experience designing and maintaining real‑time inference pipelines, including integrations with feature stores, streaming data platforms (Kafka, Kinesis), and observability platforms.
  • Familiarity with SQL and data warehouse modeling; capable of managing complex data queries, joins, aggregations, and transformations.
  • Solid understanding of ML monitoring, including identifying model drift, decay, latency optimization, cost management, and scaling API‑based genAI applications efficiently.
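As a concrete illustration of the model-drift monitoring mentioned in the last point, the sketch below compares a live window of model scores against a reference (validation-time) distribution using a simple z-score on the mean. The threshold and data are hypothetical; real ML monitoring typically layers statistical tests such as the population stability index or a Kolmogorov-Smirnov test on top of an observability platform.

```python
import statistics

def drift_alert(reference, live, z_threshold=3.0):
    """Flag drift when the live mean deviates from the reference mean
    by more than z_threshold reference standard deviations.

    A deliberately simple stand-in for production drift tests
    (PSI, KS test, etc.); z_threshold=3.0 is a hypothetical setting.
    """
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.pstdev(reference)
    if ref_std == 0:
        return statistics.fmean(live) != ref_mean
    z = abs(statistics.fmean(live) - ref_mean) / ref_std
    return z > z_threshold

# Reference: model scores observed during validation.
reference_scores = [0.52, 0.48, 0.50, 0.51, 0.49, 0.50, 0.53, 0.47]

# Live window 1: similar distribution -> no alert.
stable_window = [0.50, 0.49, 0.51, 0.52, 0.48]

# Live window 2: scores have shifted sharply -> alert.
drifted_window = [0.80, 0.82, 0.79, 0.81, 0.83]

print(drift_alert(reference_scores, stable_window))   # False
print(drift_alert(reference_scores, drifted_window))  # True
```

In practice a check like this would run on a schedule against recent inference logs and feed an alerting system, which is where the monitoring and alerting responsibilities above come together.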
Qualifications
  • Bachelor’s degree in Computer Science, Engineering, or a related field; advanced degree is a plus.
  • 2+ years in a tech lead role and 5+ years of experience in ML platform engineering, ML infrastructure, generative AI, or closely related roles.
  • Proven track record of successfully building and operating ML infrastructure at scale, ideally supporting generative AI use‑cases and complex inference scenarios.
  • Strategic mindset with strong problem‑solving skills and effective technical decision‑making abilities.
  • Excellent communication and collaboration skills, comfortable working cross‑functionally across diverse teams and stakeholders.
  • Strong sense of ownership, accountability, pragmatism, and proactive bias for action.