Lead Consultant-GenAI, LLMs, Azure-UK

Infosys

Greater London

On-site

GBP 80,000 - 100,000

Full time


Job summary

A leading global digital services provider is seeking a Lead Consultant (ML Engineer) with over 10 years of experience to design and implement ML solutions using Azure technologies. This role focuses on operationalizing AI and ML models, ensuring smooth deployments and monitoring. Candidates should have a strong programming background in Python and comprehensive experience with Azure AI/ML services. Competitive compensation is offered for this position based in the UK.

Benefits

Competitive salary
Bonus opportunities

Qualifications

  • 10+ years of experience in ML engineering.
  • Hands-on experience in Azure AI/ML services.
  • Prior experience in LLM deployment or generative AI projects.

Responsibilities

  • Design, build, and manage ML & LLM pipelines using Azure.
  • Implement CI/CD for ML/LLM workflows.
  • Establish monitoring systems for deployed ML models.

Skills

ML engineering experience
Azure AI/ML services
Python programming
CI/CD practices
Collaboration skills

Tools

Azure ML
Azure AI Foundry
Azure DevOps
Docker
SQL

Job Description

Role – Lead Consultant (ML Engineer)

Technology – Gen AI, LLMs, Azure

Location – UK

Business Unit – DNA Analytics

Compensation – Competitive (including bonus)

We are seeking a Lead Consultant to design, develop, and deploy solutions with GenAI, LLMs, and ML models, built on Azure OpenAI and other services across the Azure AI/ML landscape.

Your Role

We are looking for a skilled Lead Consultant with Azure expertise to operationalize AI, ML, and Large Language Model (LLM) solutions at scale. You will be responsible for designing, implementing, and maintaining the end-to-end machine learning pipelines and infrastructure that enable seamless development, deployment, and monitoring of AI and ML models, including LLMs, in production. The scope spans data ingestion, model and LLM deployment, monitoring, and governance, built on Azure's AI ecosystem including Azure AI Foundry, Azure ML, and AKS.
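
As a purely illustrative aside (not part of the posting), a minimal sketch of working with that ecosystem from Python: connecting to an Azure Machine Learning workspace with the azure-ai-ml SDK and listing registered models. The subscription, resource group, and workspace names are placeholders, and the exact client surface should be checked against the current SDK documentation.

    # Hypothetical sketch: connect to an Azure ML workspace and list registered models.
    # Assumes azure-ai-ml and azure-identity are installed and that
    # DefaultAzureCredential can authenticate (e.g., after `az login`).
    from azure.identity import DefaultAzureCredential
    from azure.ai.ml import MLClient

    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",      # placeholder
        resource_group_name="<resource-group>",   # placeholder
        workspace_name="<workspace-name>",        # placeholder
    )

    for model in ml_client.models.list():
        print(model.name, model.version)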

Responsibilities
  • Design, build, and manage ML & LLM pipelines using Azure Machine Learning (AML) and Azure AI Foundry, working with models such as GPT, Phi, LLaMA-based, and Hugging Face models served through Azure AI services.
  • Implement and manage Azure infrastructure necessary for ML workloads (e.g., Azure Machine Learning Workspaces, Compute, Data Stores) using ARM templates or Terraform.
  • Implement CI/CD for ML/LLM workflows with Azure DevOps or GitHub Actions.
  • Manage real-time and batch deployments on Azure Kubernetes Service (AKS) or Azure Container Instances, and handle model versions, registrations, and lifecycle within Azure Machine Learning. This includes working with Azure AI Foundry for generative AI applications.
  • Automate infrastructure with Terraform, Bicep, or ARM templates.
  • Set up a model registry, experiment tracking, versioning, and reproducibility (see the tracking sketch after this list).
  • Integrate Azure Data Factory, Synapse, or Databricks for data ingestion and preprocessing.
  • Establish robust monitoring, logging, and alerting systems for deployed ML models and LLMs to track performance, detect data drift, concept drift, and operational issues, ensuring continuous model health (see the drift-check sketch after this list).
  • Ensure security, compliance, and responsible AI practices in LLM deployments.
  • Collaborate with cross-functional teams to operationalize AI/ML/LLM use cases at enterprise scale.
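
For the experiment-tracking and registry item above, a minimal, hypothetical sketch using MLflow (listed in this posting only as a nice-to-have; an Azure ML workspace can act as the MLflow tracking backend). The tracking URI, experiment name, dataset, and registered model name are all invented placeholders.

    # Hypothetical sketch: track a training run and register the model with MLflow.
    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    mlflow.set_tracking_uri("<tracking-uri>")    # placeholder, e.g. an Azure ML workspace URI
    mlflow.set_experiment("demo-experiment")     # hypothetical experiment name

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        mlflow.log_param("n_estimators", 100)
        mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
        # Logging with registered_model_name also creates/updates a registry entry.
        mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")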
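
For the monitoring item, a toy drift check: compare a reference (training-time) feature distribution against recent production values with a two-sample Kolmogorov-Smirnov test. The posting does not prescribe a tool; a production setup would more likely use Azure ML's data-drift monitoring or a comparable service, and the data and 0.01 threshold below are invented for illustration.

    # Toy sketch: flag distribution drift for a single numeric feature.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # stand-in for training data
    production = rng.normal(loc=0.3, scale=1.0, size=1_000)  # stand-in for recent traffic

    statistic, p_value = ks_2samp(reference, production)
    if p_value < 0.01:  # alerting threshold is a tunable assumption
        print(f"Possible drift: KS={statistic:.3f}, p={p_value:.4f} -> alert and consider retraining")
    else:
        print(f"No significant drift: KS={statistic:.3f}, p={p_value:.4f}")
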
Required
  • 10+ years of experience in ML engineering, with hands‑on experience in Azure AI/ML services.
  • Prior experience in LLM deployment or generative AI projects is strongly preferred.

Preferred
  • Strong programming experience in Python (ML frameworks + Azure SDKs).
  • Hands‑on experience with Azure ML, Azure AI Foundry, Azure OpenAI, AKS, Databricks, ADF, Synapse, Azure Storage.
  • Experience deploying and monitoring LLMs in production (fine‑tuning, prompt engineering integration, inference optimization).
  • Knowledge of CI/CD for ML/LLMs with Azure DevOps / GitHub Actions.
  • Proficiency with Docker and containerized deployments.
  • Familiarity with MLOps best practices – version control, reproducibility, monitoring.
  • Good understanding of Responsible AI, model governance, and compliance frameworks.
  • Solid SQL and data pipeline skills.
  • Strong problem‑solving and collaboration skills.
  • Nice‑to‑Have Skills:
    • Experience with LangChain, Semantic Kernel, or Retrieval‑Augmented Generation (RAG) patterns in Azure (see the retrieval sketch after this list).
    • Knowledge of vector databases (Azure Cognitive Search, Pinecone, Weaviate, etc.) for LLM applications.
    • Exposure to real‑time streaming & event‑driven architectures (Event Hub, Kafka).
    • Familiarity with KubeFlow, MLflow, or other MLOps orchestration frameworks.
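
To make the RAG pattern concrete, a deliberately framework-free sketch of the retrieval step: embed documents and a query, rank by cosine similarity, and build a prompt from the top matches. The embed() function is a hash-based stand-in rather than a real embedding model; an actual system would use Azure OpenAI embeddings with a vector index such as Azure Cognitive Search, or LangChain / Semantic Kernel abstractions.

    # Framework-free sketch of RAG retrieval; embed() is a hypothetical stand-in.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder embedding: hashed bag-of-words, NOT a real model.
        vec = np.zeros(128)
        for token in text.lower().split():
            vec[hash(token) % 128] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    documents = [
        "Azure Machine Learning manages model training and deployment.",
        "AKS hosts containerized workloads, including model inference services.",
        "Azure Data Factory orchestrates data ingestion pipelines.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    query = "Where do deployed models run?"
    scores = doc_vectors @ embed(query)            # cosine similarity (unit-norm vectors)
    top_k = np.argsort(scores)[::-1][:2]           # indices of the two best matches

    context = "\n".join(documents[i] for i in top_k)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    print(prompt)  # this prompt would then go to an LLM, e.g. via Azure OpenAI
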
Personal
  • High analytical skills
  • High customer orientation
  • High quality awareness

About Us

Infosys is a global leader in next‑generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI‑powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always‑on learning agenda drives our clients' continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.

All aspects of employment at Infosys are based on merit, competence and performance. We are committed to embracing diversity and creating an inclusive environment for all employees. Infosys is proud to be an equal opportunity employer.
