
Senior AI Engineer

MCS Group | Your Specialist Recruitment Consultancy

Leeds

Hybrid

GBP 100,000 - 125,000

Full time

Job summary

A global technology company is seeking a Senior AI Engineer based in the UK to help integrate AI into its platform. The ideal candidate will have over 5 years of experience, strong Python skills, and familiarity with cloud ML platforms. This role offers the opportunity to design and implement AI-powered solutions directly impacting real-world processes. Flexible work options are available, with a vibrant team culture and the chance to lead key initiatives in a cloud-first environment.

Benefits

Flexible work options
Competitive salary
Extended benefits package

Qualifications

  • 5+ years' experience in software, ML, or AI engineering roles.
  • Strong Python skills with hands-on experience in AI/ML production.
  • Experience with AWS SageMaker/Bedrock or similar cloud ML platforms.

Responsibilities

  • Design, build, and own LLM-based features and GenAI workflows.
  • Use frameworks like LangChain to deliver applied AI solutions.
  • Implement MLOps practices including CI/CD and performance optimization.

Skills

Python
MLOps
Collaboration
Cloud platforms

Tools

LangChain
PyTorch
TensorFlow
AWS SageMaker
GCP Vertex AI

Job description

Senior AI Engineer

Location: Belfast or Remote (UK-based)

Type: Permanent | Hybrid (Belfast) or Fully Remote (UK)

Salary: Competitive base + extended benefits package

About the Opportunity

If you're the kind of engineer who enjoys turning cutting-edge AI into real, production features that people actually use, this is a role worth serious consideration.

You'll be joining a global technology company whose platform helps major manufacturers design and build products smarter and faster. The platform simulates real-world manufacturing processes before anything is made, giving customers early insight into cost, efficiency, and design trade-offs.

The next phase of the platform is a major one: embedding AI and Generative AI directly into the product, not as experiments or side projects, but as core capabilities. This role sits right at the start of that journey and will help define how AI is built and scaled across the platform.

What You'll Be Doing

This is a hands-on, ownership-driven engineering role. You'll design, build, deploy, and own AI-powered capabilities - including intelligent assistants, automated insights, and adaptive recommendations - that run in production and are used by real customers.

You'll work closely with Product, Data, and Engineering teams to shape both the technical approach and the broader AI roadmap. Day to day, that includes:

  • Designing, building, and owning LLM-based features and GenAI workflows in production
  • Using frameworks such as LangChain, PyTorch, and TensorFlow to deliver applied AI solutions
  • Working with AWS Bedrock, SageMaker, or GCP Vertex AI to build scalable cloud-based systems
  • Implementing strong MLOps/LLMOps practices, including CI/CD, monitoring, cost control, and performance optimisation
  • Partnering with Data Engineers to ensure the data pipelines feeding models are reliable, scalable, and well-structured
  • Staying close to emerging GenAI patterns, model safety considerations, and new tooling - and helping the team adopt what's genuinely useful

What You'll Bring

This role suits someone comfortable operating across the full AI delivery lifecycle - from design through to deployment and long-term ownership. You'll likely have:

  • 5+ years' experience in software, ML, or AI engineering roles
  • Strong Python skills, with hands-on experience shipping AI/ML or LLM-driven systems into production
  • Experience working with cloud ML platforms (AWS SageMaker/Bedrock, Vertex AI, or similar)
  • Familiarity with LangChain or comparable orchestration frameworks for LLM pipelines
  • A solid understanding of MLOps and what it takes to run models reliably in production
  • A collaborative mindset - comfortable working closely with Product and Data teams, and mentoring others when needed
  • A practical, curious approach - you enjoy experimenting, but you care most about building things that work

Why This Role?

  • You'll play a key role in defining how AI and GenAI are embedded into a global, customer-facing product - this is not an innovation lab or side initiative
  • You'll work with a modern, cloud-first stack and focus on applied AI, not theoretical research
  • You'll join a team small enough to move quickly, with the backing and scale of a global technology business
  • Flexible working with UK-based remote options for the right person

How They Work

The main engineering hub is in Belfast, but the team works flexibly across the UK. What matters most is clear communication, ownership, and a shared interest in using AI to solve real-world problems.

How to Apply

To apply for this role, submit your CV via the application form provided. Alternatively, to speak in absolute confidence about this opportunity, please contact Jack Tyrrell at j.tyrrell@mcsgroup.jobs.

Disclaimer

If you have a disability which means you require assistance at any stage of the recruitment process, please contact us directly to discuss. We are committed to providing equality of opportunity to all.

Please note that we are currently receiving an exceptionally high number of applications and will be unable to shortlist candidates who do not meet the specific requirements for this role. Due to the high volume of applications, we may be unable to provide individual feedback. We thank you in advance for your understanding.
