Job Search and Career Advice Platform

Enable job alerts via email!

Artificial Intelligence Safety & Compliance Engineer

Huxley Associates

City Of London

Hybrid

GBP 60,000 - 90,000

Full time

Yesterday
Be an early applicant

Generate a tailored resume in minutes

Land an interview and earn more. Learn more

Job summary

A leading recruitment firm is looking for an Artificial Intelligence Safety & Compliance Engineer to ensure the compliance and integrity of AI systems in their business projects. The successful candidate will work on aligning with global AI safety regulations and implementing automated compliance checks. This permanent role offers a competitive salary and flexibility with hybrid working arrangements. Ideal for those with strong experience in AI safety tools and frameworks.

Qualifications

  • Extensive experience with AI safety tools.
  • Familiarity with Azure AI Foundry.
  • Experience in prompt injection testing.

Responsibilities

  • Maintain the safety and compliance of AI systems.
  • Align with global AI safety regulations.
  • Implement automated compliance checks.

Skills

Experience with AI safety tools
Knowledge of compliance testing frameworks
Understanding of bias and drift testing

Tools

Azure AI Foundry
Responsible AI Toolbox
Deepchecks
Job description
Artificial Intelligence Safety & Compliance Engineer

£75‑95K (salary £60,000‑£90,000 depending on location and experience)

Role Details

Location: London or Glasgow (expect 1‑2 days per week in office, with hybrid flexibility). Permanent role.

Technical stack: AI safety tools such as Responsible AI Toolbox and Deepchecks, Azure AI Foundry and LLM compliance testing frameworks.

Responsibilities
  • Maintain the safety, compliance, and integrity of all AI systems deployed through business AI projects.
  • Lead efforts in aligning with global AI safety regulations (e.g., GDPR, EU AI Act).
  • Implement automated compliance checks and deliver secure AI experiences through continuous monitoring and audit logging.
  • Lead red‑team/blue‑team workflows using Azure AI Foundry, model risk analysis, and proactive testing for bias, drift, and prompt injection vulnerabilities.
Requirements
  • Extensive experience with AI safety tools (e.g., Responsible AI Toolbox, Deepchecks).
  • Familiarity with Azure AI Foundry and LLM compliance testing frameworks.
  • Experience in prompt injection testing and AI behaviour anomaly detection.
  • Strong knowledge of AI safety and compliance testing frameworks.

This is a great role and an awesome time to join this business as they expand their artificial intelligence team. For more information and the chance to be considered, please send a CV.

Get your free, confidential resume review.
or drag and drop a PDF, DOC, DOCX, ODT, or PAGES file up to 5MB.