Overview
We are seeking an experienced, hands-on MLOps / MLSecOps Engineer to join our AI engineering team. This role bridges machine learning, operations, and security, ensuring that our ML pipelines and deployed AI systems are secure, resilient, and trustworthy. You will work closely with AI engineers, software developers, platform teams, and security specialists to operationalise ML at scale with security, reliability, and governance built in.
Responsibilities
- Next Bound AI: Design and implement state-of-the-art Agentic AI architectures.
- Secure ML Lifecycle Management: Architect, implement, and manage secure ML pipelines from data ingestion to model deployment.
- MLOps / MLSecOps Integration: Automate training, testing, deployment, and monitoring with modern MLOps / MLSecOps tools.
- Security by Design: Embed security controls into ML and data workflows; ensure compliance with organisational standards.
- Vulnerability Management: Identify and mitigate risks across ML infrastructure (containers, data, and models).
- Model Security & Robustness: Safeguard against adversarial attacks and performance degradation from drift.
- Continuous Monitoring & Continuous Training: Develop observability pipelines that monitor deployed models for drift, anomalies, and performance degradation, and implement systems to support continuous retraining when models underperform.
- Cross-Functional Collaboration: Partner with AI engineers, software developers, and DevSecOps and Infrastructure teams to enhance developer experience and platform capabilities.
- Governance & Compliance: Support responsible, safe, and reliable AI adoption in line with organisational and regulatory requirements.
Job Requirements
- Bachelor’s or Master’s in Computer Science or related field.
- At least one year of experience in MLOps/MLSecOps.
- Hands-on experience with:
- ML lifecycle tools (e.g. ClearML, MLflow, Kubeflow, NVIDIA Triton, vLLM)
- Containerisation and orchestration tools (e.g. Kubernetes, Docker)
- CI/CD tools (e.g. GitLab CI, ArgoCD)
- Programming experience in Python and one other language (e.g. Go, Rust, C++, Java).
Preferred Skills
- Familiarity with Agentic AI architectures and workflows.
- Familiarity with ML optimisation techniques (e.g. quantisation, pruning, distillation).
- Familiarity with ML security threats (e.g. data poisoning, model extraction, adversarial attacks).
- Familiarity with ML monitoring & observability platforms.
- Experience with air-gapped or high-assurance environments.
Experience
2–8 years
Job Type
Full-Time
Qualification
Bachelor's degree or equivalent
Working Hours
Standard Hours
Programme Centre / Entity
DIGITAL HUB