Overview
A large enterprise is scaling AI and GenAI across core platforms and requires a AI Security Lead Architect to define how these systems are secured, governed, and trusted.
This role owns the security and governance architecture for AI systems across cloud and on-prem environments, ensuring AI platforms meet enterprise security standards, regulatory obligations, and emerging AI risk frameworks while enabling real production use.
Role scope
- Define secure-by-design architecture standards for AI and ML platforms
- Embed security, privacy, and compliance controls across data pipelines, model training, deployment, and access layers
- Translate regulatory and governance requirements into enforceable technical controls
- Establish Responsible AI and Explainable AI practices covering fairness, transparency, and auditability
- Lead AI-specific risk assessments including model misuse, bias, data leakage, adversarial attacks, and LLM vulnerabilities
- Review and approve AI architectures, platforms, and third-party AI integrations
- Integrate privacy-preserving techniques such as anonymisation, encryption, tokenisation, and secure logging
- Partner with cybersecurity teams to ensure AI systems undergo VAPT and continuous risk monitoring
- Evaluate and select AI security and governance tooling across cloud platforms
- Act as the authority for AI security and governance standards across the organisation
Required experience
Artificial intelligence / machine learning security experience that aligns with the following categories and examples of relevant capabilities. The list below is indicative of the stack and areas of focus.
AI / ML attack & defence
- CleverHans
- Foolbox
- TextAttack
- IBM Adversarial Robustness Toolbox (ART)
- OpenAI Evals
- Red teaming LLMs with custom jailbreak corpora
- Prompt injection testing against tool-calling / function-calling LLMs
- Training data poisoning detection techniques
- Model inversion and membership inference testing
OWASP / security standards
- OWASP Top 10 for Machine Learning
- OWASP LLM Top 10
- STRIDE threat modeling applied to ML pipelines
- AI supply-chain risk analysis (models, datasets, embeddings)
Pen testing / offensive
- Burp Suite against inference APIs
- API fuzzing of model endpoints
- Adversarial payload crafting for LLM tools
- Abuse simulation for agent workflows
- Model misuse scenarios and exploit reproduction
Cloud / platform
- AWS SageMaker IAM, VPC isolation, private endpoints
- Azure ML private links, managed identities
- GCP Vertex AI private service connect
- KMS / HSM integration for model and data protection
- Secure artifact storage for models and embeddings
Kubernetes / runtime
- Kubernetes RBAC hardening
- NetworkPolicies for model isolation
- Secure model serving (KServe, Seldon)
- Admission controllers for AI workloads
- Secrets via Vault / cloud native KMS
Pipeline & ops
- Secure feature stores
- Model registry access control
- CI/CD hardening for ML pipelines
- Canary deployments for models
- Rollback strategies for compromised models
Data & privacy
- Dataset lineage and provenance tracking
- Differential privacy implementation
- PII leakage testing in LLM outputs
- Tokenisation and masking at inference time
- Secure embedding storage and access control
Monitoring
- Prompt and output logging with PII redaction
- Drift detection tied to security events
- Abuse pattern detection in inference traffic
- Explainability tooling used for audit, not demos
Singapore based. Confidential role.