Job Search and Career Advice Platform

Enable job alerts via email!

AI Security Lead Architect

ARYAN SOLUTIONS PTE. LTD.

Singapore

On-site

SGD 120,000 - 160,000

Full time

Today
Be an early applicant

Generate a tailored resume in minutes

Land an interview and earn more. Learn more

Job summary

An innovative technology firm in Singapore is seeking an AI Security Lead Architect to oversee the security architecture for AI systems across cloud and on-prem environments. This role is crucial in ensuring that AI platforms comply with enterprise security standards and regulatory frameworks while enabling effective production use. The ideal candidate will have extensive experience in defining secure-by-design standards and conducting thorough AI-specific risk assessments.

Qualifications

  • 5+ years of experience in AI security and governance architecture.
  • Proven ability to define secure architectures for AI systems.
  • Experience in regulatory compliance and emerging AI frameworks.

Responsibilities

  • Define secure architecture standards for AI and ML platforms.
  • Conduct AI-specific risk assessments including model misuse.
  • Embed security controls across data pipelines and model training.

Skills

AI/ML security architectures
Data privacy techniques
Risk assessment methodologies
Penetration testing

Tools

CleverHans
Burp Suite
IBM Adversarial Robustness Toolbox
AWS SageMaker
Job description
Overview

A large enterprise is scaling AI and GenAI across core platforms and requires a AI Security Lead Architect to define how these systems are secured, governed, and trusted.

This role owns the security and governance architecture for AI systems across cloud and on-prem environments, ensuring AI platforms meet enterprise security standards, regulatory obligations, and emerging AI risk frameworks while enabling real production use.

Role scope
  • Define secure-by-design architecture standards for AI and ML platforms
  • Embed security, privacy, and compliance controls across data pipelines, model training, deployment, and access layers
  • Translate regulatory and governance requirements into enforceable technical controls
  • Establish Responsible AI and Explainable AI practices covering fairness, transparency, and auditability
  • Lead AI-specific risk assessments including model misuse, bias, data leakage, adversarial attacks, and LLM vulnerabilities
  • Review and approve AI architectures, platforms, and third-party AI integrations
  • Integrate privacy-preserving techniques such as anonymisation, encryption, tokenisation, and secure logging
  • Partner with cybersecurity teams to ensure AI systems undergo VAPT and continuous risk monitoring
  • Evaluate and select AI security and governance tooling across cloud platforms
  • Act as the authority for AI security and governance standards across the organisation
Required experience

Artificial intelligence / machine learning security experience that aligns with the following categories and examples of relevant capabilities. The list below is indicative of the stack and areas of focus.

AI / ML attack & defence
  • CleverHans
  • Foolbox
  • TextAttack
  • IBM Adversarial Robustness Toolbox (ART)
  • OpenAI Evals
  • Red teaming LLMs with custom jailbreak corpora
  • Prompt injection testing against tool-calling / function-calling LLMs
  • Training data poisoning detection techniques
  • Model inversion and membership inference testing
OWASP / security standards
  • OWASP Top 10 for Machine Learning
  • OWASP LLM Top 10
  • STRIDE threat modeling applied to ML pipelines
  • AI supply-chain risk analysis (models, datasets, embeddings)
Pen testing / offensive
  • Burp Suite against inference APIs
  • API fuzzing of model endpoints
  • Adversarial payload crafting for LLM tools
  • Abuse simulation for agent workflows
  • Model misuse scenarios and exploit reproduction
Cloud / platform
  • AWS SageMaker IAM, VPC isolation, private endpoints
  • Azure ML private links, managed identities
  • GCP Vertex AI private service connect
  • KMS / HSM integration for model and data protection
  • Secure artifact storage for models and embeddings
Kubernetes / runtime
  • Kubernetes RBAC hardening
  • NetworkPolicies for model isolation
  • Secure model serving (KServe, Seldon)
  • Admission controllers for AI workloads
  • Secrets via Vault / cloud native KMS
Pipeline & ops
  • Secure feature stores
  • Model registry access control
  • CI/CD hardening for ML pipelines
  • Canary deployments for models
  • Rollback strategies for compromised models
Data & privacy
  • Dataset lineage and provenance tracking
  • Differential privacy implementation
  • PII leakage testing in LLM outputs
  • Tokenisation and masking at inference time
  • Secure embedding storage and access control
Monitoring
  • Prompt and output logging with PII redaction
  • Drift detection tied to security events
  • Abuse pattern detection in inference traffic
  • Explainability tooling used for audit, not demos

Singapore based. Confidential role.

Get your free, confidential resume review.
or drag and drop a PDF, DOC, DOCX, ODT, or PAGES file up to 5MB.