Job Search and Career Advice Platform

Lead AI Security Architect (Ref 26289)

Jobline

Singapore

On-site

SGD 90,000 - 130,000

Full time

Today

Job summary

A leading tech company in Singapore seeks an experienced AI Security Architect to oversee AI platform security and compliance. Responsibilities include collaborating with Group Security, implementing secure architectural principles, evaluating security tools, and driving ethical AI practices. Candidates should have a degree in Cybersecurity or a related field, alongside significant experience in cybersecurity and AI systems. This position offers an opportunity to shape security practices in a rapidly evolving domain.

Qualifications

  • More than 5 years of experience in cybersecurity, data governance, or secure systems architecture.
  • Proven expertise in implementing cybersecurity governance on AI platforms.
  • Strong understanding of AI/ML pipeline components and associated risks.

Responsibilities

  • Collaborate with Group Security and Data Governance teams on AI platforms.
  • Define secure-by-design architectural principles for AI/ML platforms.
  • Oversee Responsible AI (RAI) practices to ensure ethical AI deployments.

Skills

Cybersecurity
Data governance
AI/ML systems
Cloud platforms
Threat modeling

Education

Bachelor’s or Master’s degree in Cybersecurity, Engineering, AI/ML, or related field

Tools

AWS SageMaker
Azure ML
Google Vertex AI

Job description

  • Collaborate with Group Security and Data Governance teams to align AI platform designs with enterprise security and compliance policies.
  • Define and implement secure-by-design architectural principles for AI/ML platforms, covering data pipelines, model deployment, and access layers.
  • Implement and oversee Responsible AI (RAI) practices to ensure AI systems are designed and deployed ethically, with fairness, transparency, and compliance.
  • Implement and oversee Explainable AI (XAI) practices to ensure AI model decisions are transparent, interpretable, and trustworthy through integrated explainability features.
  • Ensure compliance with regulatory frameworks and AI governance standards.
  • Ensure secure and compliant architecture in collaboration with cybersecurity and governance teams, embedding PDPA and enterprise policy requirements into designs.
  • Translate governance requirements into technical specifications and enforceable controls across cloud and on-premise AI environments.
  • Integrate privacy-preserving mechanisms such as data anonymization, encryption, tokenization, and secure logging into AI workflows.
  • Evaluate and recommend AI security and governance tools (e.g. AWS Guardrails, Azure Responsible AI, IBM Watson Governance) for adoption.
  • Conduct AI-specific risk assessments, including model misuse, bias, data leakage, adversarial attacks, and LLM prompt vulnerabilities.
  • Review and approve the integration of third-party AI services and open-source models from a security and compliance perspective.
  • Champion awareness of AI security and governance across AIDA by contributing to policies, best practices, and team enablement sessions.
  • Review and clear governance approvals related to architecture and solution design, with specific focus on AI security.
  • Collaborate with vendors and partners to review, evaluate, and select appropriate security solutions.
  • Ensure all AI solutions, including in-house and vendor-developed systems, undergo thorough testing and Vulnerability Assessment and Penetration Testing (VAPT) to safeguard security and reliability.

Requirements
  • Bachelor’s or Master’s degree in Cybersecurity, Engineering, AI/ML, or related field.
  • More than 5 years of experience in cybersecurity, data governance, or secure systems architecture, with at least 3 years focused on AI or cloud-based ML systems.
  • Proven expertise in implementing cybersecurity governance, data protection, and related controls on data or AI platforms.
  • Strong understanding of AI/ML pipeline components and their risks: model misuse, prompt injection, data leakage, adversarial inputs, bias, and limited explainability.
  • Proficient in implementing secure and compliant AI/ML systems on cloud platforms such as AWS SageMaker, Azure ML, and Google Vertex AI.
  • Experience with AIOps/LLMOps and DevSecOps practices, including secure CI/CD, RBAC, secrets management, and logging.
  • Familiarity with AI governance toolkits and regulatory trends.
  • Technical knowledge of data privacy controls (encryption, tokenization, data minimization) and security frameworks (e.g., Zero Trust, OWASP for ML).
  • Ability to perform threat modeling and security assessments for AI and LLM-based systems.
  • Strong cross-functional communication and collaboration skills, with the ability to influence both technical and policy-level decisions.
  • Strong stakeholder management skills across internal (IT, networks, business) and external (suppliers, government) parties.
  • Strong technical writing and presentation skills, with the ability to communicate complex concepts clearly to both technical and non-technical stakeholders.
  • Proactive, fast learner with a strong drive to stay current on emerging technologies and industry trends.
  • Proven experience working in a telco environment or in a security and governance role is a plus.
