Senior Cybersecurity Engineer – Generative AI

Synopsys

Morrisville

On-site

CAD 80,000 - 120,000

Full time

Today

Job summary

A leading technology firm is seeking a professional to design and implement advanced security controls for AI/ML systems. The role involves conducting threat modeling, vulnerability assessments, and responding to AI-specific security incidents. Candidates should have an advanced degree in Computer Science or Cybersecurity and relevant industry certifications. Strong product security knowledge and programming skills are essential. This position offers an exciting opportunity to enhance security measures in cutting-edge AI technologies.

Qualifications

  • Advanced degree in Computer Science, Cybersecurity, AI, or related field.
  • Relevant industry certifications such as CISSP, CCSP, or CEH.
  • Strong knowledge of product security concepts and open-source software security.
  • Deep understanding of security architecture and incident response for AI/ML environments.
  • Hands-on experience with AI/ML pipelines and security automation.

Responsibilities

  • Design and implement advanced security controls for AI/ML systems.
  • Conduct thorough threat modeling and vulnerability assessments.
  • Integrate security into every stage of the GenAI lifecycle.
  • Monitor and respond to AI-specific security incidents.
  • Collaborate with teams to mitigate security risks in real time.

Skills

  • Knowledge of product security concepts
  • Hands-on experience with machine learning algorithms
  • Expertise in AI-specific threats
  • Proficiency in programming languages such as Python
  • Familiarity with cloud security and containerized environments
  • Exceptional verbal and written communication skills

Education

Advanced degree in Computer Science, Cybersecurity, or related field

Tools

  • AI security tools and frameworks
  • Vulnerability scanning tools

Job description

Opening

This role is a key part of Synopsys' efforts to protect its cutting-edge AI technologies. The successful candidate will design and implement advanced security controls for AI/ML systems, focusing on threats unique to generative AI such as adversarial examples, prompt injections, and jailbreaks.

What you’ll do

  • Design and implement advanced security controls for AI/ML systems, focusing on threats unique to generative AI such as adversarial examples, prompt injections, and jailbreaks.
  • Conduct thorough threat modeling, vulnerability assessments, and red teaming exercises tailored to AI models, data pipelines, and supporting infrastructure.
  • Integrate security into every stage of the GenAI lifecycle, from data ingestion and model training to deployment and inference.
  • Monitor, detect, and respond to AI‑specific security incidents including model inversion, membership inference, and supply chain vulnerabilities.
  • Collaborate closely with AI architecture, research, and engineering teams to evaluate new features and mitigate security risks in real time.
  • Research and track emerging AI threats, contributing to the development of internal security tools, policies, and governance for responsible AI use.
  • Assist in shaping the enterprise AI strategy, ensuring robust security alignment with business objectives.
  • Create and document reusable AI security patterns, and develop AI‑driven use cases to strengthen cybersecurity operations.
  • Evaluate, recommend, and implement best‑in‑class AI security tools and frameworks for Synopsys' AI infrastructure.
  • Drive comprehensive threat modeling for AI/ML systems, addressing adversarial risks and emerging attack vectors.
What you need

  • Advanced degree in Computer Science, Cybersecurity, Artificial Intelligence, or a related field.
  • Relevant industry certifications such as CISSP, CCSP, CEH, or specialized AI/ML security credentials.
  • Strong knowledge of product security concepts—data security and privacy, security engineering, open‑source software security, and security assurance.
  • Deep understanding of security architecture, threat modeling, secure coding practices, and incident response for AI/ML environments.
  • Hands‑on experience with machine learning algorithms, model training, data preprocessing, and end‑to‑end AI/ML pipelines.
  • Expertise in AI‑specific threats: adversarial machine learning, model inversion, data poisoning, and evasion attacks.
  • Proficiency in programming languages such as Python, with experience in scripting for vulnerability scanning and security automation.
  • Strong familiarity with cloud security (AWS, Azure, GCP) and containerized environments (Kubernetes, Docker).
  • Experience with security frameworks and standards relevant to AI (e.g., OWASP Top 10 for LLMs, NIST AI Risk Management Framework).
  • Exceptional verbal and written communication skills to convey technical concepts to diverse audiences.