Senior AI Product Security Researcher

Working Nomads

United Kingdom

Remote

GBP 80,000 - 100,000

Full time

2 days ago

Job summary

A leading software platform is seeking a Senior AI Product Security Researcher to conduct cutting-edge security research on AI-powered DevSecOps capabilities. The role involves identifying vulnerabilities, conducting penetration tests, and developing security frameworks. The ideal candidate has over 5 years of experience in security research and a strong proficiency in AI security practices. This unique opportunity allows for creative exploration and impactful contributions to a secure development environment.

Benefits

Competitive salary
Remote working flexibility
Access to cutting-edge AI systems

Qualifications

  • 5+ years of experience in security research or penetration testing.
  • Hands-on experience discovering vulnerabilities in AI systems.
  • Strong understanding of AI attack vectors.

Responsibilities

  • Identify and validate security vulnerabilities in AI systems.
  • Conduct penetration testing targeting AI platforms.
  • Research and assess emerging AI security threats.

Skills

Security research
Penetration testing
Exploit development
AI/ML security
Python
Analytical skills
Documentation skills

Education

Relevant security certifications (OSCP, OSCE)

Tools

Offensive security tools
AI testing frameworks

Job description

Overview

We are seeking a Senior AI Product Security Researcher to join our Security Platforms & Architecture Team to conduct cutting-edge security research on GitLab's AI-powered DevSecOps capabilities. As GitLab transforms software development through intelligent collaboration between developers and specialized AI agents, we need security researchers who can proactively identify and validate vulnerabilities before they impact our platform or customers.

In this role, you'll be at the forefront of AI security research, working with GitLab Duo Agent Platform, GitLab Duo Chat, and AI workflows that represent the future of human/AI collaborative development. You'll develop novel testing methodologies for AI agent security, conduct hands-on penetration testing of multi-agent orchestration systems, and translate emerging AI threats into actionable security improvements. Your research will directly influence how we build and secure the next generation of AI-powered DevSecOps tools, ensuring GitLab remains the most secure software factory platform on the market.

This position offers the unique opportunity to shape AI security practices in one of the world's largest DevSecOps platforms, working with engineering teams who are pushing the boundaries of what's possible with AI-assisted software development. You'll have access to cutting-edge AI systems and the freedom to explore creative attack scenarios while contributing to the security of millions of developers worldwide.

What You'll Do
  • Identify and validate security vulnerabilities in GitLab's AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios
  • Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques
  • Research emerging AI security threats and attack techniques to assess their potential impact on GitLab's AI-powered platform
  • Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation
  • Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies
  • Collaborate with AI engineering teams to validate security fixes through iterative testing and verification
  • Contribute to the development of AI security testing frameworks and automated validation tools
  • Partner with Security Architecture to inform architectural improvements based on research findings
  • Share knowledge and mentor team members on AI security testing techniques and vulnerability discovery

What You'll Bring
  • 5+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security
  • Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms
  • Strong understanding of AI attack vectors including prompt injection, agent manipulation, and workflow exploitation
  • Proficiency in Python with experience in AI frameworks and security testing tools
  • Experience with offensive security tools and vulnerability discovery methodologies
  • Ability to read and analyze code across multiple languages and codebases
  • Strong analytical and problem-solving skills with creative thinking about attack scenarios
  • Excellent written communication skills for documenting technical findings and creating security advisories
  • Ability to translate technical findings into clear risk assessments and remediation recommendations

Nice-to-have Qualifications
  • Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures
  • Published security research or conference presentations on AI security topics
  • Background in software engineering with distributed systems expertise
  • Security certifications such as OSCP, OSCE, GPEN, or similar
  • Experience with GitLab or similar DevSecOps platforms
  • Knowledge of AI agent communication protocols and multi-agent architectures

About the team

Security Researchers are part of our Security Platforms and Architecture team, which addresses complex security challenges facing GitLab and its customers to enable GitLab to be the most secure software factory platform on the market. Composed of Security Architecture and Security Research, we focus on systemic product security risks and work cross-functionally to mitigate them while maintaining Engineering's development velocity.

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

The base salary range for this role's listed level is currently for residents of listed locations only. Grade level and salary ranges are determined through interviews and a review of the applicant's education, experience, knowledge, skills, and abilities, equity with other team members, and alignment with market data. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.
