Security Engineer

10a Labs

New York (NY)

Remote

USD 105,000 - 125,000

Full time


Job summary

A cybersecurity firm is looking for a Security Engineer to secure AI systems and conduct security assessments. Candidates should have a degree in Computer Science (or equivalent experience), at least 3 years of experience in security engineering, and proficiency in programming languages such as Python. The role is fully remote, U.S.-based, and offers a salary range of $105K to $125K, performance-based bonuses, and generous PTO.

Benefits

Generous PTO
401(k) plan
Performance-based bonuses
Support for professional development

Qualifications

  • 3+ years of experience in security engineering or related fields.
  • Proficient in programming/scripting languages.
  • Strong knowledge of secure software development practices.

Responsibilities

  • Conduct threat modeling and vulnerability assessments.
  • Design and implement security controls.
  • Collaborate with engineers to embed security best practices.

Skills

Cybersecurity experience
Threat detection
Communication skills
Problem-solving

Education

Degree in Computer Science or related field

Tools

Python
Docker
Kubernetes

Job description

About 10a Labs

10a Labs is an applied research and AI security company trusted by AI unicorns, Fortune 10 companies, and U.S. tech leaders. We combine proprietary technology, deep expertise, and multilingual threat intelligence to detect abuse at scale. We also deliver state-of-the-art red teaming across high-impact security and safety challenges.

Role overview

As a Security Engineer, you will be on the front line of securing cutting-edge AI systems. You’ll identify vulnerabilities, build protections into code and workflows, and partner with researchers to reduce risks from adversarial actors. This role requires strong technical skills, a security-first mindset, and comfort working in a fast-paced, startup environment where threats and priorities evolve quickly.

In this role, you will:

  • Conduct threat modeling, vulnerability assessments, and red-team style testing of AI-related systems.
  • Design and implement security controls across infrastructure, applications, and data pipelines.
  • Build, operate, and maintain detection, monitoring, and incident response capabilities, including investigating incidents and driving remediation.
  • Collaborate with engineers and researchers to embed security best practices into the design and deployment of AI systems.
  • Develop automation, tooling, and documentation to improve security operations and reduce manual effort.
  • Stay current on emerging threats — particularly those related to AI/ML, cloud, and large-scale distributed systems.

We're looking for someone who

  • Brings deep experience in cybersecurity, threat detection, or a related area.
  • Thrives in fast-moving, high-impact environments such as startups, AI research labs, or security-focused teams.
  • Enjoys tackling complex, ambiguous problems and adapts quickly as priorities evolve.
  • Takes initiative, communicates openly, and consistently pushes themselves — and their team — to deliver high-quality results.
  • Works effectively both independently and collaboratively, contributing to team goals while managing projects with accountability.
  • Can clearly communicate technical security concepts and risk trade-offs to both engineering and non-technical audiences.

Requirements

  • Degree in Computer Science, Engineering, Cybersecurity, or a related field — or equivalent professional experience.
  • 3+ years of hands-on experience in security engineering, application security, detection & response, or penetration testing.
  • Proficiency in relevant programming/scripting languages (e.g., Python, Go, Bash, PowerShell).
  • Strong knowledge of secure software development practices, including threat modeling, code review, and DevSecOps principles.
  • Solid understanding of cloud security concepts (IAM, VPCs, encryption, logging/monitoring) and secure deployment practices.
  • Familiarity with modern infrastructure, containerization (Docker, Kubernetes), and scalable distributed systems.
  • Experience effectively communicating complex security topics to technical and non-technical stakeholders.

Very Nice to Have

  • Background in offensive security (exploit development, red teaming, adversary simulation).
  • Familiarity with AI-specific risks such as prompt injection, model theft, data poisoning, or adversarial ML.

Compensation & Benefits

  • Salary Range: $105K–$125K, depending on experience and location.
  • Bonus: Performance-based annual bonus.
  • Professional Development: Support for conferences, continuing education, or leadership training.
  • Work Environment: Fully remote, U.S.-based.
  • Time Off: Generous PTO and paid holiday schedule.
  • Retirement: 401(k) plan.