
Offensive Security Analyst

Alignerr

Remote

ZAR 200 000 - 300 000

Part time

2 days ago

Job summary

A cutting-edge AI security firm is seeking an Offensive Security Analyst to analyze attack paths and adversary strategies. The ideal candidate will have at least 2 years of experience in pentesting, red teaming, or a strong blue-team role. Responsibilities include classifying weaknesses and helping to generate data used to train and evaluate AI systems. This is a remote position with competitive pay of $40–$60/hour and flexible hours of 10–40 per week.

Benefits

Competitive pay
Flexibility
Global collaboration
Potential for contract extension


Skills

Pentesting
Red team experience
Blue team experience

Job description

About The Job

At Alignerr, we partner with the world’s leading AI research teams and labs to build and train cutting‑edge AI models. This role focuses on structured adversarial reasoning rather than exploit development. You will work with realistic attack scenarios to model how threats move through systems, where defenses fail, and how risk propagates across modern environments.

Organization

Alignerr – Offensive Security Analyst (Structured / Non‑Exploit) – Contract / Task‑Based – $40–$60/hour – Remote – 10–40 hours/week

What You’ll Do

  • Analyze attack paths, kill chains, and adversary strategies across real‑world systems
  • Classify weaknesses, misconfigurations, and defensive gaps
  • Review red‑team‑style scenarios and intrusion narratives
  • Help generate, label, and validate adversarial reasoning data used to train and evaluate AI systems

What We’re Looking For

  • 2+ years in pentesting, red teaming, or a strong blue‑team role with hands‑on attack knowledge
  • Understanding of how real attacks unfold in production environments
  • Ability to clearly explain attack chains, impact, and tradeoffs

Why Join Us

  • Competitive pay and flexible remote work
  • Work directly on frontier AI systems
  • Freelance perks: autonomy, flexibility, and global collaboration
  • Potential for contract extension

Application Process (Takes 10–15 min)

  • Submit your resume
  • Complete a short screening
  • Project matching and onboarding

PS: Our team reviews applications daily. Please complete your AI interview and application steps to be considered for this opportunity.