Offensive Security Analyst (Structured / Non-Exploit) — AI Training
About The Role
What if your red-team mindset and deep knowledge of how real attacks unfold could directly shape how the world's most advanced AI systems understand cybersecurity?
We're looking for Offensive Security Analysts to bring adversarial thinking to AI training — mapping attack paths, analyzing kill chains, and modeling how threats move through real environments. This isn't exploit development. It's structured adversarial reasoning: thinking clearly about how attackers operate, where defenses fail, and how risk propagates across modern systems.
This is a fully remote, flexible contract role built for security professionals who can think offensively and communicate precisely.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
What You'll Do
- Analyze attack paths, kill chains, and adversary strategies across realistic, real-world scenarios
- Identify weaknesses, misconfigurations, and defensive gaps with clear, structured reasoning
- Review red-team-style scenarios and intrusion narratives for accuracy and depth
- Generate, label, and validate adversarial reasoning data used to train and evaluate frontier AI systems
- Articulate attack chains, potential impact, and security tradeoffs in ways that are clear and technically sound
- Work independently and asynchronously — fully on your own schedule
Who You Are
- 2+ years of hands-on experience in pentesting, red teaming, or a blue-team role with strong offensive knowledge
- You understand how real attacks unfold in production environments — not just in theory
- You can clearly explain complex attack scenarios, their impact, and the tradeoffs involved
- Detail-oriented and methodical — you think in systems and spot what others miss
- Strong written communicator who can document findings with precision
- No exploit development skills required — this role is about structured adversarial thinking, not code
Nice to Have
- Familiarity with MITRE ATT&CK, kill chain frameworks, or threat modeling methodologies
- Experience writing security reports, red team narratives, or threat assessments
- Background in security architecture, cloud security, or enterprise environments
- Prior experience working with AI tools or data labeling platforms
Why Join Us
- Work directly on frontier AI systems alongside leading AI research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, task-based work
- Make a tangible impact on how AI understands and reasons about real-world cybersecurity threats
- Potential for ongoing work and contract extension as new projects launch