alignerr

Offensive Security Analyst (Structured / Non-Exploit)

  • Posted 10 hours ago
  • Be among the first 10 applicants
Job Description

About The Role

What if your ability to think like an adversary could directly shape how AI understands and reasons about cybersecurity threats? We're looking for Offensive Security Analysts to analyze real-world attack paths, model adversary behavior, and help build the AI systems that will define the future of security.

This role is focused on structured adversarial reasoning — not exploit development. You'll bring your red team instincts and attack knowledge to a unique challenge: teaching AI how threats actually unfold, where defenses break down, and how risk cascades through modern environments.

Fully remote, flexible hours, and meaningful work at the frontier of AI.

  • Organization: Alignerr
  • Type: Hourly Contract
  • Location: Remote
  • Commitment: 10–40 hours/week

What You'll Do

  • Analyze attack paths, kill chains, and adversary strategies across realistic, production-like environments
  • Identify weaknesses, misconfigurations, and defensive gaps in complex system architectures
  • Review and evaluate red team scenarios, intrusion narratives, and threat models
  • Generate, label, and validate adversarial reasoning data used to train and benchmark AI systems
  • Articulate attack chains, impact assessments, and risk tradeoffs in clear, structured formats
  • Work independently and asynchronously on task-based assignments — on your own schedule

Who You Are

  • 2+ years of hands-on experience in pentesting, red teaming, or a blue-team role with deep offensive knowledge
  • You understand how real attacks unfold in production environments — not just in theory
  • You can clearly explain attack chains, adversary intent, and downstream impact to both technical and non-technical audiences
  • Detail-oriented and systematic — you document your reasoning as rigorously as your findings
  • Comfortable working independently without day-to-day oversight

Nice to Have

  • Familiarity with frameworks like MITRE ATT&CK, Cyber Kill Chain, or STRIDE
  • Experience with threat modeling, adversary simulation, or purple team exercises
  • Background in cloud security, network architecture, or enterprise environments
  • Prior work in security research, CTF competitions, or technical writing
  • Exposure to AI tools or data labeling workflows

Why Join Us

  • Work directly on frontier AI systems alongside the world's leading AI research labs
  • Fully remote and flexible — structure your work around your life, not the other way around
  • Freelance autonomy with the substance of meaningful, high-impact projects
  • Apply your offensive security expertise in a novel, intellectually engaging context
  • Potential for ongoing work and contract extension as new projects launch

Job ID: 145786091