About The Job
At Alignerr, we partner with the world's leading AI research teams and labs to build and train cutting-edge AI models.
This role focuses on structured adversarial reasoning rather than exploit development. You will work with realistic attack scenarios to model how threats move through systems, where defenses fail, and how risk propagates across modern environments.
Organization: Alignerr
Position: Offensive Security Analyst (Structured / Non-Exploit)
Type: Contract / Task-Based
Compensation: $40–$60/hour
Location: Remote
Commitment: 10–40 hours/week
What You'll Do
- Analyze attack paths, kill chains, and adversary strategies across real-world systems
- Classify weaknesses, misconfigurations, and defensive gaps
- Review red-team style scenarios and intrusion narratives
- Help generate, label, and validate adversarial reasoning data used to train and evaluate AI systems
What We're Looking For
- 2+ years in pentesting, red team, or a strong blue-team role with hands-on attack knowledge
- Understanding of how real attacks unfold in production environments
- Ability to clearly explain attack chains, impact, and tradeoffs
Why Join Us
- Competitive pay and flexible remote work
- Work directly on frontier AI systems
- Freelance perks: autonomy, flexibility, and global collaboration
- Potential for contract extension
Application Process (Takes 10–15 min)
- Submit your resume
- Complete a short screening
- Project matching and onboarding
PS: Our team reviews applications daily. Please complete all application steps, including the AI interview, to be considered for this opportunity.