

AI Security Researcher - LLM Red Teaming & Jailbreaking Specialist - Ground Floor Opportunity

  • Posted a day ago

Job Description

Our client is on a mission to help enterprises accelerate AI adoption with confidence.

Distinguished Founders / Board / Founding Team / Investors!

Shape the Future of AI Security from Day One.

Join an elite founding team of cybersecurity veterans to pioneer the next generation of AI threat defense.

We're building the definitive platform for AI security and need a world-class AI Security Researcher with 1-6 years of cutting-edge experience in LLM jailbreaking and AI agent red teaming to architect our core research initiatives.

Revolutionary Impact: Own critical research domains, publish industry-defining papers, develop proprietary attack frameworks, and establish the gold standard for AI security practices that will protect billions of AI interactions globally.

What You'll Pioneer:

  • Develop advanced threat models for AI systems
  • Design red-team scenarios: prompt injection, jailbreaking, model manipulation
  • Build proof-of-concept exploits demonstrating AI vulnerabilities
  • Shape industry-wide security standards

Your Background:

  • 1-7 years of security research experience
  • Bug bounty hunter with demonstrated exploits
  • Deep application security knowledge
  • Passion for AI safety & governance
Ready to define the future of AI security? Let's build something extraordinary together!

Write to [Confidential Information] to get connected!

More Info

Job ID: 134323053