Hiring: AI Red Team Engineer
We're hiring security researchers and offensive engineers to stress-test AI models, agents, and ML systems from prompt injections to creative exploit chains. If you think like an attacker and build like an engineer, keep reading.
What you'll do
- Run red-team engagements against AI models and autonomous agents (think advanced adversarial scenarios).
- Create offline, reproducible, auto-evaluable test cases to validate safety and capability of AI agents.
- Build automation, custom tooling, test harnesses, and CI workflows to scale tests.
- Reproduce real-world exploits and document mitigation recommendations.
- Collaborate with research, engineering and product teams to harden systems.
(Example work we expect familiarity with: AI agents under attack red-teaming case studies.)
Must-have (non-negotiable)
- Proficient scripting & automation: Python, Bash, or PowerShell.
- Containerization & CI/CD security experience (Docker).
- Strong background in penetration testing / offensive security (web, API, infra).
- Familiarity with AI model vulnerabilities (e.g., prompt injection) and LLM threat models OWASP LLM concepts.
- Comfortable using LLM tools to speed test-case creation and debugging.
- Able to design end-to-end test cases and run them autonomously.
Highly desirable
- Prior AI/ML security, evaluation, or red-teaming experience (LLMs, agents, RAG pipelines).
- Experience with AI red-teaming frameworks (e.g., garak, PyRIT) or similar.
- Vulnerability research, exploit development, or OS privilege escalation knowledge.
- Web & network security expertise; social engineering/phishing simulation experience.
- Experience building auto-grading / reproducible security tests.
How to apply
Send your CV - [Confidential Information]
Fill up this form: https://forms.gle/Mikbco7Zdz6cBDfk8