About the Role
We are looking for a hands-on Team Lead to lead high-quality coding data annotation workstreams that support training and evaluation of Large Language Models (LLMs). This role sits at the intersection of software engineering, workflow automation, and data quality, requiring someone who can lead developers while still contributing technically.
You will architect workflows, guide coding annotators, design validation pipelines, ensure quality bar consistency, and collaborate closely with Delivery Managers and Product Managers on project execution. This is not a pure management role; deep technical expertise and hands-on coding ability are essential.
Responsibilities
- Write, review, debug, and optimize production-grade Python code, contributing to prototypes, automations, validation workflows, and internal LLM training tools.
- Develop scalable solutions for RLHF, SFT, competitive coding tasks, verifiers, and GYM systems/environments.
- Drive code quality, system design inputs, CI/CD ownership, and cloud-based execution workflows.
- Identify automation opportunities and collaborate with tooling and engineering teams to scale operations.
- Lead end-to-end execution of coding-driven AI/LLM data projects, ensuring high-quality and timely delivery.
- Troubleshoot technical and operational issues, remove bottlenecks, and implement continuous process and quality improvements.
- Build and enforce frameworks for code review, validation, QA scoring, SLAs, throughput, and delivery governance.
- Lead and mentor a team of 5-10 programmers, QA specialists, and contributors, supporting skill development, performance alignment, and technical guidance.
- Run weekly client cadences, provide transparent updates, and proactively mitigate technical or delivery risks.
- Partner cross-functionally with AI research, engineering, data, tooling, and infrastructure teams to ensure workflow alignment, efficient pipelines, and seamless deployments.
Required Skills and Qualifications
- 5-9 years of hands-on software engineering with strong Python coding skills (JavaScript/TypeScript preferred; Java/C/C++ exposure a plus).
- Proven experience leading engineering/coding teams, owning code quality, and staying hands-on with reviews and debugging.
- Experience with one or more of: competitive programming tasks, multi-language code analysis, backend services & databases, automation workflows, UI/React/Next.js systems, or GYM (UI/non-UI) tooling.
- Ability to architect and deploy end-to-end systems across coding annotation or LLM data workflows.
- AI-forward mindset with strong learning ability; personal AI projects or GitHub contributions are a strong plus.
Nice to Have
- Experience with agentic systems.
- Experience building annotation tools, automation systems, or developer platforms.
- Open-source (GitHub) or AI tooling contributions.