Job Description
We are looking for a Director – AI & Innovation who is, above all, a builder. You will define and
execute the company-wide AI strategy, architect and ship production-grade agentic systems, and
personally write code every week. This is not a strategy-only role: you will be in the codebase.
You will lead a small, high-trust team of two: an ML Engineer and a Vibe Coder (a product-minded
engineer fluent in vibe coding tools). Small team size is intentional: we compound velocity through
great tooling, open-source, and tight architectural discipline, not headcount.
You report directly to the CPTO and operate with the authority and accountability of a founding
engineer.
Additional Information
What You'll Do
- Agentic AI Systems Design & Engineering
- Architect and build multi-agent systems that orchestrate the core workflows of 1Source,
1Data, and 1Xcess: RFQ generation, supplier discovery, price benchmarking, BOM parsing,
and inventory matching.
- Design agent graphs using frameworks such as LangGraph, CrewAI, or AutoGen, defining
agent roles, tool registries, state machines, escalation paths, and human-in-the-loop
checkpoints.
- Build and maintain MCP (Model Context Protocol) servers that expose internal
data and business logic as structured, composable tools consumable by AI agents and
external LLM clients.
- Define tool schemas, function signatures, and capability registries so that agents across all
products can discover and invoke capabilities reliably and safely.
- Implement guardrails, retry logic, fallback strategies, and audit logging for every agentic
workflow; production agents must be observable and recoverable.
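To make the retry, fallback, and audit-logging expectations concrete, here is a minimal, framework-agnostic Python sketch of one audited agent step. `run_step`, `StepResult`, and the step names are illustrative placeholders, not part of LangGraph, CrewAI, AutoGen, or any existing product API:

```python
import json
import logging
import time
from dataclasses import dataclass
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

@dataclass
class StepResult:
    ok: bool
    output: object = None
    error: str = ""

def run_step(name: str,
             action: Callable[[], object],
             fallback: Optional[Callable[[], object]] = None,
             retries: int = 2,
             backoff_s: float = 0.0) -> StepResult:
    """Run one agent step with retries, an optional fallback, and audit logging."""
    for attempt in range(1, retries + 1):
        try:
            out = action()
            # Every attempt, success or failure, leaves a structured audit record.
            log.info(json.dumps({"step": name, "attempt": attempt, "status": "ok"}))
            return StepResult(ok=True, output=out)
        except Exception as exc:
            log.warning(json.dumps({"step": name, "attempt": attempt,
                                    "status": "error", "error": str(exc)}))
            time.sleep(backoff_s * attempt)  # linear backoff between retries
    if fallback is not None:
        # Fallback path, e.g. escalating to a human-in-the-loop checkpoint.
        log.info(json.dumps({"step": name, "status": "fallback"}))
        return StepResult(ok=True, output=fallback())
    return StepResult(ok=False, error=f"{name} failed after {retries} attempts")
```

In a real agent graph the same wrapper would sit around each tool call, so that a failed supplier lookup retries, then degrades to a human escalation rather than silently halting the workflow.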
- GenAI Engineering
- Build RAG pipelines for datasheet extraction, BOM parsing, RFQ drafting, and supplier
communication, grounding LLMs in proprietary component data.
- Design and manage embedding strategies: choose the right embedding models, chunk
sizes, retrieval architectures (hybrid dense-sparse search), and re-ranking layers for each
use case.
- Fine-tune and adapt open-source LLMs (Llama 3, Mistral, Phi, Qwen, or equivalents) for
domain-specific tasks: component classification, part number normalisation, lifecycle
prediction from text.
- Build prompt engineering systems that are version-controlled, evaluated, and reproducible,
not ad-hoc prompts left in notebooks.
- Evaluate and integrate frontier model APIs (OpenAI, Anthropic, Gemini) alongside self-
hosted open-source models based on cost, latency, and capability trade-offs.
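One way the "version-controlled, reproducible prompts" idea can be sketched is a registry that derives a version id from a content hash, so every logged completion ties back to an exact prompt text. `PromptRegistry` and the part-classification template below are hypothetical examples, not an existing library:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    name: str
    template: str

    @property
    def version(self) -> str:
        # Content hash: identical template text always yields the same version id.
        return hashlib.sha256(self.template.encode()).hexdigest()[:12]

class PromptRegistry:
    """Toy registry: prompts live in code (and therefore in git), not in notebooks."""

    def __init__(self):
        self._prompts: dict = {}

    def register(self, name: str, template: str) -> PromptVersion:
        pv = PromptVersion(name, template)
        self._prompts[name] = pv
        return pv

    def render(self, name: str, **params) -> tuple:
        """Return (rendered prompt, version id) so each LLM call is reproducible."""
        pv = self._prompts[name]
        return pv.template.format(**params), pv.version
```

Logging the version id alongside each model response makes offline evaluation and regression testing of prompt changes straightforward.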
- Classical ML & Predictive Intelligence
- Build and deploy classical ML models for pricing signal detection, demand forecasting, lead
time prediction, lifecycle risk scoring, and inventory age risk, using gradient boosting,
time-series models, and clustering techniques.
- Own the full ML lifecycle: feature engineering, model training, offline evaluation, A/B testing,
production monitoring, drift detection, and retraining pipelines.
- Make principled decisions on when to use a simple statistical model versus a large LLM,
optimising for cost, latency, and explainability at every layer.
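The drift-detection step in the lifecycle above can be illustrated with the Population Stability Index, computable in plain Python. The 0.1 / 0.25 thresholds mentioned in the docstring are the usual rule of thumb, not a product requirement, and the binning here is a simple equal-width sketch:

```python
import math

def psi(expected, actual, bins: int = 10) -> float:
    """Population Stability Index between a training and a live score distribution.
    Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 consider retraining."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range live values

    def frac(xs):
        counts = [0] * bins
        for x in xs:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # Small epsilon avoids log(0) for empty buckets.
        return [(c + 1e-6) / (len(xs) + 1e-6 * bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a retraining pipeline this would run on a schedule over, say, lead-time prediction scores, raising an alert when the live distribution has shifted away from the training baseline.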
- MCP, Tools & Integrations
- Design and maintain the MCP server layer that exposes the company's business capabilities
(pricing lookups, supplier scoring, BOM analysis, RFQ status) as callable tools for internal
agents and external AI clients.
- Define a coherent tool taxonomy across all products- ensuring agents can compose tools
from 1Source, 1Data, and 1Xcess without tight coupling or redundancy.
- Build integration connectors for distributor APIs, ERP systems, and data feeds that are
agentic-friendly: structured outputs, error contracts, and rate-limit-aware retry logic.
- Stay ahead of the MCP ecosystem: evaluate new servers, contribute open-source tooling
where it benefits the platform, and ensure the company's stack is composable with the broader AI
ecosystem.
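The tool-taxonomy idea can be shown with a toy, in-process Python registry: each tool declares a name, description, and parameter schema, so callers can discover and invoke capabilities uniformly. A real MCP server would expose this over the protocol; `price_lookup` below is a hypothetical tool, not an actual product API:

```python
from typing import Any, Callable

class ToolRegistry:
    """Illustrative capability registry with discovery and schema-checked invocation."""

    def __init__(self):
        self._tools: dict = {}

    def register(self, name: str, description: str,
                 params: dict, fn: Callable[..., Any]) -> None:
        # params maps parameter name -> expected Python type (a stand-in for JSON Schema).
        self._tools[name] = {"description": description, "params": params, "fn": fn}

    def list_tools(self) -> list:
        """Discovery: what agents would see when asking what they can call."""
        return [{"name": n, "description": t["description"],
                 "params": {k: v.__name__ for k, v in t["params"].items()}}
                for n, t in self._tools.items()]

    def invoke(self, name: str, **kwargs) -> Any:
        tool = self._tools[name]
        for param, typ in tool["params"].items():
            if param not in kwargs:
                raise ValueError(f"missing parameter: {param}")
            if not isinstance(kwargs[param], typ):
                raise TypeError(f"{param} must be {typ.__name__}")
        return tool["fn"](**kwargs)
```

Keeping invocation behind one schema-checked entry point is what lets tools from different products compose without tight coupling.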
- MLOps, Infrastructure & Open-Source Stack on AWS
- Own the end-to-end MLOps stack on AWS: SageMaker for model training and hosting,
Lambda and ECS for lightweight inference, S3 and RDS for data persistence, and
CloudWatch for observability.
- Prefer open-source tooling at every layer: MLflow for experiment tracking, Qdrant or
Weaviate for vector search, Airflow or Prefect for pipeline orchestration, Ollama for local
inference, and Hugging Face for model management.
- Build containerised inference services (Docker, Kubernetes / EKS) with autoscaling,
blue-green deployment, and latency SLAs; every production model must have a runbook.
- Implement model monitoring: track prediction drift, data drift, latency percentiles, and
business KPI alignment, with automated alerts firing before humans notice degradation.
- Drive cloud cost discipline: choose self-hosted open-source over paid APIs wherever
performance is equivalent; benchmark and document every trade-off.
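Latency-percentile monitoring against an SLA can be sketched with no external dependency; in production the same signal would typically feed CloudWatch alarms. The 500 ms SLA and p95 choice here are placeholders, not stated requirements:

```python
import math
from collections import deque

class LatencyMonitor:
    """Rolling-window latency tracker that flags SLA breaches at a chosen percentile."""

    def __init__(self, window: int = 1000, sla_ms: float = 500.0, pct: float = 95.0):
        self.samples = deque(maxlen=window)  # oldest samples evicted automatically
        self.sla_ms = sla_ms
        self.pct = pct

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def percentile(self) -> float:
        # Nearest-rank percentile over the current window.
        xs = sorted(self.samples)
        idx = max(0, math.ceil(self.pct / 100 * len(xs)) - 1)
        return xs[idx]

    def breached(self) -> bool:
        return bool(self.samples) and self.percentile() > self.sla_ms
```

Tracking the tail percentile rather than the mean is deliberate: a model can look healthy on average while its slowest requests are already violating the SLA.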
- Vibe Coding & Engineering Culture
- Actively use and champion AI-assisted development tools (Cursor, GitHub Copilot,
Windsurf, Bolt, or equivalents) and set the standard for how the team uses them to ship
faster.
- Guide the Vibe Coder in translating product requirements into working AI-assisted
prototypes and production-ready components, from idea to demo in hours.
- Evaluate and onboard new vibe coding and AI dev tooling as the ecosystem evolves; what
is state-of-the-art today will be table stakes in six months.
- Create a culture where shipping beats theorising: every model, every agent, every pipeline
is measured against a business KPI within its first sprint in production.
- Company-Wide AI Strategy & Leadership
- Define the multi-year AI roadmap for the company, identifying where agentic AI, fine-tuned LLMs,
and classical ML create the most durable business value across all products.
- Partner with the SVP of Data Products, VP of Sourcing Products, and CPTO to embed AI
capabilities into core product workflows; you are the connective tissue between product
ambition and technical reality.
- Champion responsible AI: define evaluation frameworks, bias checks, and guardrails for all
agentic systems before they touch production data.
- Represent AI strategy and technical credibility to investors, enterprise customers, and
technology partners when required.
- Recruit, mentor, and develop the ML Engineer and Vibe Coder, setting a high bar for craft,
velocity, and continuous learning.