What will you do
- Voice AI Stack Ownership: Build and own the end-to-end voice bot pipeline (ASR, NLU, dialog state management, tool calling, and TTS) to create a natural, human-like conversation experience.
- LLM Orchestration & Tooling: Architect systems using MCP (Model Context Protocol) to mediate structured context between real-time ASR, memory, APIs, and the LLM.
- RAG Integration: Implement retrieval-augmented generation to ground responses using dealership knowledge bases, inventory data, recall lookups, and FAQs.
- Vector Store & Memory: Design scalable vector-based search for dynamic FAQ handling, call recall, and user-specific memory embeddings.
- Latency Optimization: Engineer low-latency, streaming ASR + TTS pipelines and fine-tune turn-taking models for natural conversation.
- Model Tuning & Hallucination Control: Use fine-tuning, LoRA, or instruction tuning to customize tone, reduce hallucinations, and align responses to business goals.
- Instrumentation & QA Looping: Build robust observability, run real-time call QA pipelines, and analyze interruptions, hallucinations, and fallbacks.
- Cross-functional Collaboration: Work closely with product, infra, and leadership to scale this bot to thousands of US dealerships.
What will make you successful in this role
- Architect-level thinking: You understand how ASR, LLMs, memory, and tools fit together and can design modular, observable, and resilient systems.
- LLM Tooling Mastery: You've implemented tool calling, retrieval pipelines, function calling, or prompt chaining across multiple workflows.
- Fluency in Vector Search & RAG: You know how to chunk, embed, index, and retrieve, and how to avoid prompt bloat and token overflow.
- Latency-First Mindset: You debug token delays, know the cost of each API hop, and can optimize round-trip time to keep calls human-like.
- Grounding > Hallucination: You know how to trace hallucinations back to weak prompts, missing guardrails, or lack of tool access, and how to fix them.
- Prototyper at heart: You're not scared of building from scratch and iterating fast, using open-source or hosted tools as needed.
What you must have
- 5+ years in AI/ML or voice/NLP systems with real-time experience
- Deep knowledge of LLM orchestration, RAG, vector search, and prompt engineering
- Experience with MCP-style architectures or structured context pipelines between LLMs and APIs/tools
- Experience integrating ASR (Whisper/Deepgram), TTS (ElevenLabs/Coqui), and OpenAI/GPT-style models
- Solid understanding of latency optimization, streaming inference, and real-time audio pipelines
- Hands-on with Python, FastAPI, vector DBs (Pinecone, Weaviate, FAISS), and cloud infra (AWS/GCP)
- Strong debugging, logging, and QA instincts for hallucination, grounding, and UX behavior