Role Summary
We are looking for a highly skilled AI Engineer to help build and scale our internal AI ecosystem. You will design and deploy production-ready AI agents and multi-agent workflows that automate complex business processes.
This role bridges LLM engineering and backend software development, using a modern stack centered on LangGraph, PydanticAI, and Model Context Protocol (MCP).
This is a hands-on role ideal for someone excited about building real-world AI systems and shipping them to production.
Key Responsibilities
Agentic Workflow Development
- Design and implement stateful AI workflows using LangGraph.
- Build role-based multi-agent collaborations using CrewAI.
- Develop reliable long-running and branching AI processes.
Structured AI Services
- Use Pydantic and PydanticAI to enforce type safety and structured outputs.
- Implement schema-driven AI pipelines and validation layers.
- Contribute to the reliability, logging, and observability of AI services.
RAG & Context Engineering
- Build and maintain Retrieval-Augmented Generation (RAG) pipelines.
- Work with vector databases such as Qdrant, ChromaDB, or PgVector.
- Contribute to GraphRAG implementations using Neo4j or FalkorDB.
- Improve search quality using hybrid search and reranking techniques.
MCP & Internal Tooling
- Help build and maintain Model Context Protocol (MCP) servers.
- Integrate AI agents with internal APIs, databases, and tools.
- Support development of internal AI frameworks and reusable components.
Model Integration & Optimization
- Work with both local models (Ollama / LM Studio) and cloud LLM providers.
- Assist in model evaluation, optimization, and experimentation.
- Support domain-specific fine-tuning and benchmarking.
Performance & Scalability
- Implement semantic caching and context optimization strategies.
- Improve latency, cost efficiency, and scalability of AI services.
Deployment & Engineering
- Containerize services using Docker.
- Deploy AI workloads on AWS Lambda or GCP Cloud Functions.
- Write clean, maintainable, production-quality Python code.
Required Skills & Experience
- 3–6 years of experience in software engineering, ML engineering, or AI engineering.
- Hands-on experience building production applications in Python.
- Experience with LangChain or LangGraph (or similar LLM frameworks).
- Strong experience with Pydantic and structured data validation.
- Exposure to multi-agent frameworks such as CrewAI is a plus.
- Experience working with LLM APIs and prompt engineering.
- Familiarity with RAG pipelines and vector databases.
Databases
- Experience with at least one vector DB (Qdrant, PgVector) or graph DB (Neo4j, FalkorDB).
Backend & Infrastructure
- Strong Python skills (asyncio experience preferred).
- Experience with Docker and cloud/serverless deployments.
- Understanding of REST or gRPC APIs.
Preferred Qualifications
- Experience with Human-in-the-Loop workflows.
- Background in semantic search or information retrieval.
- Experience building internal tools or developer platforms.
- Familiarity with model fine-tuning or evaluation.