Key Responsibilities:
- Design and develop RAG-based pipelines using vector databases and LangChain for enterprise-scale applications (a minimal sketch of such a pipeline appears after this list).
- Build, fine-tune, and optimize LLM-powered solutions, including prompt engineering and model evaluation for domain-specific tasks.
- Integrate and deploy AI solutions on AWS Bedrock and Azure AI platforms.
- Develop and maintain knowledge graphs to support intelligent querying and reasoning in GenAI solutions.
- Implement agentic frameworks to enable autonomous AI workflows and decision-making.
- Collaborate with cross-functional teams to align AI initiatives with business goals.
- Ensure scalability, performance, security, and compliance of AI applications in enterprise environments.
- Stay updated on the latest GenAI and LLM advancements to continuously improve solution capabilities.
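
For context on the RAG responsibility above, here is a minimal, illustrative sketch only. It assumes the langchain, langchain-openai, langchain-community, and chromadb packages, an OPENAI_API_KEY in the environment, a toy two-document corpus, and the gpt-4o-mini model name; exact module paths and chain APIs vary across LangChain releases, and a production pipeline would add document ingestion, chunking, evaluation, and observability.

```python
"""Minimal RAG sketch: index a toy corpus in Chroma, then answer a query with an LLM."""
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Toy corpus standing in for enterprise documents (assumed content).
texts = [
    "Support tickets are triaged within four business hours.",
    "Production deployments require sign-off from the on-call SRE.",
]

# Embed the texts and load them into an in-memory Chroma vector index.
store = Chroma.from_texts(texts, OpenAIEmbeddings())

# Retrieval-augmented QA: fetch the top-k chunks, then let the LLM answer from them.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini"),  # model name is an assumption
    retriever=store.as_retriever(search_kwargs={"k": 2}),
)

print(qa.invoke({"query": "Who signs off on production deployments?"}))
```

The same pattern scales to enterprise workloads by swapping the in-memory Chroma index for a managed vector database such as Pinecone, Weaviate, or Milvus and replacing the inline texts with a proper ingestion pipeline.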
Required Skills & Experience:
- 5–7 years of hands-on experience in AI/ML, with a focus on Generative AI.
- Strong experience with LangChain or similar orchestration frameworks.
- Proven experience with RAG pipelines and vector databases (Pinecone, FAISS, Weaviate, Milvus, ChromaDB).
- Experience integrating and deploying solutions on AWS Bedrock and Azure AI services.
- Working knowledge of knowledge graphs (Neo4j, RDF, graph databases) and semantic search (a minimal Neo4j query sketch appears after this list).
- Hands-on experience fine-tuning LLMs (OpenAI, Anthropic, Hugging Face models) for domain-specific use cases.
- Proficiency in Python and modern AI development frameworks.
- Experience building enterprise-grade GenAI or agentic applications.
- Familiarity with CI/CD for ML/AI pipelines is a plus.
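
As an illustration of the knowledge-graph requirement above, the sketch below runs a variable-length Cypher traversal through the official neo4j Python driver. The connection URI, credentials, and the Product/Service/DEPENDS_ON schema are assumptions made for the example, not details from this posting.

```python
"""Minimal knowledge-graph query sketch using the official neo4j Python driver."""
from neo4j import GraphDatabase

# Connection details are placeholders; use your environment's URI and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Variable-length Cypher traversal: follow DEPENDS_ON edges one or two hops out,
# the kind of relational reasoning a flat vector search cannot express.
query = """
MATCH (p:Product {name: $name})-[:DEPENDS_ON*1..2]->(dep:Service)
RETURN DISTINCT dep.name AS dependency
"""

with driver.session() as session:
    for record in session.run(query, name="checkout-api"):  # example product name
        print(record["dependency"])

driver.close()
```

Graph traversals like this complement vector retrieval in GenAI solutions: the vector index finds relevant passages, while the graph answers structural questions about how entities relate.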