Why This Role Exists
RADlabs isn't a research lab that publishes papers. It's a production AI shop that ships systems into healthcare organizations, government agencies, and financial institutions where accuracy, compliance, and reliability aren't optional. The AI/ML Lead is the technical spine. You'll architect the ML pipelines behind intelligent document processing (already cutting review time by 70%), build the NLP engines powering Versante's OSMe Buddy health conversations, design the syndemic intelligence models inside VersanteConnect, and stand up RADcube's GenBI analytics platform. This is hands-on-keyboard plus team leadership plus client-facing architecture—not a management role that's lost touch with code.
What You Will Do
- Architect and lead ML systems for RADlabs (IDP, GenBI, AI Maturity tools) and Versante (OSMe Buddy NLP, VersanteConnect syndemic intelligence): end-to-end from data pipelines to production deployment
- Design and implement production ML pipelines: ingestion, feature engineering, training, evaluation, deployment, monitoring, retraining (MLOps)
- Build and mentor AI/ML engineering team (3–5 engineers): code reviews, architecture decision records, technical standards, and growth plans
- Develop and deploy NLP/LLM solutions: conversational AI, text classification, named entity recognition (NER), sentiment analysis, and RAG (Retrieval-Augmented Generation) architectures using both proprietary and open-source models (GPT, Claude, Llama, models from the Hugging Face Hub)
- Build chatbot and conversational AI applications using Microsoft Bot Framework, Dialogflow, OpenAI APIs, or equivalent platforms—with integration into healthcare and enterprise workflows
- Design and deploy deep learning models (CNN, RNN, Seq2Seq, Transformers, LLMs) for document processing, form extraction, and compliance verification
- Explore and implement advanced AI techniques: RLHF (Reinforcement Learning from Human Feedback), prompt engineering optimization, few-shot/zero-shot learning, and multi-modal AI
- Embed responsible AI: bias detection, explainability (SHAP/LIME), fairness metrics, and governance aligned with the NIST AI Risk Management Framework (AI RMF)
- Deploy on AWS/Azure using Docker/Kubernetes and serverless; establish CI/CD for ML with DevOps
- Evaluate emerging tech (foundation models, fine-tuning strategies, agentic AI frameworks like LangGraph/CrewAI) and make build-vs-buy recommendations for product roadmap
- Write technical approach narratives and architecture diagrams for RFP responses; deliver client-facing AI demos and capability presentations
- Contribute thought leadership: blog posts, conference talks, Databricks/AI community engagement
What You Bring
- 5–7 years hands-on AI/ML engineering, with 2+ years in lead/senior/architect role owning system-level decisions
- Production Python proficiency; deep experience with PyTorch or TensorFlow, Scikit-learn, Hugging Face Transformers
- Strong understanding of classical and deep learning methods: KNN, SVM, Random Forest, CNN, RNN, Seq2Seq, Transformers, and LLMs
- NLP/LLM production experience (Non-Negotiable): fine-tuning, prompt engineering, RAG architectures, conversational AI deployment—not just notebooks or Kaggle
- Chatbot/conversational AI experience: proven deployment of chatbots or AI assistants using Microsoft Bot Framework, Dialogflow, OpenAI APIs, or equivalent in production environments
- MLOps maturity: model versioning (MLflow, DVC, W&B), CI/CD for ML, model monitoring (data drift, performance degradation), A/B testing
- Cloud ML platform experience: AWS SageMaker, Azure ML, or GCP Vertex AI with real deployment artifacts—not just certification projects
- Data engineering foundation: SQL, Spark/Databricks, Airflow or equivalent orchestration, data pipeline design
- Shipped AI systems in at least one of: healthcare, government, financial services, education. Domain context matters.
- Technical leadership: architecture ownership, team mentoring, sprint-level technical planning, cross-functional communication
- Clear communicator who can explain model behavior, trade-offs, and limitations to non-technical executives, clients, and grant reviewers
- BS/MS in CS, AI/ML, Data Science, Mathematics, or related quantitative field
Bonus Points
- GenAI depth: LLM fine-tuning (GPT, Claude, Llama), multi-modal models, diffusion models, agentic AI (LangGraph, CrewAI)
- Experience with RLHF (Reinforcement Learning from Human Feedback) and advanced optimization strategies
- Healthcare AI regulatory: HIPAA technical safeguards, FDA AI/ML guidance, FHIR/HL7 interoperability
- Graph databases (Neo4j) and knowledge graphs for healthcare ontologies or syndemic modeling
- Published research, conference talks, or open-source contributions in AI/ML
- AWS ML Specialty, Azure AI Engineer, or Google Professional ML Engineer certification
- Databricks, Snowflake, or modern lakehouse experience