
Job Title: Senior AI Engineer (Healthcare Domain)
Location: Work from Office (On-site)
Experience: 5+ years
JOB DESCRIPTION
Department: Information Technology
Reports To: Development Lead
Team Overview
We are hiring two engineers to build and operate the AI-powered denial prediction system within our RCM (revenue cycle management) platform. The two roles are distinct but tightly coupled — together they own everything from raw claims data to a production inference API serving the Angular billing UI.
Engineer #1 – AI/ML Engineer: Owns the end-to-end training pipeline — feature engineering on structured claims data, binary classification modelling, Azure ML workspace, and MLflow experiment tracking. Focused on model accuracy and the data-to-model lifecycle.
Engineer #2 – ML Systems Engineer: Owns the production inference system — the FastAPI/.NET 8 inference API, the claims scrubbing rules engine, integration into the Angular denial management UI and billing worklist, and the feedback loop that keeps the model current. Focused on reliability, latency (<200ms), and correctness.
Both engineers are expected to contribute to the .NET backend using AI-assisted development tools (Claude, GitHub Copilot, GPT-4), and both must be comfortable reviewing AI-generated code for correctness and security.
ROLE 1: AI/ML Engineer – AI-Powered Denial Prediction
Position Summary
This is a hands-on individual contributor role. The ML Engineer owns the end-to-end machine learning pipeline for claims denial prediction — from raw claims data through to a production-ready model that scores every claim before submission. The model output feeds directly into the claims scrubbing engine and denial management UI. The engineer writes production-quality Python, owns the Azure ML workspace, and is accountable for model accuracy metrics at go-live.
Key Responsibilities
ML Pipeline Ownership
Design and implement the end-to-end ML pipeline for claims denial prediction, from data ingestion through model deployment on Azure ML.
Own the Azure ML workspace, experiment tracking (MLflow), and pipeline orchestration.
Ensure the production model meets go-live accuracy targets; monitor model performance and implement drift detection post-deployment.
Maintain clean, testable, and well-documented Python code across all pipeline components.
Feature Engineering & Data Preparation
Transform raw transactional claims records (EDI X12 835/837) into predictive features for binary classification models.
Collaborate with domain experts to identify denial-predictive signals from CARC/RARC codes, payer behavior, and claim attributes.
Apply HIPAA-compliant data handling practices for PHI in ML training datasets.
Manage imbalanced datasets (denial rates typically 5–15%) using appropriate sampling and calibration techniques.
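To illustrate the imbalance-handling techniques above, here is a minimal sketch using class weighting plus probability calibration. The features, synthetic data, and ~10% denial rate are illustrative stand-ins, not the real claims schema:

```python
# Hypothetical sketch: class weighting + probability calibration for an
# imbalanced denial-prediction problem. Data is synthetic, not real claims.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 4))                                       # stand-in claim features
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 2.8).astype(int)   # roughly 10% "denied"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the minority (denied) class instead of
# resampling; CalibratedClassifierCV then restores well-calibrated probabilities,
# which matter when scores drive downstream scrubbing-rule thresholds.
base = LogisticRegression(class_weight="balanced", max_iter=1000)
model = CalibratedClassifierCV(base, method="isotonic", cv=3)
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
```

Whether weighting, resampling, or focal-style losses fits best depends on the model family chosen (e.g. XGBoost's `scale_pos_weight` serves a similar purpose for boosted trees).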
Modelling & Evaluation
Train, evaluate, calibrate, and productionise supervised binary classification models on imbalanced healthcare datasets.
Apply explainability frameworks (SHAP, LIME) to generate denial reason explanations surfaced in the UI.
Conduct rigorous model validation including precision/recall analysis, calibration curves, and business metric alignment.
Document model performance, assumptions, and limitations for stakeholder review.
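The validation work above can be sketched with standard scikit-learn tooling — precision/recall trade-offs plus a reliability (calibration) check. The scores and labels here are simulated placeholders:

```python
# Illustrative evaluation sketch for an imbalanced binary classifier:
# precision/recall trade-off and a calibration check. Data is synthetic.
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = (rng.random(2000) < 0.10).astype(int)   # ~10% denial rate
# Simulated scores: denied claims tend to score higher.
y_score = np.clip(rng.normal(0.2 + 0.5 * y_true, 0.15), 0, 1)

# Precision/recall across all thresholds, summarized by average precision,
# which is more informative than accuracy at a 10% base rate.
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
ap = average_precision_score(y_true, y_score)

# Reliability: observed denial rate vs. mean predicted score per bin.
frac_pos, mean_pred = calibration_curve(y_true, y_score, n_bins=5)
```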
Backend Integration & AI-Assisted Development
Contribute integration code to the .NET backend using AI-assisted tools (Claude, GitHub Copilot, GPT-4) to generate and refactor C# boilerplate including EF Core models, REST controllers, and messaging consumers.
Review and correct AI-generated .NET C# code for correctness and security before it is merged.
Collaborate closely with the ML Systems Engineer (Role 2) to integrate model scoring APIs into the claims scrubbing engine.
Required Skills & Experience
Non-Negotiable
6+ years of hands-on ML engineering — production ML systems, not research or data analysis.
Feature engineering on structured tabular data: turning raw transactional records into predictive features.
Binary classification modelling: training, evaluation, calibration, and productionisation of supervised models on imbalanced datasets.
Familiarity with model monitoring and drift detection in production.
Experience with explainability frameworks (SHAP, LIME).
LLM-powered code generation: proficiency using Claude, Copilot, or GPT-4 to generate .NET C# boilerplate, with the ability to review and correct AI-generated code for correctness and security.
Ability to contribute .NET C# integration code, not just Python model code.
Strongly Preferred
Healthcare claims data experience — US payer ecosystem, EDI X12 835/837, CARC/RARC codes, denial categories.
Experience building ML pipelines for RCM, health insurance, or financial services.
MLflow experiment tracking and Azure ML Pipelines.
Familiarity with HIPAA data handling requirements for PHI in ML training datasets.
Nice to Have
Prior work on imbalanced classification problems in healthcare (denial rates typically 5–15%).
Python proficiency: pandas, scikit-learn, XGBoost or LightGBM, MLflow — writes clean, testable, documented code.
ROLE 2: ML Systems Engineer – Inference API & Claims Scrubbing
Position Summary
This is the bridge between the ML model and the production RCM system. The ML Systems Engineer owns the inference API, the claims scrubbing rules engine, integration of AI predictions into the Angular denial management UI and billing worklist, and the feedback loop that keeps the model current. This role is focused on reliability, latency, and correctness of the production inference system. A denial prediction score that arrives 3 seconds after a biller submits a claim is useless — this engineer owns the <200ms target.
Key Responsibilities
Inference API & Production Systems
Build and operate the low-latency inference API (<200ms) that serves ML model predictions to the claims scrubbing engine and Angular UI.
Deploy and manage Azure ML online endpoints (or equivalent); integrate model calls from application code.
Design fault-tolerant inference systems: fallback behavior, circuit breakers, timeout handling, and graceful degradation when the model is unavailable.
Implement structured logging and observability for every prediction — with enough context to debug incorrect predictions post-hoc.
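The fault-tolerance responsibilities above can be sketched as a small circuit-breaker wrapper: call the model, trip open after repeated failures, and degrade to a rules-only score. `score_with_model`-style callables and all thresholds here are hypothetical stand-ins, not the real services:

```python
# Minimal sketch: timeout/fallback handling around a model endpoint call.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def is_open(self):
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None          # half-open: allow a retry
            self.failures = 0
            return False
        return True

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

    def record_success(self):
        self.failures = 0
        self.opened_at = None

def score_claim(claim, model_call, fallback_call, breaker):
    """Return (score, source) — 'model' when the endpoint answers, 'rules' otherwise."""
    if breaker.is_open():
        return fallback_call(claim), "rules"
    try:
        score = model_call(claim)          # real code would also enforce the <200ms timeout here
        breaker.record_success()
        return score, "model"
    except Exception:
        breaker.record_failure()
        return fallback_call(claim), "rules"
```

Tagging each response with its source ("model" vs "rules") also gives the structured logs enough context to debug degraded-mode predictions post-hoc.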
Rules Engine & Claims Scrubbing
Design and maintain a configurable business rules layer on top of ML scores, editable by non-engineers without code deployment.
Integrate ML predictions into the claims scrubbing engine to flag high-risk claims before submission.
Implement batch inference patterns: pre-scoring workloads asynchronously and caching results for UI performance.
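A config-driven rules layer like the one described above might look like this sketch: rules live in data (e.g. JSON edited by billing staff) rather than code, so changing a threshold needs no deployment. All field names, rule IDs, and actions are illustrative:

```python
# Hypothetical sketch of a configurable rules layer on top of the ML score.
RULES = [
    {"id": "high-risk-score", "field": "ml_score", "op": ">=", "value": 0.8, "action": "hold"},
    {"id": "missing-auth", "field": "prior_auth", "op": "==", "value": False, "action": "flag"},
]

# Operators are whitelisted, not eval'd, so rule files stay safe to edit.
OPS = {">=": lambda a, b: a >= b, "==": lambda a, b: a == b, "<": lambda a, b: a < b}

def scrub(claim, rules=RULES):
    """Return the (rule_id, action) pairs triggered for a claim dict."""
    actions = []
    for rule in rules:
        if OPS[rule["op"]](claim.get(rule["field"]), rule["value"]):
            actions.append((rule["id"], rule["action"]))
    return actions
```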
Frontend Integration & UX
Integrate ML prediction scores and explanations into the Angular denial management UI and billing worklist.
Design API contracts with careful attention to latency, error states, and confidence score communication.
Collaborate with frontend engineers on UX implications of model outputs.
Feedback Loop & Model Currency
Build and maintain the feedback pipeline that captures biller outcomes and routes labelled data back to the ML Engineer's training pipeline.
Implement event-driven architecture using Azure Service Bus to consume events from upstream services and publish feedback events downstream.
Support A/B testing infrastructure for gradual rollout of new model versions to a subset of claims.
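The gradual-rollout idea above is commonly implemented by hashing each claim ID into a stable bucket, so a fixed percentage of claims routes to the candidate model and the same claim always sees the same version. Version names and the percentage here are placeholders:

```python
# Sketch: deterministic percentage-based routing between model versions.
import hashlib

def model_version_for(claim_id: str, new_model_pct: int = 10) -> str:
    """Stable assignment: the same claim always routes to the same version."""
    bucket = int(hashlib.sha256(claim_id.encode()).hexdigest(), 16) % 100
    return "v2-candidate" if bucket < new_model_pct else "v1-production"
```

Hash-based bucketing (rather than random sampling per request) keeps A/B cohorts consistent across retries and makes outcome attribution in the feedback pipeline straightforward.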
Backend Development & AI-Assisted Coding
Develop REST APIs in .NET (C#).
Use Claude, GitHub Copilot, or GPT-4 to generate .NET 8 C# scaffolding (controllers, service classes, Azure Service Bus consumers, EF Core repositories) — and critically, identify when generated code is subtly wrong, insecure, or non-idiomatic.
Implement API authentication using Azure AD service-to-service (OAuth2 client credentials flow) or equivalent.
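For reference, the client credentials flow named above boils down to a form POST against the tenant's token endpoint. This sketch only builds the request shape (shown in Python for brevity; the real service would do this in C#, and the tenant, credentials, and scope are placeholders):

```python
# Illustrative shape of an OAuth2 client-credentials token request.
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret, scope):
    # Azure AD v2.0 token endpoint for the given tenant.
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,  # e.g. "api://inference-api/.default" (placeholder)
    }
    return url, urlencode(form)
```

Real code would POST the form, cache the returned access token until near expiry, and send it as a Bearer header on service-to-service calls.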
Required Skills & Experience
Non-Negotiable
6+ years backend/ML systems engineering — has built and operated production APIs serving ML model predictions at low latency.
.NET (C#) for building REST APIs. Must be comfortable using LLMs (Claude, Copilot, GPT-4) to generate .NET scaffolding and reviewing output for correctness.
Designing fault-tolerant ML inference systems: fallback behavior, circuit breakers, timeout handling, graceful degradation.
Structured logging and observability for ML systems — every prediction logged with sufficient context for post-hoc debugging.
API authentication: Azure AD service-to-service (or equivalent OAuth2 client credentials flow).
Strongly Preferred
Rules engine design: configurable business rules layers on top of ML scores, editable by non-engineers without code deployment.
LLM-assisted .NET development: demonstrated ability to use Claude, Copilot, or GPT-4 to generate .NET 8 C# code, and to spot when generated code is subtly wrong, insecure, or non-idiomatic.
Healthcare RCM domain: familiarity with claim submission workflows, denial categories, CARC/RARC codes, prior authorization requirements.
Event-driven architecture: Azure Service Bus, RabbitMQ, or equivalent — consuming events from upstream services and publishing feedback events downstream.
Experience integrating ML predictions into Angular or React front-ends — understands UX implications of latency, error states, and confidence communication.
Batch inference patterns: pre-scoring workloads asynchronously and caching results for UI performance.
Nice to Have
SHAP or LIME explainability — generating human-readable explanations for why a claim was flagged.
A/B testing framework for model versions in production — gradually rolling out a new model version to a subset of claims.
Experience with Azure Container Apps or AKS deployment.
Job ID: 145454625