Key Responsibilities
Copilot & Fabric Agent Engineering
- Design, configure, and deploy Copilot experiences and Fabric Agents across Lakehouse, Warehouse, and Power BI semantic models
- Define agent instructions, system prompts, and grounding strategies aligned to business use cases
- Implement guardrails to control data scope, tone, response format, and behavior
- Tune Copilot interactions to improve accuracy, explainability, and relevance
Python & AI Orchestration
- Use Python to:
  - Orchestrate AI workflows in Fabric notebooks
  - Pre-process and enrich data for AI consumption
  - Validate Copilot outputs programmatically
  - Automate testing and evaluation of agent responses
- Implement Python-based evaluation frameworks (accuracy, relevance, hallucination detection), as sketched below
- Integrate Python logic with Fabric APIs and metadata layers
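
As an illustration of the kind of evaluation tooling this role builds, the sketch below scores an agent response for factual accuracy and flags figures that do not appear in the grounded data. It is a minimal sketch only: the `EvalCase`, `score_accuracy`, and `flag_ungrounded_numbers` names are hypothetical and are not part of any Fabric or Copilot API.

```python
import re
from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    expected_facts: list[str]  # facts the answer must mention
    grounded_terms: set[str]   # values known to exist in the underlying dataset
    response: str              # the Copilot / agent answer under test


def score_accuracy(case: EvalCase) -> float:
    """Share of expected facts that appear in the response (case-insensitive)."""
    if not case.expected_facts:
        return 1.0
    hits = sum(fact.lower() in case.response.lower() for fact in case.expected_facts)
    return hits / len(case.expected_facts)


def flag_ungrounded_numbers(case: EvalCase) -> list[str]:
    """Naive hallucination check: numbers in the answer that match no grounded value."""
    numbers = re.findall(r"\d[\d,.]*%?", case.response)
    return [n for n in numbers if not any(n in term for term in case.grounded_terms)]


if __name__ == "__main__":
    case = EvalCase(
        question="What was Q3 revenue?",
        expected_facts=["4.2M"],
        grounded_terms={"Q3", "4.2M"},
        response="Q3 revenue was 4.2M, up 8% year over year.",
    )
    print("accuracy:", score_accuracy(case))
    print("ungrounded figures:", flag_ungrounded_numbers(case))
```

In practice these checks would run over batches of recorded agent responses, with thresholds and scoring rules agreed with the business.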
Microsoft Fabric Expertise
- Work closely with Data Engineers to:
  - Optimize Lakehouse and Warehouse structures for Copilot consumption
  - Ensure data is curated, well-modeled, and semantically rich (see the sketch below)
- Leverage Fabric components:
  - OneLake
  - Notebooks (PySpark / Python)
  - Data Pipelines
  - Power BI Semantic Models
- Ensure Copilot interactions respect Row-Level Security (RLS) and role-based access control (RBAC)
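
A minimal sketch of the curation work described above, assuming the built-in `spark` session provided by a Fabric notebook attached to a Lakehouse. The table and column names (`raw_sales`, `curated_sales`, `amt`, `cust_id`, `order_date`) are hypothetical placeholders for your own model.

```python
from pyspark.sql import functions as F

# Read a raw table from the attached Lakehouse.
raw = spark.read.table("raw_sales")

# Curate: business-friendly column names and pre-computed attributes make the data
# easier for Copilot and downstream semantic models to interpret.
curated = (
    raw
    .withColumnRenamed("amt", "sales_amount")
    .withColumnRenamed("cust_id", "customer_id")
    .withColumn("order_year", F.year("order_date"))
    .filter(F.col("sales_amount").isNotNull())
)

# Persist as a Delta table that the Warehouse or semantic model can build on.
curated.write.format("delta").mode("overwrite").saveAsTable("curated_sales")
```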
Copilot Prompt & Experience Design
- Design and test (see the sketch below):
  - System prompts
  - Few-shot examples
  - Structured response templates
- Reduce hallucinations through:
  - Strong grounding in Fabric datasets
  - Explicit constraints and validation logic
- Optimize Copilot for:
  - Natural language Q&A
  - Insight summaries
  - Trend explanations
  - Root-cause analysis narratives
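
The sketch below shows one way a system prompt, few-shot examples, and a structured response template might be assembled for testing. The prompt wording, JSON fields, and the `build_messages` helper are illustrative assumptions, not a Copilot or Fabric API.

```python
import json

SYSTEM_PROMPT = (
    "You are a sales analytics assistant. Answer only from the curated_sales dataset. "
    "If the data does not contain the answer, say so explicitly. "
    "Always respond using the JSON template provided."
)

RESPONSE_TEMPLATE = {
    "answer": "<one-sentence answer>",
    "supporting_figures": ["<figure referenced from the dataset>"],
    "caveats": ["<known limitation or data gap>"],
}

FEW_SHOT_EXAMPLES = [
    {
        "user": "Which region grew fastest last quarter?",
        "assistant": json.dumps({
            "answer": "EMEA grew fastest at 12% quarter over quarter.",
            "supporting_figures": ["EMEA QoQ growth: 12%"],
            "caveats": ["Excludes orders still pending invoicing."],
        }),
    }
]


def build_messages(question: str) -> list[dict]:
    """Fold the system prompt, response template, and few-shot examples into a message list."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT + "\nTemplate:\n" + json.dumps(RESPONSE_TEMPLATE)}
    ]
    for example in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example["user"]})
        messages.append({"role": "assistant", "content": example["assistant"]})
    messages.append({"role": "user", "content": question})
    return messages


print(json.dumps(build_messages("How did Q3 revenue compare to Q2?"), indent=2))
```

Constraining the agent to a named dataset and a fixed output template is also one of the simpler hallucination-reduction levers listed above.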
Responsible AI & Governance
- Apply Responsible AI principles:
  - Transparency
  - Reliability
  - Data privacy
- Define usage boundaries and acceptable output guidelines
- Partner with Security and Compliance teams to:
  - Validate data exposure
  - Ensure Copilot aligns with enterprise policies
Testing, Validation & Iteration
- Conduct structured testing with business users
- Capture feedback and refine prompts and agents
- Measure (see the sketch below):
  - Accuracy
  - Time-to-insight
  - User satisfaction
- Document limitations, risks, and best practices
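
A simple sketch of how structured-test results might be rolled up into these metrics; the record fields and scoring scheme are illustrative assumptions, not a prescribed format.

```python
from statistics import mean

# One record per business-user test case, collected during structured testing sessions.
test_runs = [
    {"correct": True,  "seconds_to_insight": 35, "satisfaction_1_to_5": 4},
    {"correct": False, "seconds_to_insight": 90, "satisfaction_1_to_5": 2},
    {"correct": True,  "seconds_to_insight": 20, "satisfaction_1_to_5": 5},
]

summary = {
    "accuracy": mean(1.0 if r["correct"] else 0.0 for r in test_runs),
    "avg_time_to_insight_s": mean(r["seconds_to_insight"] for r in test_runs),
    "avg_satisfaction": mean(r["satisfaction_1_to_5"] for r in test_runs),
}
print(summary)
```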
Documentation & Knowledge Transfer
- Produce clear documentation for:
  - Copilot configurations
  - Prompt strategies
  - Known limitations
- Support handover to production or scale-out teams
- Contribute to internal AI and Fabric standards
Required Technical Skills
Core Skills (Must-Have)
- Advanced Python development
  - Data processing
  - Automation
  - AI evaluation logic
- Microsoft Fabric (hands-on)
  - Notebooks (Python / PySpark)
  - Lakehouse / Warehouse
  - OneLake
- Copilot & Fabric Agents
  - Prompt engineering
  - Agent configuration
  - Copilot experience tuning
- Power BI Semantic Models
  - Understanding of how Copilot consumes metadata
- SQL & Data Modeling
  - Star schema concepts
  - Analytical data structures
AI & GenAI Skills
- Applied Generative AI concepts
- Prompt engineering and prompt chaining
- LLM behavior tuning and evaluation
- Grounding and context injection strategies
- Hallucination mitigation techniques
Nice-to-Have Skills
- Azure OpenAI integration experience
- Experience with evaluation frameworks (e.g., custom scoring, LLM-as-judge)
- Experience deploying AI solutions in regulated environments
- Familiarity with Power Platform or Teams Copilot integration
Experience Requirements
- 5+ years in data, analytics, or AI engineering
- 2+ years applying Generative AI in production or POC environments
- Proven experience with Microsoft Fabric and Copilot
- Strong background in Python-driven analytics or AI workflows