Job Title: Azure Data Factory Engineer
Experience: 7+ years
Location: PAN India (Preferred: Bangalore / Mumbai)
Key Responsibilities
- Design, build, and deploy data pipelines using Fabric, Azure Data Factory, PySpark, SparkSQL, SQL, and Azure DevOps (a minimal PySpark sketch follows this list).
- Develop and maintain ETL scripts and workflows that integrate data across multiple sources.
- Analyze functional specifications and design dimensional data models, KPIs, and metrics to support business reporting needs.
- Ingest data from diverse applications while meeting business SLAs.
- Implement data security measures including encryption, masking, and access controls.
- Define and maintain data validation rules, quality checks, and profiling reports.
- Execute data migration and conversion from legacy systems to modern cloud-based data platforms.
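
For a flavor of the hands-on work, below is a minimal PySpark sketch that combines three of the responsibilities above: ingestion, masking of a sensitive column, and rule-based validation with a quarantine path. All paths, column names, and rules are illustrative assumptions, not a prescribed implementation.

```python
# Minimal PySpark ETL sketch: ingest, mask, validate, write.
# Paths, column names, and rules are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-etl-sketch").getOrCreate()

# Ingest: read raw records landed by an upstream ADF copy activity (hypothetical path).
raw = spark.read.parquet("abfss://landing@examplelake.dfs.core.windows.net/customers/")

# Security: mask the local part of email addresses before data leaves the landing zone.
masked = raw.withColumn(
    "email",
    F.concat(F.lit("***@"), F.split(F.col("email"), "@").getItem(1)),
)

# Validation: route rows that fail basic quality rules to quarantine instead of dropping them.
valid = masked.filter(F.col("customer_id").isNotNull() & (F.col("age") >= 0))
rejects = masked.subtract(valid)

valid.write.mode("overwrite").parquet("abfss://curated@examplelake.dfs.core.windows.net/customers/")
rejects.write.mode("append").parquet("abfss://quarantine@examplelake.dfs.core.windows.net/customers/")
```

In practice, the same steps would be parameterized and orchestrated as activities within an ADF or Fabric pipeline.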
Primary Skills
- 5+ years of hands-on experience in Azure Data Factory for pipeline orchestration and integration.
- Strong expertise in PySpark & SparkSQL for distributed data processing.
- Advanced SQL programming skills, including query optimization.
- Deep understanding of Data Warehousing concepts and dimensional modeling (see the SparkSQL sketch after this list).
- Proven experience in end-to-end data engineering projects with large-scale datasets.
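
To make the dimensional-modeling and query-optimization expectations concrete, the SparkSQL sketch below rolls a star-schema join up into a monthly revenue KPI; the broadcast hint is one common optimization when the dimension table is small relative to the fact table. The fact_sales and dim_product tables are hypothetical and assumed to be registered in the metastore.

```python
# Star-schema KPI sketch in SparkSQL; table and column names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kpi-sketch").getOrCreate()

monthly_revenue = spark.sql("""
    SELECT /*+ BROADCAST(p) */
           p.category,
           date_format(f.sale_date, 'yyyy-MM') AS sale_month,
           SUM(f.quantity * f.unit_price)      AS revenue
    FROM   fact_sales f
    JOIN   dim_product p
      ON   f.product_key = p.product_key
    GROUP  BY p.category, date_format(f.sale_date, 'yyyy-MM')
""")

monthly_revenue.show()
```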
Core Skills
- Azure Data Factory | PySpark | SparkSQL
- Advanced SQL | Data Warehousing | ETL Development
- Fabric | Azure DevOps | CI/CD Pipelines (pipeline-trigger sketch below)
- Dimensional Modeling | KPI & Metrics Development
- Data Migration | Data Security | Quality & Validation
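
As one illustration of where ADF, Azure DevOps, and CI/CD meet, the sketch below triggers an ADF pipeline run from Python via the azure-mgmt-datafactory SDK, as it might run inside an Azure DevOps release job; all resource names and the pipeline parameter are placeholders.

```python
# Hedged sketch: start an ADF pipeline run from a CI/CD job.
# Subscription, resource group, factory, pipeline name, and parameters are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()  # resolves service-connection credentials in Azure DevOps
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

run = adf_client.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-data-platform",
    pipeline_name="pl_ingest_customers",
    parameters={"load_date": "2024-01-01"},
)
print(f"Started pipeline run: {run.run_id}")
```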