We are seeking an experienced
Senior Data Engineer to design, develop, and optimize data solutions using
Microsoft Fabric. This role requires deep technical expertise in modern data engineering practices and Microsoft's unified analytics platform. Comparable experience with Databricks would also be considered.
Key Responsibilities:
- Design and implement scalable data pipelines and ETL/ELT processes within Microsoft Fabric using a code-first approach
- Develop and maintain notebooks, data pipelines, workspaces, and other Fabric item configurations
- Build and optimize data architectures using delta tables, lakehouse, and data warehouse patterns
- Implement data modeling solutions, including star schemas, snowflake schemas, and slowly changing dimensions (SCDs)
- Performance-tune Delta, Spark, and SQL workloads through partitioning, liquid clustering, and other advanced optimization techniques
- Develop and deploy Fabric solutions using CI/CD practices via Azure DevOps
- Integrate and orchestrate data workflows using Fabric Data Agents and REST APIs
- Collaborate with development teams and stakeholders to translate business requirements into technical solutions
Requirements:
Total experience: minimum 6 years, with 2+ years in Microsoft Fabric
Microsoft Fabric Expertise:
- Hands-on experience with Fabric notebooks, pipelines, and workspace configuration
- Fabric Data Agent implementation and orchestration
- Fabric CLI and CI/CD deployment practices
Programming & Development:
- Python (advanced proficiency)
- PySpark for distributed data processing
- Pandas and Polars for data manipulation
- Experience with Python libraries for data engineering workflows
- REST API development and integration
Data Platform & Storage:
- Delta Lake and Iceberg table formats
- Delta table optimization techniques (partitioning, Z-ordering, liquid clustering)
- Spark performance tuning and optimization
- SQL query optimization and performance tuning
Development Environment:
- Visual Studio Code
- Azure DevOps for CI/CD and deployment pipelines
- Experience with both code-first and low-code development approaches
Data Modeling:
- Data warehouse dimensional modeling (star schema, snowflake schema)
- Slowly Changing Dimensions (SCD Type 1, 2, 3)
- Modern lakehouse architecture patterns
- Metadata-driven approaches
Preferred Qualifications:
- 5+ years of data engineering experience
- Previous experience with large-scale data platforms and enterprise analytics solutions
- Strong understanding of data governance and security best practices
- Experience with Agile/Scrum methodologies
- Excellent problem-solving and communication skills
Benefits:
Exciting Projects: We focus on industries like high-tech, communications, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication-skills training, stress-management programs, professional certifications, and technical and soft-skills training.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, periodic health-awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.