Role: Data Support Engineer (L3 Support)
Experience: 5+ years
Skills: Python, Databricks, SQL, PySpark
Notice period: Immediate to 15 days
Location: Bangalore
Roles and Responsibilities:
1. Data Engineering & Development
- Design, build, and optimize data pipelines and ETL/ELT workflows using Azure Data Factory (Classic/Fabric), PySpark, SQL, and Python.
- Develop scalable data solutions leveraging Azure Data Lake Storage Gen2, Delta Lake, Azure Synapse (serverless/dedicated), and Spark-based processing.
- Implement robust SQL logic including query tuning, window functions, and complex transformations.
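As an illustration of the SQL depth this section calls for, here is a minimal sketch of window-function logic (per-partition ranking and a running total), demonstrated with SQLite so it is self-contained; the `sales` table and its columns are invented for the example:

```python
import sqlite3

# Illustrative only: window functions of the kind described above,
# shown with an in-memory SQLite database and a made-up sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('east', 100), ('east', 200), ('west', 50), ('west', 150);
""")

# Rank each sale within its region and compute a per-region running total.
rows = conn.execute("""
SELECT region,
       amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk,
       SUM(amount)  OVER (PARTITION BY region ORDER BY amount)  AS running_total
FROM sales
ORDER BY region, amount
""").fetchall()

for row in rows:
    print(row)
```

The same `PARTITION BY` / `OVER` constructs carry over directly to Spark SQL and PySpark's `Window` API.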
2. Cloud Architecture & Integration
- Build and maintain modern data architectures on the Azure ecosystem, ensuring efficiency, scalability, and cost optimization.
- Integrate multiple data sources (structured, semi-structured, streaming) into centralized data platforms such as Lakehouse or Delta Lake.
3. Data Security, Governance & Compliance
- Apply strong governance principles using Azure/Microsoft Purview, including cataloging, lineage, classification, and policy enforcement.
- Implement and manage RBAC, data masking, key management, and secure access controls.
- Enforce compliance with enterprise data, privacy, and security standards.
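To make the data-masking responsibility concrete, here is a simple sketch of a column-masking helper for PII. Real platforms typically use built-in dynamic data masking rather than hand-rolled functions; the helper name and masking rule below are invented for illustration:

```python
# Illustrative only: a toy PII-masking helper. Production masking would
# normally use the platform's dynamic data masking features instead.
def mask_email(email: str) -> str:
    """Keep the first character and the domain; mask the rest of the local part."""
    local, _, domain = email.partition("@")
    if not domain:          # not a well-formed email; mask everything
        return "***"
    return local[:1] + "***@" + domain

print(mask_email("jane.doe@example.com"))  # j***@example.com
```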
4. CI/CD, DevOps & Version Control
- Use Git, Azure DevOps, or GitHub Actions for version control, deployment automation, and release management.
- Build and maintain CI/CD pipelines for data workloads, ensuring quality, consistency, and repeatability.
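As a rough sketch of the CI/CD expectation, an Azure DevOps pipeline for a Python data project might look like the fragment below; the trigger branch, file paths, and test layout are placeholders, not a prescribed setup:

```yaml
# Illustrative azure-pipelines.yml fragment (paths and names are placeholders).
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/
    displayName: Run unit tests
```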
5. Consulting & Client Engagement
- Engage directly with clients to gather requirements, participate in workshops, and contribute to solution design.
- Support pre‑sales activities, including scoping, effort estimation, proposal creation, and technical presentations.
- Deliver solutions across multi-client environments, managing expectations and ensuring customer satisfaction.
6. Collaboration, Leadership & Delivery
- Lead agile ceremonies, coordinate task planning, and ensure on-time delivery of project milestones.
- Mentor junior engineers on coding standards, best practices, and architectural guidelines.
- Ensure adherence to engineering excellence through documentation, reviews, and quality checks.
7. Advanced & Preferred Responsibilities (Nice to Have)
- Develop or optimize workloads using Databricks (DBSQL, Jobs, Delta Live Tables).
- Support migration efforts from Azure Data Factory to Microsoft Fabric pipelines.
- Work with real-time/streaming technologies such as Event Hubs, Kafka, or Fabric Real-Time Intelligence.
- Implement advanced models and optimizations in Power BI, including Direct Lake, composite models, incremental refresh, and semantic model design.
- Apply knowledge of Microsoft Fabric components including OneLake, Lakehouse, Synapse in Fabric, and Data Engineering workloads.
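The Databricks and Delta Lake items above center on the MERGE/upsert pattern. The sketch below illustrates that pattern with SQLite's `ON CONFLICT ... DO UPDATE` so it stays self-contained; in Delta Lake this would be a `MERGE INTO` statement, and the table and column names here are invented:

```python
import sqlite3

# Illustrative only: the MERGE/upsert pattern behind Delta Lake workloads,
# shown with SQLite UPSERT syntax and a made-up customers table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Asha', '2024-01-01')")

# An incoming batch containing one updated record and one new record.
batch = [(1, 'Asha K', '2024-02-01'), (2, 'Ravi', '2024-02-01')]
conn.executemany("""
INSERT INTO customers (id, name, updated_at) VALUES (?, ?, ?)
ON CONFLICT(id) DO UPDATE SET name = excluded.name, updated_at = excluded.updated_at
""", batch)

result = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
print(result)  # [(1, 'Asha K'), (2, 'Ravi')]
```

Because the operation is keyed on `id`, re-running the same batch leaves the table unchanged, which is the idempotency property incremental pipelines rely on.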
8. Behavioral & Consulting Competencies
- Demonstrate strong client communication skills, including whiteboarding, storytelling, and translating technical concepts for business stakeholders.
- Show ownership, accountability, and an ability to solve problems independently and proactively.
- Produce clear, structured documentation across design, pipelines, data models, and solution artifacts.
- Drive collaboration across cross-functional teams and ensure alignment with customer and delivery goals.