As an Azure Data Engineer at Fusion Plus Solutions, you will be responsible for designing, developing, implementing, and optimizing data pipelines and solutions within the Microsoft Azure ecosystem. You will play a crucial role in enabling data-driven decision-making by ensuring data is accessible, reliable, and performant.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data ingestion, processing, and transformation pipelines using various Azure data services.
- Work with diverse data sources, including structured, semi-structured, and unstructured data, applying appropriate ETL (Extract, Transform, Load) or ELT strategies.
- Implement and optimize data storage solutions using Azure Data Lake Storage, Azure SQL Database, Azure Synapse Analytics, Azure Cosmos DB, or other relevant Azure services based on project requirements.
- Develop and maintain data models and schemas to support analytics, reporting, and machine learning initiatives.
- Ensure data quality, integrity, and security across all data pipelines and storage solutions.
- Collaborate with data scientists, data analysts, and other stakeholders to understand data requirements and deliver effective data solutions.
- Monitor, troubleshoot, and optimize data workflows and processing performance on Azure.
- Implement data governance and compliance policies within the Azure data environment.
- Automate data processes and workflows using Azure Data Factory and Azure Databricks, along with scripting languages such as Python, Scala, and SQL.
Mandatory Skills:
- 5+ years of total experience in data engineering, with a minimum of 3 years of relevant experience in Azure Data Engineering.
- Proven experience with relational databases such as Oracle or SQL Server.
- Strong hands-on experience with Big Data and Cloud platforms, specifically Microsoft Azure. This includes proficiency with services such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage (Gen2), Azure Synapse Analytics, and Azure SQL Database.
- Solid understanding of data warehousing concepts, data modeling, and ETL/ELT processes.
- Proficiency in SQL for data manipulation, querying, and optimization.
Good to Have Skills:
- Hands-on experience with ETL tools and custom coding for structured, semi-structured, and unstructured data.
- Experience with other cloud and Big Data platforms, such as Google BigQuery, AWS, or Cloudera, is a plus.
- Familiarity with scripting languages like Python or Scala for data processing and automation.
- Knowledge of real-time data processing concepts and technologies (e.g., Azure Stream Analytics, Azure Event Hubs).
- Experience with data visualization tools (e.g., Power BI) and integrating with Azure data services.
- Understanding of DevOps practices for data pipelines (CI/CD).
Interview Process:
- Client Interview / F2F Applicable: Yes
Background Check Process:
- Background check will be initiated post-onboarding through a designated BGV agency.