We are seeking an experienced Data Engineer to design, develop, and optimize data pipelines, workflows, and integrations across enterprise platforms. The ideal candidate will have a strong background in SQL, Data Warehousing (DWH), Azure, and ETL tools, along with experience in large-scale data processing and cloud-based data solutions.
Key Responsibilities:
- Implement data structures, workflows, and integrations to ensure seamless and accurate business process execution.
- Collaborate with architects, client leads, and technology teams to define and develop data pipelines.
- Develop and maintain scalable data pipelines and build new API integrations to support growing data needs.
- Analyze source data and create data mappings that support efficient transformations.
- Optimize database performance, ensuring efficient query execution and scalability.
- Design, implement, and manage data warehouses, databases, tables, and SQL queries to power reporting tools such as Tableau.
- Write complex and efficient SQL queries to transform raw data into structured models for analytics.
- Work closely with analytics, data science, and engineering teams to automate data analysis, enhance data visualization, and improve data transformation processes.
- Utilize Azure Data Lake (ADL), Azure Data Factory (ADF), and dbt to curate and process data into meaningful insights.
- Provide technical guidance to junior team members, assist with client interactions, and manage task allocation and reporting.
Required Qualifications & Skills:
- Engineering Degree (BE/BTech/MCA/BCA/BSc IT) or equivalent.
- Proven experience as a Data Engineer with hands-on expertise in SQL, Data Warehousing (DWH), Azure, Azure Data Lake (ADL), Azure Data Factory (ADF), and Snowflake.
- Strong understanding of ETL tools such as SSIS or DataStage.
Preferred Skills (Not Mandatory):
- Experience in Big Data technologies such as Hive, Impala, and Spark.