  • Posted a month ago
  • Over 50 applicants

Job Description

Technical Skills:

6-9 years of relevant work experience.

Must be fully conversant with big-data processing approaches and schema-on-read methodologies; knowledge of Azure Data Factory, Azure Databricks (PySpark), Azure Data Lake Storage (ADLS Gen2), Azure Synapse, and Microsoft Fabric is required.

Excellent development skills and extensive hands-on coding experience in a variety of languages, e.g., Python (PySpark compulsory), SQL, and Power BI DAX (good to have).

Experience designing solutions for a cloud data warehouse and working with an Architect on generic implementation designs for ETL/ELT pipelines.

Experience implementing robust ETL/ELT pipelines in Azure Data Factory / Azure Synapse Pipelines, including error handling, performance improvements, and identifying performance bottlenecks.
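The error-handling expectation above can be illustrated with a minimal retry wrapper. This is a plain-Python sketch, not ADF code: the step name and backoff policy are assumptions, standing in for the retry/timeout policies configured on a pipeline activity.

```python
import time

def run_with_retry(step, *args, retries=3, backoff_s=0.0):
    """Run one pipeline step, retrying transient failures with linear
    backoff, mirroring the retry policy set on an ADF/Synapse activity."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return step(*args)
        except Exception as err:  # in ADF this maps to the activity's error output
            last_err = err
            time.sleep(backoff_s * attempt)
    raise RuntimeError(f"step failed after {retries} attempts") from last_err
```

In a real pipeline the equivalent knobs live on the activity definition (retry count, retry interval, timeout) rather than in application code.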

Should have knowledge of designing a reporting framework layer for Power BI / Tableau.

Roles and Responsibilities:

Design, implement, and deliver solutions on batch and streaming data.

Good to have: experience working on a data warehouse using different data modelling techniques (Kimball, SCD Type 2 (Slowly Changing Dimensions)).
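The SCD Type 2 technique mentioned above can be sketched in plain Python. This is only an illustration of the pattern (expire the current row, append a new current version); the column names (`is_current`, `start_date`, `end_date`) are assumptions, and in practice this is done with a warehouse MERGE or Delta Lake merge rather than in-memory dicts.

```python
from datetime import date

def apply_scd2(dimension, incoming, key, today):
    """SCD Type 2 upsert: if the attributes of the current row for a
    business key changed, close that row and append a new current row,
    preserving full history; unknown keys are simply inserted."""
    out = []
    matched = False
    for row in dimension:
        if row[key] == incoming[key] and row["is_current"]:
            matched = True
            if {k: row[k] for k in incoming} != incoming:
                # Attribute change: expire the old version ...
                out.append(dict(row, is_current=False, end_date=today))
                # ... and open a new current version.
                out.append(dict(incoming, is_current=True,
                                start_date=today, end_date=None))
            else:
                out.append(row)  # no change, keep as-is
        else:
            out.append(row)
    if not matched:  # brand-new business key
        out.append(dict(incoming, is_current=True,
                        start_date=today, end_date=None))
    return out
```

For example, a customer changing city yields two rows for the same key: the old one closed with an `end_date`, the new one flagged current.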

Work in a collaborative, Agile environment to deliver data products.

Adapt to new technology advancements and keep learning on Microsoft Azure and competing technologies (no hands-on experience with competitors needed, but should be able to compare them).

Ensure that client deliveries are made on time and with quality.

Coordinate with the Engineering Team, Architect, and Client through the PM for delivery.

Align with the organization's vision for data practices and work towards improving skills as needed.

More Info

Open to candidates from: India

Job ID: 130607827
