  • Posted 5 months ago
  • Over 100 applicants

Job Description

Brief Description

At least 1 year of experience with Python, Spark, SQL, and data engineering

Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, Redshift/Snowflake

Relevant Experience: Migration of legacy ETL jobs to AWS Glue using Python and Spark

Role Scope

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow best practices and enable appropriate audit and control mechanisms

Apply analytical skills to identify root causes quickly and debug issues efficiently

Take ownership of the deliverables and support the deployments
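The unit-testing step above can be sketched in plain Python: test the migrated transformation logic and certify the load before it is wired into a Glue job. The function names, fixture, and thresholds here are hypothetical illustrations, not part of the posting.

```python
def dedupe_and_cast(rows):
    """Hypothetical migrated ETL logic: drop duplicate ids, cast amounts to float."""
    seen, out = set(), []
    for row in rows:
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        out.append({"id": row["id"], "amount": float(row["amount"])})
    return out

def certify_load(rows, expected_min_rows):
    """Certify a data load before system testing: minimum row count, no null amounts."""
    assert len(rows) >= expected_min_rows, "row count below threshold"
    assert all(r["amount"] is not None for r in rows), "null amounts found"

# Unit test against a small in-memory fixture
fixture = [
    {"id": 1, "amount": "10.5"},
    {"id": 1, "amount": "10.5"},  # duplicate, should be dropped
    {"id": 2, "amount": "3"},
]
result = dedupe_and_cast(fixture)
certify_load(result, expected_min_rows=2)
print(len(result))  # → 2
```

The same pattern scales to Glue: keep the transformation as a testable Python function, then call it from the Glue job script.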

Requirements

Create data pipelines for data integration into Cloud stacks, e.g. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet, and .avro files
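Processing those three formats typically means dispatching on the file extension. A minimal sketch follows; only the JSON Lines branch uses the standard library, while `pyarrow` and `fastavro` are assumed third-party readers (in Spark itself, `spark.read.json`/`spark.read.parquet` and the spark-avro package cover the same formats).

```python
import json
from pathlib import Path

def load_records(path):
    """Route a file to a reader based on its extension (illustrative sketch)."""
    suffix = Path(path).suffix.lower()
    if suffix == ".json":
        # Assumes JSON Lines: one JSON object per line
        with open(path) as f:
            return [json.loads(line) for line in f if line.strip()]
    if suffix == ".parquet":
        import pyarrow.parquet as pq  # assumption: pyarrow is installed
        return pq.read_table(path).to_pylist()
    if suffix == ".avro":
        import fastavro  # assumption: fastavro is installed
        with open(path, "rb") as f:
            return list(fastavro.reader(f))
    raise ValueError(f"unsupported format: {suffix}")
```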

Preferred Background

Tier 1/2 candidates from IITs/NITs/IIITs

However, relevant experience and a learning attitude take precedence

Skills: Python, PySpark, SQL, pandas, Cloud Computing, Microsoft Azure, and Big Data

Job ID: 128148669