Experience: 8 to 12 Years
Locations: Mumbai, Pune, Chennai, Bangalore
Primary Skills: Big Data, Spark, Scala, Python, AWS
Secondary Skills: Sqoop, Hive, Spark SQL, Kafka
Desired Skills & Responsibilities
Develop and implement data pipelines that extract, transform, and load data into information products
that help the organization reach its strategic goals
Work on ingesting, storing, processing and analyzing large data sets
Create scalable and high-performance web services for tracking data
Translate complex technical and functional requirements into detailed designs
Investigate and analyze alternative solutions for data storage and processing to ensure the most
streamlined approaches are implemented
Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs
Skills and Qualifications
Experience in Python, PySpark, Databricks, and Hive
Understanding of data warehousing and data modeling techniques
Knowledge of industry-standard analytics and visualization tools (e.g., Tableau and R)
Strong data engineering skills on any cloud platform
Familiarity with streaming frameworks such as Kafka
Knowledge of core Java, Linux, SQL, and any scripting language
Good interpersonal skills and a positive attitude
Degree in Computer Science, Mathematics, or Engineering
Expertise in ETL methodology (data extraction, transformation, and load processing) for corporate-wide
ETL solution design using DataStage