
Data Engineer

  • Posted 2 days ago
  • Be among the first 10 applicants

Job Description

Project Role: Data Engineer

Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.

Must-have skills: PySpark

Good-to-have skills: NA

Minimum experience required: 3 years

Educational qualification: 15 years of full-time education

Summary:

As a Data Engineer, your typical day involves designing, developing, and maintaining comprehensive data solutions that support the generation, collection, and processing of data. You will be responsible for creating efficient data pipelines that facilitate smooth data flow and ensure the integrity and quality of data throughout its lifecycle. Your role includes implementing processes to extract, transform, and load data, enabling seamless migration and deployment across various systems. This position requires a proactive approach to managing data infrastructure and collaborating with different teams to support organizational data needs effectively.
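The extract-transform-load cycle described above can be sketched in plain Python. This is a minimal illustration only, not the employer's actual stack; the field names and the data-quality rule are invented for the example:

```python
# Minimal ETL sketch: extract raw records, transform (validate + normalize),
# then load the clean rows into a target store. All field names are hypothetical.

def extract(raw_rows):
    """Extract: parse raw CSV-like strings into dicts."""
    for line in raw_rows:
        name, amount = line.split(",")
        yield {"name": name.strip(), "amount": amount.strip()}

def transform(rows):
    """Transform: enforce a data-quality rule (drop unparseable amounts) and normalize names."""
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # skip records that fail validation
        yield {"name": row["name"].title(), "amount": amount}

def load(rows, target):
    """Load: append validated records to the target store."""
    for row in rows:
        target.append(row)
    return target

raw = ["alice, 10.5", "bob, oops", "carol, 3"]
store = load(transform(extract(raw)), [])
# the record with amount "oops" is dropped by the data-quality rule
```

In a production pipeline each stage would read from and write to real systems (object storage, a warehouse, a message queue), but the staged structure is the same.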

Roles & Responsibilities:

  • Work independently and grow into a subject matter expert (SME).
  • Participate actively in team discussions.
  • Contribute solutions to work-related problems.
  • Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
  • Monitor and optimize data workflows to improve performance and reliability.
  • Document data processes and maintain clear communication regarding data architecture and pipeline status.
  • Assist junior team members by sharing knowledge and providing guidance on best practices.

Professional & Technical Skills:

  • Must-have skills: Proficiency in PySpark.
  • Experience in building and managing scalable data pipelines and workflows.
  • Strong understanding of data processing frameworks and distributed computing.
  • Ability to troubleshoot and optimize complex data processing tasks.
  • Familiarity with data storage solutions and data integration techniques.
  • Knowledge of performance tuning and resource management in big data environments.
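The distributed-computing pattern behind frameworks like PySpark can be illustrated in plain Python: data is split into partitions, each partition is aggregated independently (as an executor would), and the partial results are merged. This is a conceptual sketch only; real PySpark schedules these steps across a cluster:

```python
# Conceptual map/reduce over partitions, mimicking how a distributed
# engine aggregates data. Function names here are illustrative, not PySpark API.
from functools import reduce

def partition(data, n):
    """Split data into at most n roughly equal partitions, as a cluster would."""
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

def partial_sum(part):
    """Map step: each 'executor' aggregates its own partition."""
    return sum(part)

def merge(a, b):
    """Reduce step: combine partial results from the executors."""
    return a + b

data = list(range(1, 101))
parts = partition(data, 4)
total = reduce(merge, map(partial_sum, parts))  # same result as sum(data)
```

The key design point, which carries over to tuning real pipelines, is that the per-partition work is independent, so it parallelizes, while the merge step must be associative so partial results can be combined in any order.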

Additional Information:

  • The candidate should have a minimum of 3 years of experience in PySpark.
  • This position is based at our Indore office.
  • 15 years of full-time education is required.





Job ID: 146194495
