Data Engineer

  • Posted 8 hours ago

Job Description

Project Role : Data Engineer

Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.

Must have skills : PySpark

Good to have skills : NA

Minimum 3 year(s) of experience is required

Educational Qualification : 15 years of full-time education

Summary:

As a Data Engineer, a typical day involves designing, developing, and maintaining data solutions that support the generation, collection, and processing of data. The role requires building efficient data pipelines and ensuring data integrity and quality throughout the data lifecycle, as well as implementing extract, transform, and load (ETL) processes to migrate and deploy data across systems. Daily work also includes collaborating with other teams to align data strategies and optimize workflows so that the data infrastructure effectively supports organizational needs.

Roles & Responsibilities:

  • Act as a subject matter expert (SME); collaborate with and manage the team to deliver.
  • Responsible for team decisions.
  • Engage with multiple teams and contribute to key decisions.
  • Provide solutions to problems for the immediate team and across multiple teams.
  • Lead the development and optimization of data pipelines to support business requirements.
  • Ensure adherence to data governance and compliance standards within the team.
  • Mentor junior team members to foster skill development and knowledge sharing.

Professional & Technical Skills:

  • Must-have skills: Proficiency in PySpark.
  • Experience in designing and implementing scalable data processing workflows using distributed computing frameworks.
  • Strong knowledge of data integration techniques and ETL best practices.
  • Familiarity with cloud-based data platforms and storage solutions.
  • Ability to troubleshoot and optimize complex data pipelines for performance and reliability.
  • Experience with scripting and automation to streamline data operations.
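
The ETL work described above can be sketched as a toy pipeline. This is a plain-Python illustration of the extract-transform-load pattern only (the role itself calls for PySpark over distributed data); the record fields, region filter, and in-memory "target store" are all hypothetical.

```python
import csv
import io

# Hypothetical raw input: in a real pipeline this would come from files,
# a database, or an event stream rather than an inline string.
RAW_CSV = """id,region,amount
1,APAC,120.50
2,EMEA,80.00
3,APAC,200.25
"""

def extract(source: str) -> list[dict]:
    """Extract: parse raw CSV text into records."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: cast types and keep only APAC rows (a hypothetical rule)."""
    out = []
    for r in records:
        if r["region"] == "APAC":
            out.append({"id": int(r["id"]), "region": r["region"],
                        "amount": float(r["amount"])})
    return out

def load(records: list[dict]) -> dict[int, float]:
    """Load: write into a target store (here, just a dict keyed by id)."""
    return {r["id"]: r["amount"] for r in records}

if __name__ == "__main__":
    store = load(transform(extract(RAW_CSV)))
    print(store)
```

In PySpark the same three stages would typically map onto `spark.read`, DataFrame transformations such as `filter` and `withColumn`, and `DataFrame.write`, with Spark distributing each stage across the cluster.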

Additional Information:

  • The candidate should have a minimum of 5 years of experience in PySpark.
  • This position is based at our Indore office.
  • 15 years of full-time education is required.





Job ID: 146159225
