
Job Description
Responsibilities:
- Design and develop data pipelines using AWS Glue and PySpark
- Work with large datasets and develop efficient data processing solutions
- Collaborate with cross-functional teams to deliver data engineering projects
- Develop and maintain data warehouses on RDBMS such as PostgreSQL
- Integrate data from various sources using APIs and JSON data
- Ensure data quality and integrity
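As a minimal sketch of the data-quality and JSON-integration work described above (the schema, field names, and sample payload are hypothetical, and a production pipeline would typically do this in PySpark on AWS Glue):

```python
import json

def clean_records(raw_json: str) -> list:
    """Parse a JSON payload of API records and keep only rows that
    pass basic data-quality checks (hypothetical schema: id, amount)."""
    records = json.loads(raw_json)
    cleaned = []
    for row in records:
        # Data-quality rules: id must be present, amount must be numeric
        if row.get("id") is None:
            continue
        if not isinstance(row.get("amount"), (int, float)):
            continue
        cleaned.append(row)
    return cleaned

payload = '[{"id": 1, "amount": 9.5}, {"id": null, "amount": 3}, {"id": 2, "amount": "bad"}]'
print(clean_records(payload))  # only the first record passes both checks
```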
Requirements:
- 5-8 years of experience in data engineering
- Strong expertise in Python, PySpark, AWS Glue, and RDBMS such as PostgreSQL
- Experience with APIs and JSON data is preferred
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
A global consulting firm headquartered in the USA, part of a multi-million-dollar multinational group, offering comprehensive and innovative business and technology solutions to global clients, with a prime focus on the BFSI and IT domains.
Our seasoned professionals bring experience across a broad spectrum of technologies, from cutting-edge platforms to legacy systems, enabling us to successfully serve a wide variety of customers.
Job ID: 105409523