Job Description
About KPMG in India
KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada.
KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focussed and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.
The person will work on a variety of projects in a highly collaborative, fast-paced environment and will be responsible for software development activities at KPMG India. As part of the development team, they will work on the full software development life cycle, developing code and unit tests. They will work closely with the Technical Architect, Business Analyst, user-interaction designers, and other software engineers to develop new product offerings and improve existing ones. Additionally, they will ensure that all development practices comply with KPMG's best-practice policies and procedures. This role requires ramping up quickly on new technologies whenever required.
Responsibilities
Role : AWS Data Engineer
Location: Gurugram
Experience: 2 to 4 years
Key Responsibilities
- Databricks & Spark Development
  - Design, develop, and optimize ETL/ELT pipelines using Databricks (PySpark/Spark SQL).
  - Build scalable data transformation workflows for batch and streaming applications.
  - Implement best practices for notebook development, orchestration, and job workflows.
- Work with AWS services such as:
  - S3 (data lake storage)
  - Lambda
  - Glue
  - Athena
  - EC2
  - IAM
  - CloudWatch
  - Step Functions (optional)
- Deploy, monitor, and troubleshoot data workflows on AWS.
- Integrate data from multiple sources (databases, APIs, flat files).
- Ensure high data quality, validation, logging, and error-handling frameworks in pipelines.
- Work with REST APIs, relational databases (MySQL/PostgreSQL), or NoSQL stores (DynamoDB).
- Perform data modeling for data lakes, delta lakes, and analytical datasets.
- Optimize Spark jobs for performance and cost efficiency.
- Work closely with data analysts, data scientists, and business stakeholders.
- Maintain clear documentation, version control (Git), and CI/CD practices.
Good to Have
- Experience with AWS Glue ETL, Glue Catalog, or Glue Jobs.
- Exposure to CI/CD tools (GitHub Actions, Bitbucket Pipelines, Azure DevOps, Jenkins).
- Understanding of DevOps concepts and infrastructure-as-code (Terraform/CloudFormation).
- Knowledge of streaming technologies such as Kafka or Kinesis (optional).
Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Equal Opportunity Employer
KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability, or other legally protected status. KPMG India values diversity, and we request you to submit the details below to support us in our endeavor for diversity. Providing this information is voluntary, and declining to do so will not be prejudicial to you.