PwC India

AWS Data Engineer

Posted 27 days ago

Job Description

Please apply here: https://forms.office.com/r/3qQD7UruDs

Responsibilities:

  • Design, develop, and maintain ETL/ELT pipelines using AWS services (Glue, Lambda, Step Functions, Data Pipeline, EMR).
  • Build and optimize data lakes and data warehouses using Amazon S3, Redshift, Athena, and Snowflake (if applicable).
  • Implement and manage data ingestion frameworks from structured and unstructured data sources (APIs, RDBMS, streaming, etc.).
  • Ensure data quality, integrity, and security using tools like AWS Lake Formation, IAM, and KMS.
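The pipelines described above follow a standard extract/transform/load pattern. As a hedged illustration only, the sketch below shows that pattern in plain Python; in this role the same stages would typically run as an AWS Glue job in PySpark, reading from S3 and loading into Redshift. All names here (`extract`, `transform`, `load`, the sample records) are hypothetical.

```python
# Minimal ETL sketch in plain Python, illustrating the extract/transform/load
# stages that a Glue/PySpark job would run against S3 and Redshift.
# All record contents and function names are illustrative, not from the posting.
import json

RAW_EVENTS = [  # stand-in for raw JSON lines ingested from S3 or an API
    '{"user_id": "u1", "amount": "42.50", "currency": "usd"}',
    '{"user_id": "u2", "amount": "bad",   "currency": "usd"}',  # bad amount
    '{"user_id": "u3", "amount": "10.00", "currency": "eur"}',
]

def extract(lines):
    """Parse raw JSON lines; skip records that fail to parse."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(records):
    """Cast types and drop records that fail basic data-quality checks."""
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine bad rows for review
        rec["currency"] = rec["currency"].upper()
        yield rec

def load(records):
    """Collect into the target store (a list here; Redshift in practice)."""
    return list(records)

warehouse = load(transform(extract(RAW_EVENTS)))
print(len(warehouse))  # prints 2: the malformed row is dropped in transform
```

In a Glue job the same structure appears as DynamicFrame/DataFrame reads, column casts and filters, and a write to the warehouse, with Step Functions or Airflow handling orchestration across jobs.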

Mandatory skill sets:

  • Strong proficiency in SQL, Python, and PySpark.
  • Expertise in AWS data ecosystem – including Glue, Redshift, S3, Lambda, EMR, and Athena.

Preferred skill sets:

  • AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.

Years of experience required:

6–10 years

Education qualification:

B.Tech / M.Tech / MBA / MCA

Job ID: 145752499

Similar Jobs

Bengaluru, India

Skills:

GitHub, PySpark, Spark, Big Data, Python, SQL, AWS, Airflow

Bengaluru, India

Skills:

CloudWatch, Docker, Amazon S3, ECS, PySpark, AWS Glue, Redshift, Python, Athena, EKS

Bengaluru, India

Skills:

AWS Lambda, S3, RDS, PySpark, PostgreSQL, Scala, DynamoDB, EMR, SQL, Terraform, Python, AWS, Aurora, Glue

Bengaluru, India

Skills:

PySpark, SQL, ELT, S3, EMR, Data Modeling, AWS Services, Performance Tuning, ETL, Data Warehousing, Redshift, Athena, Airflow, Troubleshooting, Orchestration Tools, Glue, Big Data Architecture, Data Lakes

Bengaluru

Skills:

AWS S3, AWS Lambda, AWS Glue, API Gateway, AWS SQS, Python