
Job Title: Data Engineer (Python, PySpark, AWS, SQL)
Location: Bengaluru
Experience: 5+ Years
Work Mode: Work from Office (5 days a week)
Employment Type: Full-Time
Job Description:
We are looking for an experienced Data Engineer with strong expertise in data processing, cloud technologies, and scalable data pipeline development. The ideal candidate should have hands-on experience in Python, PySpark, AWS, and SQL.
Note: Only Immediate Joiners will be considered.
Must-Have Skills (Mandatory):
Strong programming experience in Python
Hands-on experience with PySpark for large-scale data processing
Good exposure to Amazon Web Services (S3, Glue, Lambda, Redshift, etc.)
Strong SQL skills for data querying and optimization
Experience in building and maintaining data pipelines (ETL/ELT)
Minimum of 5 years of relevant experience
Immediate Joiner preferred
Key Responsibilities:
Design and develop scalable data pipelines using Python and PySpark
Work with AWS services for data storage and processing
Perform data extraction, transformation, and loading (ETL/ELT)
Optimize SQL queries to improve data-processing performance
Collaborate with data analysts and business teams on data solutions
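The ETL responsibilities above could look, in highly simplified form, like the sketch below. It uses only the Python standard library (in-memory SQLite, hypothetical event data) as a stand-in; a production pipeline for this role would instead read from S3 and load into Redshift via PySpark.

```python
import sqlite3

def run_etl(rows):
    """Toy extract-transform-load step over (user_id, amount) records."""
    # Transform: drop malformed records, aggregate amount per user.
    totals = {}
    for user_id, amount in rows:
        if user_id is None or amount is None:
            continue  # skip bad records
        totals[user_id] = totals.get(user_id, 0) + amount
    # Load: write the aggregates into a SQLite table (stand-in for Redshift).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE user_totals (user_id TEXT, total REAL)")
    conn.executemany("INSERT INTO user_totals VALUES (?, ?)", totals.items())
    return dict(conn.execute("SELECT user_id, total FROM user_totals"))

print(run_etl([("a", 10.0), ("b", 5.0), ("a", 2.5), (None, 1.0)]))
# {'a': 12.5, 'b': 5.0}
```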
Job ID: 147186017
Skills:
Snowflake, Apache Airflow, Data Modelling, Python, SQL, AWS (Glue, Lambda, Step Functions), dbt, CI/CD