
Important Note:
Only immediate joiners, or candidates who can join on or before May 31, 2026, should apply.
Profiles of candidates with longer notice periods will be rejected.
Key Skills Required:
• Strong hands-on experience with dbt (data build tool)
• Good knowledge of:
• Strong SQL skills with experience in:
• Basic working knowledge of:
Responsibilities:
• Develop and maintain data transformation pipelines using dbt
• Write optimized and complex SQL queries for analytics and reporting
• Work with large-scale datasets in Hive/Spark environments
• Support workflow orchestration using Airflow
• Ensure data quality, performance, and scalability of pipelines
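The dbt transformation and data-quality responsibilities above can be sketched in a minimal, self-contained example. The snippet below uses Python's built-in sqlite3 as a stand-in for the Hive/Spark warehouse (an assumption for illustration only; in a real dbt project the model and test would be plain SQL files compiled against the actual warehouse):

```python
import sqlite3

# Stand-in warehouse: the role targets Hive/Spark, but sqlite3 lets this
# sketch run anywhere. Table and column names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_orders (order_id INTEGER, order_date TEXT, amount REAL);
INSERT INTO raw_orders VALUES
  (1, '2026-01-01', 10.0),
  (2, '2026-01-01', 15.5),
  (3, '2026-01-02', 7.25);
""")

# A dbt-style transformation model: aggregate raw orders into a daily mart.
MODEL_SQL = """
SELECT order_date,
       COUNT(*)    AS order_count,
       SUM(amount) AS revenue
FROM raw_orders
GROUP BY order_date
ORDER BY order_date
"""
rows = conn.execute(MODEL_SQL).fetchall()
print(rows)  # [('2026-01-01', 2, 25.5), ('2026-01-02', 1, 7.25)]

# A dbt-style data-quality test: no NULL keys, no negative amounts.
bad = conn.execute(
    "SELECT COUNT(*) FROM raw_orders WHERE order_id IS NULL OR amount < 0"
).fetchone()[0]
assert bad == 0
```

In an actual dbt project, the `MODEL_SQL` would live in a `.sql` file under `models/`, and the quality check would be declared as a dbt test and scheduled via an Airflow DAG.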
Preferred Candidate Profile:
• Good analytical and problem-solving skills
• Experience in Data Engineering or Analytics Engineering projects
Job ID: 147402695
Skills:
T-sql, Pyspark, Plsql, Spark, Data Warehousing, Sql, ETL Fundamentals, Advanced Python, Stored Procedures, Data Modelling Fundamentals, Modern Data Platform Fundamentals