
Key Responsibilities
Design, build, and optimize ETL/ELT data pipelines using Snowflake and Databricks.
Develop scalable data models, schemas, and data warehousing solutions.
Implement complex data transformations using Spark (PySpark) and SQL.
Work with cloud platforms (AWS, Azure, GCP) for data ingestion, orchestration, and automation.
Ensure high data quality, sound governance, and optimized performance across data workflows.
Collaborate with data scientists, analysts, and cross-functional teams to deliver data solutions.
Build and deploy machine learning workflows and notebooks (preferred).
Support CI/CD practices for automated and reliable data engineering deployments.
Troubleshoot and resolve data pipeline issues in production environments.
Job ID: 141064993