We are looking for a highly skilled Data Engineer with expertise in PySpark and Databricks to design, build, and optimize scalable data pipelines for processing massive datasets.
Key Responsibilities
Build & Optimize Pipelines: Develop high-throughput ETL workflows using PySpark on Databricks.
Data Architecture & Engineering: Design distributed data-processing solutions, tune Spark jobs for performance, and build efficient data models.