
Years of experience: 6-10 years
Location: Bangalore
Roles & Responsibilities:
Design, build, and maintain scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark.
Develop and optimize Spark data workflows in Scala for large-scale data processing and transformation (a small Scala sketch of such a workflow appears after this list).
Implement performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling (see the tuning sketch after this list).
Collaborate with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows.
Collaborate with cross-functional teams and stakeholders to understand data requirements and deliver high-quality solutions.
Ensure data quality and integrity by implementing validation, error-handling, and monitoring mechanisms (see the validation sketch after this list).
Work with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem.
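To make the Spark-with-Scala responsibilities above concrete, here is a minimal sketch of an ETL job that reads Parquet, applies simple transformations, and writes a partitioned Delta table. The paths, column names, and job name are hypothetical illustrations, not part of the role description; on Databricks the SparkSession is supplied by the runtime rather than built explicitly.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesEtlSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; Databricks provides `spark` automatically.
    val spark = SparkSession.builder()
      .appName("sales-etl-sketch")
      .getOrCreate()

    // Hypothetical ADLS source and target paths.
    val rawPath     = "abfss://raw@account.dfs.core.windows.net/sales/"
    val curatedPath = "abfss://curated@account.dfs.core.windows.net/sales_delta/"

    // Extract: read raw Parquet files.
    val raw = spark.read.parquet(rawPath)

    // Transform: basic cleansing and derived columns.
    val curated = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("order_ts")))
      .withColumn("net_amount", col("gross_amount") - col("discount"))

    // Load: write a Delta table partitioned by date (requires delta-spark on the classpath).
    curated.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save(curatedPath)

    spark.stop()
  }
}
```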
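For the performance-tuning responsibility, a sketch of common Spark optimizations is shown below: enabling adaptive query execution, setting a shuffle-partition starting point, and broadcasting a small dimension table to avoid a large shuffle. The table paths, join key, and threshold values are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object TuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuning-sketch")
      // Adaptive query execution lets Spark coalesce shuffle partitions at runtime.
      .config("spark.sql.adaptive.enabled", "true")
      .config("spark.sql.shuffle.partitions", "400") // starting point; tune to data volume
      .getOrCreate()

    val facts = spark.read.parquet("/data/facts")      // hypothetical large fact table
    val dims  = spark.read.parquet("/data/dimensions") // hypothetical small lookup table

    // Broadcast the small side so the large table is not shuffled for the join.
    val joined = facts.join(broadcast(dims), Seq("dim_key"))

    // Cache only when the result is reused by several downstream actions.
    joined.cache()
    println(joined.count())

    spark.stop()
  }
}
```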
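The data-quality responsibility could be approached as in the validation sketch below: rows failing simple rules are quarantined rather than silently dropped, and the job fails if the rejection rate crosses a threshold, giving a basic monitoring signal. The rules, paths, and 5% threshold are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object QualityCheckSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-check-sketch").getOrCreate()

    val input = spark.read.parquet("/data/incoming") // hypothetical input path

    // Rule: mandatory key present and amount non-negative.
    val isValid = col("order_id").isNotNull && col("amount") >= 0

    val valid   = input.filter(isValid)
    val invalid = input.filter(!isValid)

    // Quarantine rejected rows for later inspection instead of failing the whole load.
    invalid.write.mode("append").parquet("/data/quarantine")

    // Monitoring signal: abort if the rejection rate exceeds the (assumed) 5% threshold.
    val total = input.count()
    val bad   = invalid.count()
    if (total > 0 && bad.toDouble / total > 0.05)
      throw new RuntimeException(s"Rejected $bad of $total rows; threshold exceeded")

    valid.write.mode("append").parquet("/data/curated")
    spark.stop()
  }
}
```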
Job ID: 145264051