We are seeking expert Data Engineers specializing in Databricks, PySpark, Scala, and SQL to build efficient data pipelines for US enterprise clients. This is an onsite role delivering complex data management solutions that handle high-volume live/streaming and batch data processing.
Key Responsibilities
- Design and build scalable data pipelines using Databricks and PySpark
- Process live/streaming data and batch workloads with optimal performance
- Implement data management solutions following industry best practices
- Collaborate with business units to deliver data efficiently to analytics teams
- Handle the high volume, velocity, and variety of real-world data
- Contribute to Center of Excellence (CoE) and Community of Practice (CoP) initiatives and technology evangelism
Required Technical Skills
Big Data: Databricks, PySpark, Scala, Spark Streaming, batch processing
Query & Processing: Advanced SQL, data pipeline optimization
Data Engineering: ETL/ELT design, data modeling, pipeline orchestration
Cloud: Azure Databricks, AWS EMR (preferred)