Role: Data Engineer – Kafka & Spark
Location: Gurugram | Delhi NCR
Experience: 6+ years
Immediate Joiners Only
- Data Engineer + Kafka (6+ years experience)
- Primary tools/platforms: data engineering skills on any cloud, Kafka, streaming data processing experience
  - Languages: SQL, PySpark
  - Expertise in SQL; able to write complex SQL queries for data analysis
  - SQL: aggregation, window functions, joins, performance tuning
  - Spark: expertise in Spark architecture and processing; able to write performance-optimized code
  - Data modeling: experience with ER modeling and dimensional modeling (fact/dimension tables)
  - ETL: solid understanding of standard ETL processes, e.g., SCD, delta loads
  - Experience in designing and developing end-to-end data pipelines
- Good to have: data visualization tool knowledge, e.g., Power BI or Tableau
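To illustrate the SQL skills listed above (aggregation, window functions, joins, performance), here is a minimal sketch of the kind of query a candidate might be asked to write, run through Python's built-in sqlite3 module; the `orders` table and its columns are hypothetical:

```python
import sqlite3

# Hypothetical orders table used to illustrate aggregation and window functions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  ('alice', '2024-01-01', 100.0),
  ('alice', '2024-01-05', 50.0),
  ('bob',   '2024-01-02', 75.0);
""")

# Running total per customer via a window function (requires SQLite >= 3.25).
rows = conn.execute("""
SELECT customer,
       order_date,
       amount,
       SUM(amount) OVER (PARTITION BY customer ORDER BY order_date) AS running_total
FROM orders
ORDER BY customer, order_date
""").fetchall()

for row in rows:
    print(row)
```

The same `PARTITION BY ... ORDER BY` pattern carries over directly to Spark SQL and PySpark's `Window` API.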
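The ETL concepts named above (SCD, delta loads) can be sketched in plain Python. Below is a minimal, hypothetical Type 2 slowly changing dimension merge: changed rows are expired and re-inserted as new versions; the function name, keys, and data are all illustrative assumptions, not a production implementation:

```python
from datetime import date

def scd2_merge(dimension, updates, today):
    """Apply a Type 2 SCD merge over an incoming delta (illustrative sketch).

    dimension: list of dicts with keys id, attr, valid_from, valid_to, current
    updates:   list of dicts with keys id, attr (the incoming delta load)
    """
    current_by_id = {row["id"]: row for row in dimension if row["current"]}
    result = list(dimension)
    for upd in updates:
        cur = current_by_id.get(upd["id"])
        if cur is None:
            # New key: plain insert of a first, open-ended version.
            result.append({**upd, "valid_from": today, "valid_to": None, "current": True})
        elif cur["attr"] != upd["attr"]:
            # Changed attribute: expire the old version, append a new current one.
            cur["valid_to"] = today
            cur["current"] = False
            result.append({**upd, "valid_from": today, "valid_to": None, "current": True})
    return result

dim = [{"id": 1, "attr": "gold", "valid_from": date(2023, 1, 1),
        "valid_to": None, "current": True}]
delta = [{"id": 1, "attr": "platinum"}, {"id": 2, "attr": "silver"}]
dim = scd2_merge(dim, delta, date(2024, 1, 1))
```

In a real pipeline the same logic is typically expressed as a `MERGE INTO` statement or a Spark join rather than row-by-row Python.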