
Job Description
We are looking for data engineers who have the right attitude, aptitude, skills, empathy, compassion, and a hunger for learning, and who want to build products in the data analytics space. The ideal candidate has a passion for shipping high-quality data products, an interest in the data products space, and curiosity about the bigger picture of building a company, product development, and its people.
Share your resume at [Confidential Information]
*Roles & Responsibilities*
Design, develop, and manage robust ETL pipelines using Apache Spark (Scala); see the illustrative sketch after this list
Demonstrate a strong understanding of Spark internals, performance optimization techniques, and governance tools
Build scalable, fault-tolerant, and high-performance data pipelines for Enterprise Data Warehouses, Data Lakes, and Data Mesh architectures
Collaborate with cross-functional teams to design and deliver effective data solutions
Implement orchestration workflows using AWS Step Functions
Leverage AWS Glue and Crawlers for automated data cataloging
Monitor, troubleshoot, and optimize pipeline performance and data quality
Maintain high coding standards and produce clear, comprehensive documentation
Actively contribute to High-Level Design (HLD) and Low-Level Design (LLD) discussions
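
For context, here is a minimal sketch of the kind of Spark (Scala) batch ETL job described in the responsibilities above: read raw data, apply a basic transformation, and write partitioned output. The S3 paths, column names, and object name are hypothetical placeholders, not part of any actual codebase for this role.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Illustrative sketch only: paths, columns, and names are hypothetical placeholders.
object SampleEtlJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sample-etl-job")
      .getOrCreate()

    // Extract: read raw JSON events from a hypothetical landing zone
    val raw = spark.read.json("s3://example-bucket/landing/events/")

    // Transform: drop rows without an id and derive a date partition column
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Load: write partitioned Parquet to a hypothetical curated zone
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/")

    spark.stop()
  }
}
```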
*Technical Skills Required*
At least 5 years of hands-on experience in Big Data / Data Engineering
Strong expertise in building scalable, reliable ETL pipelines
At least 4 years of hands-on experience with Python, Apache Spark, and Kafka
Strong command of AWS services, including EMR, Redshift, Step Functions, AWS Glue, and Glue Crawlers
Solid hands-on experience with SQL and NoSQL databases
Strong understanding of Data Warehousing, Data Modeling, and ETL concepts
Familiarity with HLD and LLD design principles
Excellent written and verbal communication skills
*Preferences*
Minimum 5 years of relevant experience in the same field
Immediate joiners or candidates with a notice period of at most one month are preferred
The salary offered is up to 25 LPA and negotiable depending on experience
Job ID: 136209839