Key Responsibilities:
- Lead the design and implementation of big data solutions using Spark, PySpark, Hive, and Scala.
- Collaborate with cross-functional teams to develop scalable data architectures and pipelines.
- Optimize data processing workflows to ensure high performance and reliability.
- Mentor and guide junior engineers, fostering a culture of continuous learning and improvement.
- Ensure data security and compliance with industry standards.

Minimum Qualifications:
- Bachelor's degree in Technology (B.Tech) or a related field.
- Proven experience with big data technologies such as Spark, PySpark, Hive, and Scala.
- Strong analytical and problem-solving skills.

Preferred Qualifications:
- Experience with Kafka for real-time data processing.
- Demonstrated leadership in managing technology projects.
- Ability to work effectively in a collaborative team environment.
- Track record of delivering high-quality data solutions.

Good-to-have skills: Flask, Pandas, data analysis, Spring Boot, microservices
- Knowledge of more than one technology
- Grounding in architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of Project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies
- Knowledge of quality processes
- Basic knowledge of the business domain, sufficient to understand business requirements
- Analytical abilities, strong technical skills, and good communication skills
- Good understanding of the technology and domain
- Sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
- Awareness of latest technologies and trends
- Excellent problem-solving, analytical, and debugging skills