Project Role: Custom Software Engineer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Apache Spark
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary: Seeking a forward-thinking professional with an AI-first mindset to design, develop, and deploy enterprise-grade solutions using Generative and Agentic AI frameworks that drive innovation, efficiency, and business transformation. As a Data Engineer, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure the data architecture supports the overall application functionality and performance, while also addressing any challenges that arise during the development process.
Roles & Responsibilities:
- Lead AI-driven solution design and delivery by applying GenAI and Agentic AI to address complex business challenges, automate processes, and integrate intelligent insights into enterprise workflows for measurable impact.
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to ensure effective performance.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Evaluate and recommend tools and technologies that can improve the data architecture.
Professional & Technical Skills:
- Strong grasp of Generative and Agentic AI, prompt engineering, and AI evaluation frameworks, with the ability to align AI capabilities with business objectives while ensuring scalability, responsible use, and tangible value realization.
- Must have skills: Proficiency in PySpark.
- Good to have skills: Experience with AWS architecture.
- Strong understanding of data modeling techniques and best practices.
- Experience with data integration tools and methodologies.
- Familiarity with cloud-based data storage solutions.
- Ability to design scalable and efficient data pipelines.
Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Pune office.
- A 15-year full-time education is required.