About the Organization
Impetus Technologies is a digital engineering company focused on delivering expert services and products to help enterprises achieve their transformation goals. We solve the analytics, AI, and cloud puzzle, enabling businesses to drive unmatched innovation and growth.
Founded in 1991, we are a cloud and data engineering leader providing solutions to Fortune 100 enterprises. Headquartered in Los Gatos, California, we have development centers in Noida, Indore, Gurugram, Bengaluru, Pune, and Hyderabad, with over 3,000 global team members. We also have offices in Canada and Australia and collaborate with a number of established companies, including American Express, Bank of America, Capital One, Toyota, United Airlines, and Verizon.
Locations: Indore / Bangalore / Pune / Hyderabad / Noida / Gurgaon
Job Description
We are looking for an experienced AWS Module Lead Software Engineer to drive the design, development, and delivery of AWS-based modules for enterprise-scale applications. This role requires strong expertise in AWS services, hands-on development skills, and the ability to lead and mentor module-level engineering teams.
Key Responsibilities
- Lead end-to-end development of AWS-based modules for large-scale applications.
- Design, build, maintain, and optimize data pipelines using Python, Spark, and SQL.
- Work extensively with AWS cloud services for data ingestion, storage, and processing, including S3, Glue, Lambda, EMR, RDS, Airflow, and Step Functions.
- Perform data validation, transformation, and integration across multiple data sources.
- Collaborate with cross-functional stakeholders to gather requirements and deliver scalable data engineering solutions.
- Support and enhance Databricks notebooks, jobs, and cluster configurations (nice to have).
- Assist with basic AI/ML workflows, including feature engineering and model deployment support.
- Write clean, efficient, reusable, and well-documented code following best practices.
- Troubleshoot and resolve data pipeline issues to ensure performance, reliability, and scalability.
- Prepare technical documentation and actively participate in code reviews.
- Apply BFSI domain knowledge to develop context-aware and business-relevant data solutions (good to have).
Required Skills & Experience
- 6–8 years of hands-on experience in data engineering or a related technical role.
- Strong proficiency in Python for data processing and automation.
- Working knowledge of Apache Spark (PySpark preferred).
- Solid understanding of SQL, query optimization, and basic data modeling.
- Experience with AWS services such as S3, Glue, Lambda, EMR, RDS, EC2, IAM, Airflow, and Step Functions.
- Strong understanding of observability concepts for monitoring distributed systems.
- Hands-on experience with AWS CloudWatch and AWS CloudTrail.
- Exposure to Databricks for notebooks, pipelines, and job orchestration (good to have).
- Basic understanding of AI/ML concepts and workflows.
Basic AI/LLM Knowledge
- Understanding of LLMs and their underlying architecture.
- Knowledge of how LLMs are trained or derived.
- Familiarity with RAG (Retrieval-Augmented Generation) concepts.
- Understanding of structured vs. unstructured data handling.
Good to Have
- Experience with LLM fine-tuning techniques such as LoRA or SFT.
- Exposure to Agentic AI concepts.
- Hands-on experience building or supporting RAG pipelines.
- Familiarity with Amazon Bedrock.
- Knowledge of the BFSI domain.
- Experience with CI/CD pipelines and tools like GitLab CI, GitHub Actions, AWS CodePipeline, or Jenkins.
- Proficiency with Git and collaborative development workflows.
- Strong analytical, problem-solving, communication, and teamwork skills.
For a quick response, interested candidates can share their resume, along with details such as notice period, current CTC, and expected CTC, at [Confidential Information]