We are looking for an experienced ETL Developer with strong hands-on expertise in Informatica and modern data platforms including Snowflake, along with solid knowledge of SQL and the Hadoop ecosystem (Hadoop, Hive).
About the Role
The role involves designing, developing, and optimizing scalable ETL pipelines, ensuring data quality, performance, and reliable delivery for enterprise data and analytics needs.
Responsibilities
- Design, develop, and maintain end-to-end ETL workflows using Informatica (PowerCenter / IICS as applicable).
- Build and enhance data pipelines integrating Snowflake for data warehousing and analytics use cases.
- Develop and optimize complex SQL queries for data extraction, transformation, reconciliation, and reporting.
- Work with Hadoop and Hive for large-scale data processing, transformations, and performance tuning.
- Implement data validation, reconciliation checks, and monitoring to ensure strong data quality and pipeline reliability.
- Troubleshoot ETL failures, analyze root causes, and implement permanent fixes to improve stability and throughput.
- Collaborate with architects, analysts, and downstream consumers to translate business requirements into robust technical solutions.
- Maintain clear technical documentation for mappings, jobs, workflows, data lineage, and operational runbooks.
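As an illustration of the incremental-load and reconciliation duties listed above, here is a minimal sketch of a watermark-based incremental load using SQLite. The table names (`src_orders`, `tgt_orders`) and the `updated_at` watermark column are hypothetical examples, not part of this posting; production pipelines in Informatica or Snowflake would express the same idea with their own tooling.

```python
import sqlite3

# Set up a throwaway source and target; the target already holds the
# first load (rows up to 2024-01-01).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL, updated_at TEXT);
    INSERT INTO src_orders VALUES
        (1, 10.0, '2024-01-01'),
        (2, 20.0, '2024-01-02'),
        (3, 30.0, '2024-01-03');
    INSERT INTO tgt_orders VALUES (1, 10.0, '2024-01-01');
""")

# Incremental load: read the high watermark from the target, then pull
# only source rows that are newer than it.
watermark = cur.execute("SELECT MAX(updated_at) FROM tgt_orders").fetchone()[0]
cur.execute(
    "INSERT INTO tgt_orders SELECT * FROM src_orders WHERE updated_at > ?",
    (watermark,),
)
conn.commit()

# Reconciliation check: after the load, source and target row counts
# should agree.
src_count = cur.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
print(src_count == tgt_count)  # True
```

Real pipelines would also handle late-arriving updates (merge/upsert rather than insert-only) and persist the watermark separately, but the watermark-plus-reconciliation shape is the core of the pattern.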
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in Informatica / Snowflake / Data Engineering are a plus, not mandatory.
Required Skills
- 7–10 years of overall experience in ETL/Data Engineering.
- Strong hands-on ETL development experience using Informatica.
- Solid working experience with Snowflake (loading patterns, performance concepts, and data warehouse fundamentals).
- Advanced SQL skills (joins, window functions, performance tuning, query optimization).
- Hands-on experience in Hadoop ecosystem with strong exposure to Hive.
- Strong understanding of ETL concepts: incremental loads, CDC patterns, error handling, scheduling, and dependency management.
- Strong problem-solving skills with the ability to work in fast-paced delivery environments.
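To illustrate the window-function skill called out above, here is a small self-contained example run through SQLite (available as a window-function engine in Python's standard library since SQLite 3.25). The `sales` table and its columns are hypothetical, chosen only to show the `RANK() OVER (PARTITION BY ...)` pattern.

```python
import sqlite3

# Build a tiny sales table to rank against.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('East', 'A', 100), ('East', 'B', 250),
        ('West', 'C', 300), ('West', 'D', 150);
""")

# Rank reps within each region by amount, a typical window-function use
# in reporting and reconciliation queries.
rows = cur.execute("""
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
for row in rows:
    print(row)
```

The same query shape carries over directly to Snowflake and Hive, both of which support standard SQL window functions.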
Preferred Skills
- Experience with performance tuning across ETL pipelines and Snowflake workloads (query optimization and load optimization).
- Exposure to data governance practices, metadata management, and documentation standards.
- Familiarity with Agile delivery and collaborating with cross-functional teams.