Role Summary:
We are looking for an experienced Ab Initio Developer to design, build, and optimize ETL workflows that support large-scale data processing. This role involves leveraging the full Ab Initio suite to implement high-performance data integration solutions, ensuring data accuracy, integrity, and accessibility across enterprise systems.
Key Responsibilities:
- Design, develop, and maintain ETL processes using Ab Initio tools for data extraction, transformation, and loading.
- Utilize Ab Initio GDE, Co>Operating System, Conduct>It, and EME to build scalable and efficient data workflows.
- Apply best practices for data warehousing and ETL architecture in handling large data sets.
- Perform unit testing, debugging, and troubleshooting of Ab Initio graphs to ensure reliability and data integrity.
- Collaborate with data architects, analysts, and business stakeholders to understand data needs and deliver solutions.
- Optimize ETL performance and support production deployments and issue resolution.
- Write and tune SQL queries and use scripting languages (e.g., Unix shell) for data manipulation and analysis.
- Document workflows, processes, and technical designs for knowledge sharing and support.
- Participate in code reviews and quality assurance activities to maintain high coding standards.
Required Skills & Experience:
- 4+ years of hands-on experience in Ab Initio ETL development.
- Proficiency with core Ab Initio suite components:
  - GDE (Graphical Development Environment)
  - Co>Operating System
  - Conduct>It
  - EME (Enterprise Meta>Environment)
- Strong knowledge of ETL concepts, data warehousing principles, and relational databases.
- Solid SQL skills and experience with scripting languages such as Unix shell; Python is a plus.
- Ability to design and implement efficient and scalable ETL solutions for large-scale data environments.
- Strong analytical and problem-solving capabilities.
- Excellent verbal and written communication skills; ability to engage effectively with technical and non-technical stakeholders.
- Experience in unit testing and quality assurance for data pipelines.
Nice to Have:
- Familiarity with data lake or cloud-based ETL architectures (e.g., AWS, Azure).
- Exposure to job schedulers (e.g., TWS, AutoSys).
- Understanding of metadata-driven frameworks and data lineage tracking.
- Prior experience working in Agile or Scrum-based development teams.