Required Skills & Experience
- 4+ years of total experience, with 3+ years in data engineering and business intelligence.
- 4+ years of hands-on experience in Python and other open-source data engineering tools.
- Strong expertise in data engineering, data handling, and statistical programming using modern data tooling.
- Understanding of Data Warehousing, Business Intelligence, and ETL processes.
- Proficiency with Apache Spark and Big Data technologies (e.g., Hadoop, Hive).
- Proven experience in Agile methodologies, including iterative development and sprint planning.
- Experience working in highly regulated environments, ensuring data governance and compliance.
- Familiarity with release governance processes and incident management tools (e.g., ServiceNow, JIRA).
- Exposure to cloud platforms (AWS, Azure, GCP).

Key Responsibilities
- Lead end-to-end data engineering and reporting engagements, from requirement gathering to solution development.
- Design and implement scalable data pipelines, data models, and process frameworks.
- Collaborate with stakeholders, providing domain and technical thought leadership.
- Perform Fit/Gap analysis and translate business needs into actionable data solutions.
- Develop interactive dashboards and reports using visualization tools or Python-based libraries.
- Ensure data quality, governance, and compliance across reporting solutions.
- Conduct performance tuning and optimization of data workflows and visualizations.
- Ensure compliance with release governance and manage incidents using industry-standard tools.
- Contribute to Unit/Organizational initiatives, including Centers of Excellence (COEs) and innovation programs.