
Mizuho

Assistant to Vice President - Data Engineer

11-13 Years

Job Description

We are looking for a skilled Senior Data Engineer with expertise in financial data mapping across domains such as Trade Finance, Lending, Financial Markets, and CASA. The candidate will play a key role in understanding core banking systems, implementing data models, and creating mappings in a Databricks environment using PySpark and other tools. This role is part of a Core Banking Transformation initiative involving upstream systems for trade, lending, payments, CASA, and markets. The ideal candidate will act as the key point of contact for implementation of the data model, mappings, and analytics, working with SI partner teams to onboard and map new systems while ensuring adherence to technology processes and maintaining proper documentation of knowledge and workflows.

Data Model & Mapping Implementation:

Work with upstream and downstream system stakeholders and Data Analysts to understand the data requirements and model the data for the domain.

Prepare detailed Source-to-Target Mapping (STM) specifications.
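A Source-to-Target Mapping can be captured as a declarative spec that pairs each target column with its source column and transform. A minimal Spark-free Python sketch (the field names and the `STM`/`apply_stm` helpers are hypothetical, not from any actual Mizuho system):

```python
# Hypothetical STM spec for a lending feed: target column -> (source column, transform).
STM = {
    "facility_id": ("FAC_NO",      str.strip),
    "currency":    ("CCY",         str.upper),
    "outstanding": ("OUTSTANDING", float),
}

def apply_stm(source_row: dict) -> dict:
    """Apply the mapping spec to one source record."""
    return {tgt: fn(source_row[src]) for tgt, (src, fn) in STM.items()}

row = {"FAC_NO": " L123 ", "CCY": "usd", "OUTSTANDING": "1500000.00"}
print(apply_stm(row))
# {'facility_id': 'L123', 'currency': 'USD', 'outstanding': 1500000.0}
```

In a Databricks pipeline the same spec would typically drive PySpark `select`/`withColumn` expressions, so the mapping document and the code stay in sync.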

Ingest data from diverse and complex core banking sources, including Trade 360 (Trade Finance), ACBS (Lending), Murex (Financial Markets), and Flexcube (CASA).

Design, develop, and maintain robust, scalable, and high-performance data pipelines within the Azure Databricks environment using PySpark and SQL.

Optimize and tune Databricks jobs for performance and cost-efficiency when handling large volumes of financial data.

Overall Experience:

Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

11+ years of hands-on experience as a Data Engineer, ETL Developer, or Software Engineer with a focus on data.

Strong, demonstrable proficiency in PySpark and advanced SQL is mandatory.

Databricks experience is highly preferred (or strong experience with another Spark-based platform such as Hadoop/EMR).

Proven experience working with complex financial data from banking systems. Candidates must be able to demonstrate an understanding of data concepts related to at least one of the core domains.

Experience with data pipeline orchestration tools (e.g., Azure Data Factory, Airflow).

Desired/Preferred:

Direct experience working with data from one or more of our target systems: Trade 360, ACBS, Murex, or Flexcube.

Hands-on experience with the Microsoft Azure data stack and with Databricks, Delta Lake, ETL tools, and data visualization tools.

Knowledge of CI/CD practices for data engineering (e.g., automated testing, deployment).

Familiarity with the Medallion data architecture pattern.

Relevant certifications in data engineering or financial systems (e.g., AWS Certified Data Analytics, Databricks Certified Data Engineer Associate).

Experience in a code review or technical mentorship capacity.
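The Medallion pattern mentioned above layers data as bronze (raw as ingested), silver (cleansed and conformed), and gold (aggregated for consumption). A minimal Spark-free Python sketch of the flow; in Databricks each layer would be a Delta table, and the record fields here are hypothetical:

```python
# Bronze: trade records exactly as received from the upstream system.
raw_trades = [
    {"trade_id": "T1", "ccy": "usd", "notional": "100.0"},
    {"trade_id": "T2", "ccy": "USD", "notional": "250.0"},
    {"trade_id": "T2", "ccy": "USD", "notional": "250.0"},  # duplicate feed record
]

def to_silver(rows):
    """Cleanse: normalise currency codes, cast types, deduplicate on trade_id."""
    seen, out = set(), []
    for r in rows:
        if r["trade_id"] in seen:
            continue
        seen.add(r["trade_id"])
        out.append({"trade_id": r["trade_id"],
                    "ccy": r["ccy"].upper(),
                    "notional": float(r["notional"])})
    return out

def to_gold(rows):
    """Aggregate: total notional per currency."""
    totals = {}
    for r in rows:
        totals[r["ccy"]] = totals.get(r["ccy"], 0.0) + r["notional"]
    return totals

silver = to_silver(raw_trades)
print(to_gold(silver))
# {'USD': 350.0}
```

The point of the layering is that raw data is never mutated: silver and gold can always be rebuilt from bronze if a cleansing rule or aggregation changes.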


Job ID: 147219729