You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within Consumer & Community Banking (CCB), you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable manner. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities:
- Design, develop, and maintain scalable data pipelines using PySpark, AWS, and Snowflake to support platform modernization and data delivery goals.
- Contribute to the exit of legacy platforms (HWX and Ab Initio) by migrating data products and processes to modern cloud-based solutions.
- Collaborate with cross-functional teams to deliver product and D&A (Data & Analytics) deliverables, ensuring alignment with business objectives.
- Optimize ETL processes and drive operational excellence by implementing best practices in data engineering and automation.
- Enhance productivity for BOW and RTB initiatives through innovative data solutions and teamwork.
- Ensure robust risk & controls in all data delivery activities, maintaining compliance and data integrity.
- Demonstrate leadership and teamwork in driving data delivery projects, fostering a culture of collaboration and continuous improvement.
- Produce architecture and design artifacts for data applications, ensuring design constraints are met and solutions are scalable.
- Analyze, synthesize, and visualize data from diverse sources to support decision-making and continuous improvement.
- Identify hidden problems and patterns in data to drive improvements in coding hygiene, system architecture, and operational processes.
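The pipeline work described above follows the familiar extract-transform-load pattern. As a minimal, self-contained sketch (standard-library Python only; the hypothetical `account_id`/`balance` fields and the in-memory sink are illustrative stand-ins for the PySpark and Snowflake components named in the role):

```python
import json

def extract(raw_lines):
    """Parse newline-delimited JSON records (the 'extract' step)."""
    return [json.loads(line) for line in raw_lines]

def transform(records):
    """Keep active accounts and normalize balances to integer cents."""
    return [
        {"account_id": r["account_id"], "balance_cents": round(r["balance"] * 100)}
        for r in records
        if r.get("status") == "active"
    ]

def load(records, sink):
    """Append transformed records to a sink (stand-in for a warehouse table)."""
    sink.extend(records)
    return len(records)

# Example run with two hypothetical source records.
raw = [
    '{"account_id": "A1", "balance": 10.5, "status": "active"}',
    '{"account_id": "A2", "balance": 3.0, "status": "closed"}',
]
table = []
loaded = load(transform(extract(raw)), table)
```

In a production setting each step would be a distributed operation (e.g., a PySpark DataFrame transformation writing to Snowflake), but the staging of the work is the same.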
Required qualifications, capabilities, and skills:
- Experience in data engineering or a closely related field.
- Proven experience designing and building scalable data pipelines (including legacy platforms such as HWX and Ab Initio) for batch and streaming use cases using modern technologies (e.g., Python, SQL, Spark, Flink, Airflow).
- Hands-on experience building and operating workloads on Databricks.
- Hands-on experience creating data visualizations and dashboards in Tableau.
- Familiarity with cloud data platforms (e.g., AWS, Azure, GCP) and big data technologies.
- Strong analytical skills with the ability to interpret complex data, build performant datasets, and deliver business value.
- Experience integrating data from multiple sources and systems (APIs, databases, files, streaming), including data modeling and orchestration best practices.
- Ability to work independently and collaboratively in a fast-paced, agile environment, partnering with data scientists, analysts, software engineers, and business stakeholders.
- Excellent communication and documentation skills, including clear articulation of data designs, lineage, and assumptions for technical and non-technical audiences.
Preferred qualifications, capabilities, and skills:
- Awareness of or experience with financial concepts, especially in banking, treasury, and cash management.
- Experience supporting predictive analytics or machine learning workflows.
- Knowledge of data governance, security, and compliance in financial services.
- Familiarity with CI/CD concepts, Git, and the end-to-end software delivery lifecycle.
- Familiarity with OOP languages such as Java.
- Exposure to front-end technologies such as JavaScript and React.