
JPMorgan Chase & Co.

Software Engineer III - Python, PySpark, Databricks, AWS

  • Posted 5 hours ago

Job Description

We have an exciting opportunity for you to advance your data engineering career and make a meaningful impact at JPMorgan Chase.

As a Software Engineer III at JPMorgan Chase within Corporate Technology, you design and deliver high-performance data solutions that power the firm's technology products.

Job Responsibilities

  • Architect, develop, and maintain high-performance ETL pipelines and data workflows using Python, PySpark, and Databricks
  • Design and implement scalable, fault-tolerant data solutions on AWS, leveraging services such as S3 and Lambda
  • Write secure, optimized code in Python and PySpark with a focus on performance and reliability
  • Develop and optimize SQL-based data models, queries, and transformations to support analytical and operational needs
  • Own and operate production data pipelines end-to-end, including monitoring, alerting, and performance optimization
  • Apply knowledge of the Software Development Life Cycle toolchain, including Git and CI/CD, to maximize automation and delivery velocity
  • Gather, analyze, and synthesize large, diverse data sets to drive data-driven decision-making
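The responsibilities above center on extract-transform-load (ETL) pipelines. As a minimal illustrative sketch of that pattern — plain Python standing in for the PySpark/Databricks stack, with a dict standing in for an S3 or Delta Lake sink, and all names here hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    qty: int
    price: float

def extract(raw_rows):
    """Parse raw CSV-style rows into typed records (the 'E' step)."""
    return [Trade(sym, int(q), float(p))
            for sym, q, p in (row.split(",") for row in raw_rows)]

def transform(trades):
    """Aggregate notional value per symbol (the 'T' step)."""
    totals = {}
    for t in trades:
        totals[t.symbol] = totals.get(t.symbol, 0.0) + t.qty * t.price
    return totals

def load(totals, sink):
    """Write results to a destination (the 'L' step); a dict stands in
    for a real sink such as an S3 bucket or a Delta table."""
    sink.update(totals)

raw = ["AAPL,10,190.0", "MSFT,5,400.0", "AAPL,2,191.0"]
sink = {}
load(transform(extract(raw)), sink)
print(sink)  # → {'AAPL': 2282.0, 'MSFT': 2000.0}
```

In a production pipeline of the kind described, each step would typically be a distributed PySpark transformation and the load step a write to object storage, with monitoring and alerting wrapped around the whole flow.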

Required Qualifications, Capabilities, And Skills

  • Formal training or certification in software engineering, data engineering, or a related technical discipline, and 3 years of applied experience
  • Seven years of hands-on experience developing production-grade applications and data solutions in Python
  • Three years of experience building and optimizing large-scale data pipelines using PySpark
  • Proven experience designing, deploying, and managing data engineering workflows on Databricks, including Delta Lake and Unity Catalog
  • Strong hands-on experience with AWS cloud services, including S3 and Lambda
  • Proficiency in SQL for complex data querying, transformation, and performance tuning
  • Experience across the Software Development Life Cycle, with exposure to agile methodologies and practices such as CI/CD, application resiliency, and security

Preferred Qualifications, Capabilities, And Skills

  • Experience with infrastructure-as-code tools such as Terraform
  • Familiarity with data governance, data quality frameworks, and data cataloging practices
  • Exposure to real-time streaming technologies such as Kafka or Kinesis
  • Experience mentoring junior engineers and contributing to engineering best practices

Job ID: 147271857
