
Lead Software Engineer


Job Description

About This Role

Wells Fargo is seeking a Lead Software Engineer - Data Engineering to join the CALM (Corporate Asset and Liability Management) Data Engineering team within the Enterprise Functions Technology (EFT) organization. In this role, you will be responsible for designing, developing, optimizing, and maintaining metadata-driven, scalable, high-performance data engineering frameworks that power critical financial risk processes across Corporate Treasury.

You will work independently to build resilient data pipelines, APIs, wrappers, and supporting components to enable reliable data ingestion, transformation, validation, and delivery across cloud and on-premises ecosystems. This position plays a key role in Data Center exit migrations, DPC onboarding, and enterprise-wide modernization initiatives.

The role requires deep technical expertise, hands-on problem-solving, and technical leadership in distributed data engineering, cloud platforms, data quality, and performance engineering.

Key Responsibilities

  • Architect, develop, and optimize both batch and streaming data pipelines using Python, SQL, and Apache Spark. These pipelines run across both cloud and on-premises environments to ensure flexibility and scalability (a minimal illustrative sketch follows this list).
  • Lead lakehouse engineering efforts by utilizing open table formats such as Iceberg, Delta, or Hudi. This includes implementing Medallion architectures to govern data ingestion, transformation, and consumption processes.
  • Establish comprehensive frameworks for data quality, observability, lineage, and service-level agreements (SLA), guaranteeing reliability and auditability of data at scale.
  • Design secure REST and metadata APIs to facilitate governed data access and seamless integration with downstream applications.
  • Collaborate closely with cross-functional product, architecture, and business teams, providing technical leadership through design reviews, mentoring, and contributions to the platform roadmap.
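
For illustration only, the following is a minimal sketch of what a metadata-driven bronze-to-silver batch step of this kind could look like in PySpark, writing to an Iceberg table. The configuration dict, paths, catalog, and table names are hypothetical examples, not systems named in this posting.

    # Minimal illustrative sketch: metadata-driven bronze -> silver batch step.
    # All names (paths, the "lake" Iceberg catalog, table names) are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("calm-demo-pipeline").getOrCreate()

    # Behavior is driven by metadata/config rather than hard-coded logic.
    pipeline_config = {
        "source_path": "s3://example-bucket/raw/positions/",   # hypothetical path
        "target_table": "lake.silver.positions",               # hypothetical Iceberg table
        "required_columns": ["account_id", "as_of_date", "balance"],
        "partition_column": "as_of_date",
    }

    # Bronze: ingest raw files as-is, tagging load metadata.
    bronze_df = (
        spark.read.parquet(pipeline_config["source_path"])
        .withColumn("_ingested_at", F.current_timestamp())
    )

    # Silver: basic validation driven by the config (null checks on required columns).
    validated_df = bronze_df.dropna(subset=pipeline_config["required_columns"])

    # Write to Iceberg; assumes an Iceberg catalog named "lake" is configured on the cluster.
    (
        validated_df.writeTo(pipeline_config["target_table"])
        .partitionedBy(F.col(pipeline_config["partition_column"]))
        .createOrReplace()
    )

In a real Medallion layout, the same config-driven pattern would repeat per layer, with quality checks and lineage capture added between the bronze, silver, and gold tables.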

Required Qualifications

  • 8+ years of hands-on experience with large-scale data engineering using Python, SQL, and Apache Spark.
  • Expertise in designing and implementing metadata-driven ETL/ELT pipelines and robust data ingestion and validation frameworks.
  • Skilled in optimizing distributed Spark workloads, storage layouts, and schema evolution within lakehouse environments.
  • Proficient with enterprise orchestration tools such as Autosys or Airflow; able to operate production-grade data pipelines meeting SLAs with effective alerting (see the orchestration sketch after this list).
  • Strong knowledge of data modeling, API development, and CI/CD or infrastructure automation processes.
  • Familiarity with major cloud platforms, data governance tools, and secure engineering best practices.
  • Preferred: Experience with financial data, risk or treasury systems, and large-scale migration programs.
  • Preferred: Experience applying GenAI for metadata extraction, data anomaly detection, automated documentation, or pipeline optimization.
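
As a similarly hedged illustration of the SLA-aware orchestration mentioned above, the sketch below shows an Airflow DAG with a task-level SLA and an SLA-miss callback. The DAG id, schedule, and callables are hypothetical; an Autosys setup would express the same idea through job definitions and alarms instead.

    # Minimal illustrative sketch: an Airflow DAG with a task SLA and alert callback.
    # DAG id, schedule, and the validation callable are hypothetical.
    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def notify_on_sla_miss(dag, task_list, blocking_task_list, slas, blocking_tis):
        # In production this would page an on-call channel; here it only logs.
        print(f"SLA missed for tasks: {task_list}")

    def run_validation():
        # Placeholder for a data-quality / validation step.
        print("running validation checks")

    with DAG(
        dag_id="calm_demo_daily_load",        # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="0 6 * * *",                 # Airflow 2.4+ style cron schedule
        catchup=False,
        sla_miss_callback=notify_on_sla_miss,
    ) as dag:
        validate = PythonOperator(
            task_id="validate_inputs",
            python_callable=run_validation,
            sla=timedelta(hours=1),           # alert if the task runs past its SLA
        )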

Job Expectations

  • Deliver high-quality engineering outcomes during Data Center exit migrations and DPC onboarding, ensuring validations, automation, and production readiness.
  • Collaborate with cross-functional teams to build scalable, high-performance data solutions using Python, SQL, Spark, Iceberg, Dremio, and Autosys.

Reference Number

R-524270

Job ID: 144236653