
Saaf Finance

Senior Data Engineer

3-5 Years
  • Posted 2 hours ago

Job Description

Saaf Finance is building the data backbone for modern mortgage operations. As a Data Engineer, you will design and operate the pipelines, warehouses, and data models that power our products, analytics, and automation. We are an AI-native engineering team: AI-assisted development tools are a regular part of how we build, test, and ship data infrastructure. We expect engineers to use these tools thoughtfully and effectively as part of their daily workflow, from writing transformation logic to building agentic data workflows.

Key Responsibilities

Data Pipeline Development

  • Design, implement, and maintain ETL/ELT pipelines for structured and unstructured datasets from internal and external sources.
  • Leverage AI-assisted development tools to accelerate pipeline authoring, generate transformation logic, and automate boilerplate code.

Data Warehousing & Modeling

  • Build and optimize data warehouses and marts (Snowflake, BigQuery, or similar) for analytics, reporting, and product use cases.
  • Design, implement, and maintain conceptual, logical, and physical data models to ensure scalable, consistent, and high-quality datasets for downstream analytics and applications.

Integration & Ingestion

  • Ingest data from APIs, SaaS platforms (CRM, financial data APIs), and internal systems into the core data platform.
  • Build and maintain reliable connectors and ingestion frameworks that handle schema evolution, rate limits, and error recovery.

Data Quality & Governance

  • Implement validation, schema management, and robust documentation to ensure data accuracy and compliance.
  • Use AI tools to support data profiling, anomaly detection, and automated documentation of data lineage and transformations.

AI-Integrated Data Engineering

  • Use AI-assisted tools (code generation, intelligent autocomplete, automated testing) as a regular part of your data engineering workflow.
  • Evaluate and integrate emerging AI tools and practices into the team's data development process.
  • Build and support agentic workflows and multi-step automated processes that act on data in real time, including AI-powered data validation and enrichment.
  • Apply AI-assisted analysis to debug pipeline failures, optimize query performance, and identify data quality issues.

Performance & Reliability

  • Monitor and fine-tune pipeline and warehouse performance for scalability and cost efficiency.
  • Set up logging, monitoring, and alerting for data jobs to ensure reliability and fast incident response.

Security & Compliance

  • Apply data security and privacy controls aligned with financial regulatory requirements, ensuring full traceability of every transformation.
  • Foster a security-first mindset across all data operations.

Analytics Enablement

  • Provide clean, consistent datasets for analysts, product managers, and operational teams to support fast, data-driven decisions.
  • Collaborate closely with product managers, data scientists, and full stack engineers to align data models with business needs.

Qualifications

Required

  • 3+ years in a data engineering or similar backend data-focused role.
  • Strong SQL and Python development skills for data transformation and automation.
  • Experience with modern ETL/ELT frameworks such as dbt.
  • Proficiency with cloud platforms (AWS preferred) and serverless data services.
  • Strong experience with data warehouse technologies (Snowflake preferred).
  • Skilled in API integrations and ingestion from third-party systems.
  • Proficient in data modeling (Kimball/Star schema, Data Vault).
  • Demonstrated, regular use of AI-powered development tools (e.g., Cursor, GitHub Copilot, Claude Code, or similar) to accelerate data pipeline development, debugging, or documentation.
  • Proven track record of delivering production-grade data pipelines at scale.
  • Experience implementing CI/CD practices for data workflows.
  • Experience collaborating closely with product managers, data scientists, and full stack engineers.
  • Startup mindset: hands-on, resourceful, and comfortable operating in a fast-paced environment.

Preferred

  • Experience building agentic workflows and orchestrating multi-step automated processes that act on data in real time.
  • Familiarity with data engineering patterns and infrastructure required for AI-powered tools and automation platforms.
  • Experience working with financial datasets and APIs in a high-compliance environment.
  • Understanding of data privacy regulations such as GDPR and CCPA.
  • Experience with prompt engineering for code generation, data transformation logic, or building AI-powered data workflows.

Benefits

  • Competitive salary
  • High ownership from day one: your work will directly shape core systems and products
  • Fast-paced environment with quick decision cycles and minimal bureaucracy
  • Remote-first team with flexibility on work hours and location
  • Direct access to founders and cross-functional teams: no layers, no silos
  • Clear expectations, regular feedback, and support for professional growth
  • Work on real problems in a complex, high-impact industry

Job ID: 145112669
