K2 Partnering Solutions

Senior Data Engineer

Job Description

Title: Senior Data Engineer – Databricks (Data Platform)

Duration: 6-month contract, with the possibility of extension

Work Location: PAN India

Work Timings: 12:00 PM–9:00 PM IST (first 3 months, working from office), then 2:00 PM–11:00 PM IST (partial overlap with the US team)

Experience Level:

  • 8–12+ years overall
  • 4–6+ years in Databricks or modern data platforms

Role Overview:

We are looking for a hands-on Senior Data Engineer to support enhancements and localization of a modern data platform built on Databricks for a global deployment program. The role focuses on integrating the data platform with multiple upstream and downstream enterprise systems, including ERP, revenue/billing platforms, and operational systems. The engineer will work closely with functional, integration, and finance teams to ensure accurate, scalable, and compliant data pipelines that support financial and operational processes.

Key Responsibilities:

  • Design, build, and enhance data pipelines in Databricks using PySpark and SQL for region-specific requirements
  • Modify and extend existing global template pipelines to support localization, including tax, depreciation, and reporting needs
  • Implement end-to-end data transformations supporting asset lifecycle processes from procurement to capitalization, depreciation, and disposal
  • Integrate the data platform with ERP, billing/revenue, and operational systems using APIs, JSON, message queues, and batch interfaces
  • Develop and manage event-driven and batch data ingestion pipelines for near real-time and scheduled processing
  • Design and maintain lakehouse data models using bronze, silver, and gold layers, aligned with business reporting needs (a sketch of this pattern follows this list)
  • Build curated datasets for financial accounting, depreciation, reserves, and reporting
  • Implement data quality checks, validation rules, and reconciliation processes across multiple enterprise systems
  • Optimize Spark jobs, workflows, and cluster usage for performance and cost efficiency
  • Collaborate with functional, integration, and testing teams to support design sprints, SIT, UAT, and go-live readiness
  • Troubleshoot data issues, production defects, and integration failures during deployment and hypercare
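
As a purely illustrative aid for the lakehouse-modeling responsibility above, here is a minimal PySpark sketch of the bronze/silver/gold (medallion) pattern on Delta Lake. It assumes a Databricks-style Spark session with Delta Lake configured, and every table name, path, and column (asset_events, asset_id, cost, event_ts) is a hypothetical placeholder rather than a real schema.

```python
# A minimal medallion-pipeline sketch, assuming a Spark session with Delta
# Lake available (as on Databricks). All names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw asset events as-is, stamped with ingestion metadata.
bronze = (
    spark.read.json("/mnt/raw/asset_events/")  # hypothetical landing zone
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.asset_events")

# Silver: apply validation rules -- drop malformed rows, deduplicate, type.
silver = (
    spark.table("bronze.asset_events")
    .filter(F.col("asset_id").isNotNull() & (F.col("cost") >= 0))
    .dropDuplicates(["asset_id", "event_ts"])
    .withColumn("event_date", F.to_date("event_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.asset_events")

# Gold: curated, finance-ready aggregate for depreciation and reporting.
gold = (
    spark.table("silver.asset_events")
    .groupBy("asset_id", "event_date")
    .agg(F.sum("cost").alias("capitalized_cost"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.asset_cost_daily")
```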

Skills & Experience:

  • Strong hands-on experience with Databricks including Delta Lake, notebooks, and workflows
  • Expertise in PySpark, SQL, and Python for large-scale data processing
  • Experience working with REST APIs, JSON, and message-based integration patterns such as Kafka or similar technologies
  • Strong understanding of lakehouse architecture and medallion data modeling (bronze, silver, gold layers)
  • Experience designing and building scalable ETL/ELT pipelines in cloud-based data platforms
  • Integration experience with enterprise systems such as ERP and revenue or billing platforms
  • Understanding of finance and ERP data models including Procure-to-Pay and Record-to-Report processes
  • Familiarity with asset lifecycle processes including capitalization, depreciation, and reserves
  • Experience with data validation, reconciliation, and financial data integrity controls (see the reconciliation sketch after this list)
  • Exposure to enterprise data platforms such as cloud data warehouses or integration tools is a plus
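
To illustrate the reconciliation-control skill above, here is a hedged sketch of a simple amount reconciliation between an ERP extract and a curated table. The table and column names (erp.asset_ledger, gold.asset_cost_daily, amount, capitalized_cost) and the tolerance value are assumptions made for the example, not a real control definition.

```python
# A simple reconciliation-control sketch, assuming both sources are
# registered Delta tables. All table/column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("recon-sketch").getOrCreate()

# Total posted amount from the (hypothetical) ERP ledger extract.
erp_total = (
    spark.table("erp.asset_ledger")
    .agg(F.sum("amount").alias("total"))
    .first()["total"]
) or 0.0

# Total capitalized cost from the curated gold table.
gold_total = (
    spark.table("gold.asset_cost_daily")
    .agg(F.sum("capitalized_cost").alias("total"))
    .first()["total"]
) or 0.0

# Flag any variance beyond a small rounding tolerance for finance review.
TOLERANCE = 0.01
variance = abs(erp_total - gold_total)
if variance > TOLERANCE:
    raise ValueError(f"Reconciliation failed: ERP={erp_total}, gold={gold_total}")
print(f"Reconciliation passed (variance={variance})")
```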

Job ID: 146186725
