
Servify

Data Engineer

Posted 14 hours ago

Job Description

About Servify:

Servify is a global product lifecycle management company, operating across India, North America, Europe, and MENA, that designs and administers custom product protection programs and exchange/upgrade programs for carriers, OEM brands, and retailers. We have cultivated a diverse, global client portfolio that includes global Fortune 100 companies, OEMs representing more than 87% of global mobile phone market share (including the likes of Apple and Samsung), and more than 75 other brands, supporting their product care solutions. Servify protects tens of millions of devices across the globe and supports the distribution of device protection products in over 200,000 retail outlets worldwide.

POSITION SUMMARY:

We are seeking an experienced Senior Data Engineer to take operational ownership of our established data ecosystem. If you have 5+ years of data engineering experience and 2+ recent years mastering Databricks in a production environment, this is your chance to drive a critical platform transformation. This role provides significant autonomy. You will be responsible for the continuous health of our production pipelines, ensuring data quality, and leading the charge to consolidate our entire visualization layer by migrating dashboards from Tableau onto Databricks.

KEY RESPONSIBILITIES:

1. Pipeline Stabilization & Management: Manage, monitor, and ensure the 24/7 operational reliability of our current suite of production Databricks (PySpark/Delta Lake) pipelines. You are directly responsible for pipeline operational health.

2. ETL Architecture Fidelity: Maintain and iterate on complex ETL/ELT processes structured around the Medallion Architecture. Guarantee the integrity and performance of data moving from the Bronze to the Gold layer.

3. Visualization Migration Lead: Execute the end-to-end migration of all existing business intelligence dashboards from Tableau onto the native Databricks visualization platform. This is a core focus area.

4. Source System Integration: Design and optimize ingestion logic for diverse data sources, with specific responsibility for extracting, transforming, and loading data efficiently from PostgreSQL and MongoDB.

5. Partner Sharing & Security: Establish and govern secure, reliable mechanisms for sharing finalized Databricks visualizations and reports with both internal stakeholders and external partners.

6. Cost & Performance Optimization: Actively tune Spark jobs, cluster configurations, and Delta tables to drive down cloud costs and reduce pipeline latency.
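For candidates unfamiliar with the Medallion pattern named in responsibility 2, the Bronze-to-Gold flow can be sketched in plain Python. This is a hedged illustration only: the actual pipelines use PySpark DataFrames and Delta tables, and the record fields below are invented for the example.

```python
# Minimal sketch of the Bronze -> Silver -> Gold (Medallion) layering.
# Bronze: raw records as ingested, possibly duplicated or incomplete.
bronze = [
    {"device_id": "A1", "claim_amount": "120.50"},
    {"device_id": "A1", "claim_amount": "120.50"},  # duplicate row
    {"device_id": "B2", "claim_amount": None},      # missing value
    {"device_id": "C3", "claim_amount": "75.00"},
]

def to_silver(records):
    """Silver: validated, deduplicated, correctly typed records."""
    seen, silver = set(), []
    for r in records:
        key = (r["device_id"], r["claim_amount"])
        if r["claim_amount"] is None or key in seen:
            continue  # drop incomplete rows and duplicates
        seen.add(key)
        silver.append({"device_id": r["device_id"],
                       "claim_amount": float(r["claim_amount"])})
    return silver

def to_gold(records):
    """Gold: business-level aggregate ready for a dashboard."""
    return {"total_claims": len(records),
            "total_amount": sum(r["claim_amount"] for r in records)}

gold = to_gold(to_silver(bronze))
print(gold)  # {'total_claims': 2, 'total_amount': 195.5}
```

In a Databricks setting, each layer would typically be a separate Delta table, with the Silver and Gold steps expressed as Spark transformations rather than Python loops.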

REQUIREMENTS:

1. 5+ years of experience building and managing robust data infrastructure.

2. Databricks Mastery: Minimum of 2 years of recent, hands-on production experience managing jobs, clusters, and data assets within the Databricks environment.

3. Expert proficiency in Python (PySpark) and advanced SQL. You should be comfortable working extensively with Databricks notebooks and terminals.

4. Database Expertise: Proven ability to connect to, query, and efficiently extract large datasets from PostgreSQL and MongoDB (an understanding of NoSQL schema design and extraction methods is key).

5. Architecture & Methodology: Practical experience implementing and maintaining the Medallion Data Architecture.

6. BI Tool Knowledge: Prior exposure to Tableau is essential for understanding the migration scope, coupled with proven experience developing dashboards in Databricks SQL Analytics/Warehouse.
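Requirement 4 is about efficiently pulling large datasets out of an operational database. One common pattern for that is keyset-paginated extraction, sketched below in plain Python. The in-memory "table" and all names are invented stand-ins; a real implementation would issue the equivalent query through a PostgreSQL or MongoDB driver.

```python
# Hedged sketch of batched (keyset-paginated) extraction: pull rows in
# pages keyed on the last seen id, so no query ever loads the full table.
TABLE = [{"id": i, "payload": f"row-{i}"} for i in range(1, 11)]

def fetch_page(after_id, limit):
    """Stand-in for: SELECT ... WHERE id > %s ORDER BY id LIMIT %s."""
    rows = [r for r in TABLE if r["id"] > after_id]
    return rows[:limit]

def extract_all(batch_size=4):
    """Yield every row, one page at a time, tracking the last seen key."""
    last_id = 0
    while True:
        page = fetch_page(last_id, batch_size)
        if not page:
            break
        yield from page
        last_id = page[-1]["id"]

rows = list(extract_all())
print(len(rows))  # 10
```

Keyset pagination scales better than OFFSET-based paging on large tables because each page is an indexed range scan; the same idea applies to MongoDB cursors sorted on `_id`.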


Job ID: 134693771
