Senior Data Engineer – Data Ingestion & Pipelines

Company name confidential
4-9 Years
18 - 30 LPA
  • Posted 29 days ago
  • Over 100 applicants

Job Description

Senior Data Engineer – Data Ingestion & Pipelines

Location: India (Bangalore)

Experience: 4-9 years

Client: Saltmine

About the Role

We are looking for a Senior Data Engineer who is deeply comfortable working with data: building ingestion pipelines, writing efficient transformations, exploring different database technologies, and ensuring reliable data movement across the organization.

You will primarily work on our Kafka ingestion flow, distributed data systems, and multiple storage layers (SQL, NoSQL, and graph databases).

The ideal candidate enjoys working with raw data, optimizing queries, and building stable pipelines using Python and Spark.

This role is perfect for someone who is hands-on with data and curious about the wider data world.

Key Responsibilities

Data Engineering (Primary Focus)

Build and maintain robust ETL/ELT workflows using Python and Apache Spark (see the sketch after this list).

Design and optimize transformations across SQL, NoSQL, and graph database ecosystems.

Implement data quality checks, schema handling, and consistency across pipelines.

Deal with complex, high-volume datasets and real-world production data.
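
To give a flavour of the day-to-day work, here is a minimal sketch of a Spark batch transformation with a simple data quality gate. The paths, column names, and the null-check rule are illustrative assumptions, not details from this role.

    # Minimal PySpark ETL sketch: extract raw JSON, transform, quality-check, load Parquet.
    # Source/target paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: semi-structured JSON input (hypothetical source path)
    raw = spark.read.json("s3://raw-bucket/orders/")

    # Transform: normalize types, then derive a daily revenue aggregate
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
    )
    daily = (
        orders.groupBy(F.to_date("order_ts").alias("order_date"))
              .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    )

    # Data quality check: fail fast if required fields are missing
    bad = orders.filter(F.col("order_id").isNull() | F.col("amount").isNull()).count()
    if bad > 0:
        raise ValueError(f"{bad} rows failed quality checks")

    # Load: partitioned Parquet in the curated zone (hypothetical target path)
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://curated-bucket/daily_revenue/"
    )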

Streaming & Ingestion

Work on Kafka ingestion pipelines: topics, partitions, consumer logic, and schema evolution (see the sketch after this list).

Monitor ingestion performance, throughput, and reliability.

Build connectors/utilities for integrating various sources and sinks.
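
As a rough illustration of the consumer-side work, the sketch below uses the kafka-python client to show the pieces named above: topics, partitions, consumer groups, and offsets. The topic name, group id, and broker address are made-up assumptions.

    # Minimal Kafka consumer sketch with manual offset commits (kafka-python client).
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "orders",                                  # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        group_id="ingestion-etl",                  # consumer group: partitions split across members
        enable_auto_commit=False,                  # commit only after a record is processed
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    for message in consumer:
        record = message.value
        # ... validate the record and hand it to the sink here ...
        print(f"partition={message.partition} offset={message.offset}")
        consumer.commit()  # at-least-once delivery: offset advances only after processing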

Querying & Multi-Database Work

Write and optimize complex queries across relational and NoSQL systems.

Work with graph databases and graph query languages (e.g., Cypher, Gremlin); a sample query follows this list.

Understand indexing, modeling, and access patterns across different DB types.
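
For the graph side, here is a small example of running a Cypher query through the official neo4j Python driver. The customer/product schema and credentials are invented for illustration, not the client's actual data model.

    # Cypher via the neo4j Python driver: "frequently bought together" for one SKU.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    query = """
    MATCH (p:Product {sku: $sku})<-[:CONTAINS]-(:Order)-[:CONTAINS]->(other:Product)
    WHERE other <> p
    RETURN other.sku AS sku, count(*) AS together
    ORDER BY together DESC LIMIT 10
    """

    with driver.session() as session:
        for row in session.run(query, sku="SKU-123"):  # hypothetical SKU
            print(row["sku"], row["together"])

    driver.close()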

Python & DevOps Awareness

Write clean and modular Python code for ETL jobs and data utilities (see the sketch after this list).

Basic understanding of CI/CD, job orchestration, logging, and monitoring.

Ability to debug production ETL issues end-to-end.
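
As a sketch of the "clean, modular Python with logging" expectation, here is a small retry helper of the sort often wrapped around flaky extract/load steps. The function names and retry policy are illustrative assumptions.

    # Reusable retry decorator with exponential backoff and structured logging.
    import logging
    import time
    from functools import wraps

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("etl")

    def retry(attempts=3, backoff_seconds=2.0):
        """Retry a step, logging each failure, re-raising after the last attempt."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                for attempt in range(1, attempts + 1):
                    try:
                        return fn(*args, **kwargs)
                    except Exception:
                        log.exception("step %s failed (attempt %d/%d)",
                                      fn.__name__, attempt, attempts)
                        if attempt == attempts:
                            raise
                        time.sleep(backoff_seconds * 2 ** (attempt - 1))
            return wrapper
        return decorator

    @retry(attempts=3)
    def load_batch(rows):
        # ... write rows to the target store here (hypothetical sink) ...
        log.info("loaded %d rows", len(rows))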

Required Skills

Strong Python: ETL, utilities, data processing.

Strong SQL: query optimization and data modeling fundamentals.

Mandatory Spark experience: batch transformations, performance tuning.

Kafka basics: producers, consumers, offsets, schema handling.

Hands-on experience with relational and NoSQL databases.

Exposure to graph databases and graph query languages.

Strong debugging and data exploration skills.

Nice-To-Have

Retail or e-commerce domain experience.

Familiarity with Terraform or basic infra automation.

Experience with nested JSON, denormalized structures, and semi-structured data.

Understanding of distributed storage concepts.

Soft Skills

High ownership and end-to-end problem solving.

Enjoys working with complex or messy data.

Curious mindset: comfortable exploring new databases and data flows.

Clear communication and ability to collaborate with backend and product teams.

More Info

Open to candidates from: India

Job ID: 135846099