
Bajaj Finserv

Senior Data Engineer

  • Posted a day ago

Job Description

Location Name: Pune Corporate Office - Mantri

Job Purpose

To effectively design, develop, and manage data solutions using ETL technologies such as Azure Databricks (ADB) and Azure Data Factory (ADF), work with NoSQL databases like Cosmos DB, and apply object-oriented programming (OOP) principles through C#/.NET for scalable data integration and backend services.

Duties And Responsibilities

  • Databricks Development: Build and manage Databricks notebooks using SQL and PySpark within a reusable framework.
  • ETL Development: Design and maintain data integration pipelines in Azure Data Factory.
  • Database Proficiency: Strong knowledge of SQL and experience with relational databases like SQL Server, MySQL, etc.
  • API/Data Exchange Layer: Develop and manage backend services in C#/.NET, hosted on Azure App Service.
  • NoSQL Expertise: Work with Cosmos DB or MongoDB (documents & collections), including concepts like indexing, partitioning, change feed, throttling, etc.
  • CI/CD & Release Management: Utilize DevOps pipelines for code versioning, automated testing, and release.
  • Time-Series Data Handling: Leverage Azure Data Explorer (ADX) for time-series data, with knowledge of materialized views, sharding, and caching policies.
  • Real-Time Streaming (Nice to Have): Basic understanding of Azure Event Hub, including batching, offsets/checkpoints, payload handling, and throttling.
  • Cloud Platform Familiarity (Preferred): Exposure to Azure cloud services and architecture best practices.

Key Responsibilities/Major Challenges


  • Translate business requirements into technical solutions in collaboration with the PMO team.
  • Own end-to-end delivery of data projects, ensuring on-time execution and adherence to quality standards.
  • Design technical architecture and guide development efforts for enhancements and new projects.
  • Develop and maintain robust ETL pipelines and data integration modules across systems.
  • Ensure high data quality, platform stability, and resolution of critical process issues.
  • Monitor and resolve performance bottlenecks in data workflows and programs.
  • Establish best practices, standard operating procedures, and drive their implementation across teams.
  • Act as a liaison with business users and product managers to support daily data needs and strategic initiatives.
  • Coordinate with internal and external development teams to troubleshoot and resolve issues efficiently.
  • Manage workload through effective planning, prioritization, and progress tracking.
  • Balance development timelines with on-time delivery in a dynamic, evolving technical environment.
  • Coordinate with multiple internal and external stakeholders to align priorities, resolve dependencies, and ensure smooth execution.
  • Ensure code quality, data integrity, and performance while scaling data solutions across diverse systems and platforms.

Key Decisions / Dimensions


  • Making critical decisions during production issues, including root cause analysis, quick fixes, and long-term resolutions.
  • Prioritizing and escalating support tasks effectively to minimize downtime and business impact.
  • Driving decisions around technical design, architecture, and optimization to ensure performance, scalability, and maintainability of solutions.

Educational Qualifications


Required Qualifications and Experience

  • Graduate or postgraduate degree in Computer Science, Information Technology, or Data Science/Technologies.

Work Experience


  • 0.5-1 year of hands-on data engineering experience.
  • Technical Expertise / Skills Keywords:
  • Azure Databricks (PySpark, SQL) - Must Have
  • Azure Data Factory for ETL & data integration - Must Have
  • OOP concepts in C#/.NET
  • Cosmos DB for NoSQL
  • Event Hub & Kafka for change feed & real-time streaming - Good to Have
  • Azure Data Explorer as a time-series database with Kusto Query Language (KQL) - Good to Have


Job ID: 144745549
