
UNLOQ

Data Engineer II

  • Posted 17 hours ago

Job Description

We are looking for a Data Engineer to build and own large-scale data infrastructure powering analytics, product insights, and business decision-making. You will define core Sources of Truth (SOT), design unified data models, and build reliable batch and streaming pipelines that process high-volume event and transactional data. This role sits at the intersection of software engineering, SRE, and data engineering, and requires strong foundations in distributed systems, SQL, and big data processing. You will own the stack end to end, from infrastructure set-up to the development of high-volume data pipelines.

Responsibilities

  • Build and automate large-scale, high-performance data pipelines (batch and streaming).
  • Define and own Sources of Truth (SOT) and dataset design used across multiple teams.
  • Streamline ingestion and processing of raw event sources into authoritative event logs.
  • Lead data engineering projects, ensuring pipelines are reliable, efficient, testable, and maintainable.
  • Design and optimise data models for analytics, reporting, and downstream product use cases.
  • Build systems to monitor data quality, data loss, SLAs, and reliability of Tier-1 and Tier-2 datasets.
  • Devise strategies to detect, reconcile, and compensate for data loss across multiple sources.
  • Evangelise high-quality software engineering practices for data infrastructure at scale.
  • Collaborate with Data Science, Analytics, Product, and Engineering teams to align on data architecture.
  • Contribute to shared data tooling, frameworks, and standards to improve developer productivity.
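One responsibility above, detecting and reconciling data loss across multiple sources, can be sketched as a count-and-fingerprint comparison between two copies of an event stream. This is a minimal illustration, not the team's actual tooling; the function and dataset names are hypothetical.

```python
import hashlib

def row_fingerprint(row: dict) -> str:
    """Stable fingerprint of a record, used to compare copies across sources."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(source_a: list[dict], source_b: list[dict]) -> dict:
    """Report how many records are missing from either of two event sources."""
    a = {row_fingerprint(r) for r in source_a}
    b = {row_fingerprint(r) for r in source_b}
    return {
        "only_in_a": len(a - b),   # candidate data loss in source B
        "only_in_b": len(b - a),   # candidate data loss in source A
        "matched": len(a & b),
    }

# Hypothetical example: one event ingested upstream never reached the warehouse.
events_raw = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
events_warehouse = [{"id": 1, "amt": 10}]
print(reconcile(events_raw, events_warehouse))
```

In practice this comparison would run over partitioned, time-windowed data rather than whole tables, but the core idea (a deterministic per-record fingerprint compared set-wise) is the same.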

Requirements

  • 3-5+ years of relevant Data Engineering or Software Engineering experience.
  • Bachelor's or Master's degree in Computer Science, Engineering, or equivalent practical experience.
  • Strong experience working with large-scale datasets (terabyte to petabyte-scale).
  • Solid background in distributed systems design and operation.
  • Excellent SQL skills (mandatory); experience with complex analytical queries.
  • Solid understanding of data modelling (star/snowflake schemas, fact and dimension tables).
  • Hands-on experience with Spark and data processing frameworks.
  • Proficiency in one or more programming languages: Java, Scala, or Python.
  • Well-versed with the tooling of at least one major cloud vendor (AWS, GCP, or Azure).
  • Experience with ETL frameworks, data pipelines, data lakes, and data modelling fundamentals.
  • Strong understanding of monitoring, logging, and observability for data systems.
  • Ability to work across teams to define overarching data architecture and influence best practices.
  • Strong problem-solving skills and attention to data correctness and reliability.
  • Excellent written and verbal communication skills.
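The data-modelling requirement above (star/snowflake schemas, fact and dimension tables) can be illustrated with a minimal star schema: one fact table referencing one dimension table, queried with a typical join-and-aggregate. Table and column names here are illustrative only.

```python
import sqlite3

# Minimal star schema: a fact table of sales referencing a product dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_id INTEGER PRIMARY KEY,
        category   TEXT
    );
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "books"), (2, "games")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(10, 1, 12.0), (11, 1, 8.0), (12, 2, 30.0)])

# A typical analytical query: aggregate the fact table by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 20.0), ('games', 30.0)]
```

A snowflake schema extends this by further normalising the dimensions (e.g. splitting `category` into its own table); the fact table stays the same.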

Must Have

  • Experience with real-time / streaming data (Kafka, Flink, Beam).
  • Familiarity with Hadoop / HDFS ecosystems.
  • Experience building or integrating with backend services.
  • Exposure to cloud platforms (AWS, GCP, or Azure), including pipeline and cost optimisation.
  • Exposure to Snowflake, SQL at scale, and modern analytics engineering.
  • Exposure to designing and building secured data systems using RBAC and CBAC for audit and compliance.
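The last item, securing data systems with role-based access control for audit and compliance, can be sketched as a role-to-grant lookup that records every decision. All roles, users, and dataset names below are hypothetical, and a real system would back this with the platform's IAM rather than in-memory dictionaries.

```python
# Minimal role-based access control (RBAC) check with an audit trail.
ROLE_GRANTS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
}
USER_ROLES = {"asha": "analyst", "ravi": "engineer"}
AUDIT_LOG: list[tuple[str, str, str, bool]] = []

def authorize(user: str, action: str, dataset: str) -> bool:
    """Allow the action only if the user's role grants it; log every decision."""
    allowed = action in ROLE_GRANTS.get(USER_ROLES.get(user, ""), set())
    AUDIT_LOG.append((user, action, dataset, allowed))
    return allowed

print(authorize("asha", "write", "fact_sales"))  # False: analysts are read-only
print(authorize("ravi", "write", "fact_sales"))  # True
```

The audit log of (user, action, dataset, decision) tuples is what compliance reviews typically consume; denials are recorded just like grants.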

This job was posted by Ishita Singh from Unloq.

Job ID: 147204779
