
About Us:
slice
A new bank for a new India.
slice's purpose is to make the world better at using money and time, with a major focus on building the best consumer experience for your money. We've all felt how slow, confusing, and complicated banking can be. So, we're reimagining it. We're building every product from scratch to be fast, transparent, and feel good, because we believe that the best products transcend demographics, like how great music touches most of us.
Our cornerstone products and services: slice savings account, slice UPI credit card, slice UPI, and slice business are designed to be simple, rewarding, and completely in your control. At slice, you'll get to build things you'd use yourself and shape the future of banking in India. We tailor our working experience with the belief that the present moment is the only real thing in life. And we find the most harmony in the present when we feel happy and successful together.
We're backed by some of the world's leading investors, including Tiger Global, Insight Partners, Advent International, Blume Ventures, and Gunosy Capital.
About the role:
We are looking for Analytics Engineers with 3-5 years of experience to join the Data Platform team. The role has two parallel responsibilities.
Building and owning data marts: the business-modelled analytical layer between our raw data lake and every team making decisions with data. Fact tables, dimension tables, SCD management, quality tests, SLA adherence. Not all data marts are the same - some run on S3-backed Delta Lake for historical batch consumption, others on Pinot for near-real-time serving. You understand how the storage layer shapes the model, and you design accordingly.
Contributing to the platform: reusable Spark pipeline templates, quality framework extensions, onboarding & backfill automation, cost tooling, observability components. When you solve a problem in a pipeline, you ask whether it can become a platform capability. The expectation is that each data mart you build is faster to deliver than the last - because of tools you helped create.
We expect engineers here to use AI tools as a genuine part of how they work - for development, debugging, documentation, and quality. Not occasionally. This is how the team moves fast without cutting corners.
You Should Have:
AI Engineering
Data Modelling
Deep dimensional modelling: grain, additive vs semi-additive measures, conformed dimensions, SCD patterns, and when star schema is the wrong answer. You know when to normalise vs denormalise based on access patterns, not convention. You understand that storage engine shapes the model - a Delta Lake batch fact table and a Pinot real-time fact table are different designs - and that near-real-time freshness changes the modelling problem meaningfully. Before writing code, you resolve the right questions with stakeholders: grain, shared dimensions, and where denormalisation is justified by query pattern.
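To make the additive vs semi-additive distinction above concrete, here is a toy sketch (the table, column names, and values are illustrative, not from this posting): a fact table at one-row-per-transaction grain with a transaction amount, which is fully additive, and an end-of-day balance, which is semi-additive and must not be summed across time.

```python
# Toy fact table - grain: one row per transaction per account.
# All names (fact_txn, amount, balance_eod) are hypothetical examples.
fact_txn = [
    {"date": "2024-01-01", "account": "A", "amount": 100, "balance_eod": 100},
    {"date": "2024-01-02", "account": "A", "amount": 50,  "balance_eod": 150},
]

# 'amount' is fully additive: summing across any dimension is meaningful.
total_spend = sum(r["amount"] for r in fact_txn)

# 'balance_eod' is semi-additive: it can be summed across accounts on one
# date, but across dates the correct aggregation is "latest", not "sum".
latest_balance = max(fact_txn, key=lambda r: r["date"])["balance_eod"]
wrong_balance = sum(r["balance_eod"] for r in fact_txn)  # 250 - meaningless

print(total_spend, latest_balance, wrong_balance)
```

Choosing the grain up front is what makes these aggregation rules explicit; the same balance column at account-day grain and at transaction grain behaves differently under a naive SUM.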
SQL & Transformation
You work in a version-controlled transformation framework - dbt or equivalent - and treat data models as software. Materialisation strategies, incremental models, ref-based dependencies, and model-level tests are part of every delivery. PR-based code review is the standard gate before anything reaches production. Every model is documented, tested, and reviewable by someone who wasn't in the room when it was written.
BI Layer Awareness
You design with downstream consumption in mind. A mart that is technically correct but difficult to query in Superset is an incomplete delivery. You validate that what you built answers the question it was meant to answer, and work with analysts to catch consumption friction early.
Apache Spark
Core Tools
Pipeline Engineering
Idempotency is a design principle, not an afterthought. You have implemented SCD Type 2 or equivalent and can articulate the trade-offs. When issues arise, you diagnose fast, feeding query plans and stack traces into AI tools to cut time-to-root-cause, while knowing which outputs to verify before acting.
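The SCD Type 2 and idempotency points can be sketched together. This is a minimal pure-Python illustration (in production this would typically be a Delta Lake MERGE or equivalent); all column names here - customer_id, tier, valid_from, valid_to, is_current - are assumptions for the example, and the key property is that re-running the same change is a no-op.

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # open-ended validity marker

def scd2_upsert(dim_rows, incoming, as_of):
    """Apply an SCD Type 2 change to an in-memory dimension table.

    dim_rows: list of dicts (the dimension); incoming: latest attribute
    values for one key; as_of: effective date of the change.
    """
    key = incoming["customer_id"]
    current = next(
        (r for r in dim_rows if r["customer_id"] == key and r["is_current"]),
        None,
    )
    # Idempotency: if nothing changed, a re-run leaves the table untouched.
    if current and current["tier"] == incoming["tier"]:
        return dim_rows
    if current:
        current["valid_to"] = as_of      # close out the old version
        current["is_current"] = False
    dim_rows.append({                    # insert the new current version
        "customer_id": key,
        "tier": incoming["tier"],
        "valid_from": as_of,
        "valid_to": HIGH_DATE,
        "is_current": True,
    })
    return dim_rows

dim = []
scd2_upsert(dim, {"customer_id": 1, "tier": "gold"}, date(2024, 1, 1))
scd2_upsert(dim, {"customer_id": 1, "tier": "gold"}, date(2024, 2, 1))   # no-op
scd2_upsert(dim, {"customer_id": 1, "tier": "platinum"}, date(2024, 3, 1))
```

The trade-off worth articulating: the closed-out row preserves history for as-of queries, at the cost of a wider table and more complex joins than a Type 1 overwrite.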
Data Quality Engineering
Custom quality assertions beyond framework defaults: volume checks, distribution comparisons, referential integrity. You can implement anomaly detection on metric time series without a library doing all the work. You apply AI-driven pattern detection where deterministic rules fall short, and use LLMs to generate quality rule suggestions from schema and sample data without skipping the review.
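"Anomaly detection without a library doing all the work" can be as simple as a robust z-score over a daily metric. A minimal sketch, assuming a modified z-score (median/MAD based) with an illustrative threshold of 3.5 - the metric name and threshold are examples, not requirements from this posting:

```python
import statistics

def flag_anomalies(series, threshold=3.5):
    """Return indices of points whose modified z-score exceeds `threshold`.

    Uses median and MAD rather than mean/stddev so a single spike does not
    inflate the baseline it is being compared against.
    """
    med = statistics.median(series)
    mad = statistics.median(abs(x - med) for x in series)
    if mad == 0:
        # Degenerate case: constant series - any deviation is anomalous.
        return [i for i, x in enumerate(series) if x != med]
    return [
        i for i, x in enumerate(series)
        if abs(0.6745 * (x - med) / mad) > threshold
    ]

# Hypothetical daily row-count metric with one bad load on day 6.
daily_volume = [100, 102, 98, 101, 99, 400, 103]
anomalies = flag_anomalies(daily_volume)
```

A check like this slots naturally into a quality framework as a volume assertion that fails the pipeline (or pages the owner) instead of silently publishing a suspect partition.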
Product and Business Sense
You think in metrics before tables. You ask what question someone is trying to answer before asking what columns they need, and push back on underspecified requirements with precision. Familiarity with fintech metrics - KYC approvals, repayment rates, settlement ratios, disbursement volumes, conversion rates, cohort retention - is a strong plus.
Nice to Have
Life at slice:
Life so good, you'd think we're kidding
Job ID: 146980733