
Senior Data Engineer
Parentheses Labs • Kolkata (In-office / Remote)
Role
Senior Data Engineer
Experience
5+ years
Location
Kolkata — In-office / Remote (hybrid flexibility)
Employment Type
Full-time
Compensation
₹15,00,000 – ₹18,00,000 per annum (CTC), based on experience
Reports To
Head of Business Vertical / TBD
About Parentheses Labs
Parentheses Labs is an AI and software products company headquartered in Kolkata, serving clients across the US, the GCC, and India. We build data-driven products and platforms across multiple verticals — from banking performance infrastructure to adaptive learning systems and B2B SaaS — and we believe great engineering starts with great data.
About the Role
We are looking for a Senior Data Engineer to own the data backbone that powers our analytics, AI, and reporting workloads across multiple business verticals. You will work directly with engineering, product, and business stakeholders to design pipelines, model warehouses, and deliver insights that decision-makers can actually trust and act on.
What You'll Do
• Design, build, and maintain robust ELT/ETL pipelines feeding our Snowflake warehouse from diverse sources (APIs, databases, files, event streams).
• Write performant, well-tested SQL — from analytical queries to complex transformations — and own data modeling decisions (star/snowflake schemas, slowly changing dimensions, marts).
• Build and maintain Power BI datasets, semantic models, and dashboards that business teams actually use; optimize DAX and refresh performance.
• Partner with stakeholders across verticals (banking, HVAC services, education, MSME platforms) to translate business questions into data products.
• Apply AI tooling (LLMs, copilots, embeddings, RAG) to accelerate data engineering work — pipeline scaffolding, data quality checks, documentation, and ad-hoc analytics.
• Own data quality, lineage, and observability — write tests, set up monitoring, and make sure broken pipelines get noticed before stakeholders do.
• Build advanced Excel models when that's the right tool for the job (financial analyses, exec-ready workbooks, what-if models).
• Mentor junior engineers and analysts, review code, and raise the bar for engineering craft on the team.
• Document everything you build — architectures, schemas, runbooks — so the next engineer (or AI agent) can pick it up in a day.
What We're Looking For
Required
• 5+ years of professional data engineering experience, with a strong track record of shipping production data systems.
• Excellent SQL skills — you can write, optimize, and debug complex analytical queries; you understand query plans and indexing.
• Hands-on production experience with Snowflake — warehouse design, role/grant model, cost optimization, Snowpipe, tasks, and streams.
• Strong Power BI skills — semantic modeling, DAX, row-level security, gateways, and performance tuning.
• Advanced Excel — Power Query, pivot models, complex formulas, and the judgment to know when Excel is the right tool versus when it isn't.
• Demonstrated AI fluency — comfortable using LLM-based tools (Claude, ChatGPT, Copilot, Cursor, etc.) as a daily part of your engineering workflow, with a clear understanding of where they help and where they don't.
• Excellent written and verbal communication in English — you can explain a data model to a CFO and a CTO in the same meeting.
• Strong educational background — degree in Computer Science, Engineering, Statistics, Mathematics, or a related quantitative field from a reputed institution.
• Genuine intellectual curiosity and a quick-learner mindset — you enjoy picking up new domains, tools, and stacks, and you don't wait to be told what to learn next.
• Comfort working across multiple business verticals and context-switching between projects without losing quality.
Nice to Have
• Experience with dbt for transformation modeling and lineage.
• Python proficiency for data work (pandas, SQLAlchemy, orchestration with Airflow / Prefect / Dagster).
• Exposure to streaming or event-driven data (Kafka, Kinesis, Snowpipe Streaming).
• Experience integrating data with marketing platforms (GA4, ad platforms, HubSpot, Salesforce) or financial systems.
• Familiarity with cloud data ecosystems on AWS, Azure, or GCP.
• Experience building or fine-tuning LLM-powered analytics features (text-to-SQL, RAG over warehouse data, AI assistants).
• Prior experience working with US/international clients.
What We Offer
• Compensation of ₹15–18 LPA, calibrated to experience and skill depth.
• Hybrid working model — work from our Kolkata office or remotely, with flexibility based on project needs.
• Direct exposure to multiple industries and clients across India, the US, and the GCC.
• A modern, AI-first engineering culture — we expect you to use the best tools available, and we invest in them.
• High ownership and short feedback loops — your work is visible to leadership and to clients.
• Learning budget for courses, certifications, and conferences relevant to your growth.
• A small, technically strong team where senior engineers shape the architecture and the hiring bar.
How to Apply
Application link: https://equip.co/job-posts/9qsYsb/
Shortlisted candidates will go through a technical SQL/Snowflake exercise, a system design discussion, and a stakeholder-communication round.
Contact - [Confidential Information]
Job ID: 147270731
Skills:
Data Engineering, Python, PySpark, AWS Glue, Docker, AWS Batch