Wissen Technology is Hiring for Databricks Engineer – Azure Fabric
About Wissen Technology:
At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, Wissen Technology is built around a strong product engineering mindset: every solution is architected and delivered right the first time. Today, we have a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don't just meet expectations; we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients with the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact, the first time, every time.
Job Summary:
We are seeking a highly skilled Databricks Engineer – Azure Fabric with 6–8 years of experience in data engineering to design, build, and maintain scalable data platforms on Microsoft Fabric and Azure. The ideal candidate will have strong hands‑on experience with Python, SQL, Microsoft Fabric (OneLake, Lakehouse, Data Factory), and Delta Lake, along with the ownership mindset to deliver regulatory‑grade, enterprise data solutions. This role involves close collaboration with global engineering, data, compliance, and business teams and supports advanced analytics and AI‑enabled data products.
Experience: 6–8 years
Location: Mumbai/Pune/Bangalore
Mode of Work: Full-time
Key Responsibilities:
- Design, build, and maintain scalable, distributed, and fault‑tolerant data pipelines on Microsoft Fabric.
- Develop lakehouse architectures using OneLake and Delta Lake, including incremental merge workflows and Change Data Feed (see the sketch after this list).
- Build pipelines to ingest, normalize, transform, and publish large volumes of financial market data.
- Design and implement bitemporal data models (valid‑time and system‑time) for regulatory‑grade time‑series datasets.
- Participate in cross‑functional discussions with engineering, compliance, research, and business stakeholders globally.
- Build and maintain testing frameworks (unit, regression, UAT) for data pipelines and transformations.
- Own end‑to‑end delivery of solutions, including ingestion pipelines, QA processes, correction handling, and audit trails.
- Collaborate on shared platform services and reusable components instead of siloed implementations.
- Apply business understanding of financial reference data (equities and other asset classes).
- Support AI enablement use cases such as AI‑assisted ingestion, anomaly detection, and semantic search over lakehouse data.
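To give a flavour of the hands-on work described above, the sketch below shows a minimal PySpark pattern for an incremental Delta Lake merge with Change Data Feed enabled. The table and column names (bronze_trades, silver_trades, trade_id) are purely illustrative and not part of any actual platform.

```python
from pyspark.sql import SparkSession

# Minimal sketch: incremental MERGE into a Delta table with Change Data Feed enabled.
# Table and column names are illustrative only.
spark = SparkSession.builder.getOrCreate()

# Target table with Change Data Feed enabled so downstream consumers can read row-level changes.
spark.sql("""
    CREATE TABLE IF NOT EXISTS silver_trades (
        trade_id   STRING,
        price      DOUBLE,
        updated_at TIMESTAMP
    ) USING DELTA
    TBLPROPERTIES (delta.enableChangeDataFeed = true)
""")

# Incremental merge: upsert the latest batch of source rows into the target.
incoming = spark.read.table("bronze_trades")
incoming.createOrReplaceTempView("incoming_trades")

spark.sql("""
    MERGE INTO silver_trades AS t
    USING incoming_trades AS s
    ON t.trade_id = s.trade_id
    WHEN MATCHED AND s.updated_at > t.updated_at THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Read the change feed (inserts/updates/deletes) produced by the merge, from a given table version.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)
    .table("silver_trades")
)
changes.show()
```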
Requirements:
- 6–8 years of experience in data engineering.
- Strong proficiency in Python for data pipelines, transformations, and automation.
- Advanced SQL skills including window functions, partitioning, and time‑series query patterns (see the query sketch after this list).
- Hands‑on experience with Microsoft Fabric:
  - OneLake
  - Fabric Data Factory pipelines
  - Fabric Lakehouse
  - Fabric Warehouse (SQL endpoint)
- Strong working knowledge of Delta Lake:
  - Table creation and management
  - Incremental merges
  - Z‑ordering
  - Change Data Feed (CDF)
- Experience using AI‑assisted development tools (e.g., GitHub Copilot, Cursor).
- Proficient with Git for code versioning, branching strategies, and pull‑request workflows.
- Experience working with REST APIs for data ingestion and system integration.
- Familiarity with Azure services such as Azure Data Factory, Azure SQL, Azure Key Vault, and RBAC.
- Strong ownership, problem‑solving, and collaboration skills.
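As an illustration of the window-function and time-series query patterns referenced above, here is a minimal Spark SQL sketch run through PySpark; the daily_prices table and its columns are hypothetical.

```python
from pyspark.sql import SparkSession

# Sketch of a typical time-series query pattern: latest record per key plus a
# trailing 5-observation moving average, using window functions.
# Table and column names are hypothetical.
spark = SparkSession.builder.getOrCreate()

latest_and_smoothed = spark.sql("""
    SELECT
        instrument_id,
        price_date,
        close_price,
        -- rank rows within each instrument so we can pick the latest observation
        ROW_NUMBER() OVER (PARTITION BY instrument_id ORDER BY price_date DESC) AS rn,
        -- trailing 5-observation moving average per instrument
        AVG(close_price) OVER (
            PARTITION BY instrument_id
            ORDER BY price_date
            ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
        ) AS close_ma_5
    FROM daily_prices
""")

# Keep only the most recent row per instrument.
latest_only = latest_and_smoothed.filter("rn = 1")
latest_only.show()
```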
Good To Have Skills:
- Experience with pandas, PySpark, or similar data processing libraries.
- Knowledge of columnar storage and time‑series analytics (e.g., ClickHouse or equivalent).
- Familiarity with Microsoft Purview for data lineage, cataloging, and data classification.
- Understanding of bitemporal modeling for financial and regulatory datasets (see the sketch after this list).
- Knowledge of financial reference data: equities, identifiers, corporate actions, index data.
- Exposure to CI/CD pipelines and automated data platform deployments.
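For context on the bitemporal modeling mentioned above, here is a minimal sketch of a valid‑time plus system‑time table and an "as‑of" query in Spark SQL; all names and dates are hypothetical.

```python
from pyspark.sql import SparkSession

# Sketch of a bitemporal table: valid-time (when a fact is true in the real world)
# plus system-time (when the row was recorded). Names and dates are hypothetical.
spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS reference_data_bitemporal (
        security_id STAMP STRING,
        attribute   STRING,
        value       STRING,
        valid_from  DATE,       -- when the fact becomes true in the real world
        valid_to    DATE,       -- when it stops being true (open-ended = 9999-12-31)
        system_from TIMESTAMP,  -- when this row was recorded in the system
        system_to   TIMESTAMP   -- when it was superseded (open-ended = 9999-12-31)
    ) USING DELTA
""")

# "As-of" query: what did the system believe on 2024-06-30 about facts valid on 2024-01-01?
as_of = spark.sql("""
    SELECT security_id, attribute, value
    FROM reference_data_bitemporal
    WHERE DATE'2024-01-01'      BETWEEN valid_from  AND valid_to
      AND TIMESTAMP'2024-06-30' BETWEEN system_from AND system_to
""")
as_of.show()
```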
Wissen Sites:
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology