
hyper lychee labs

Data Technical Lead / Data Architect

6-8 Years

Job Description

Work Mode: Remote

Experience: 5–8+ Years

Engagement Type: Contract | 6 Months (Likelihood of Extension)

Relevant Experience:  

  • Minimum 6 years of experience in Data Engineering and Business Intelligence – designing, developing, testing, and implementing data solutions.  
  • 1+ years of hands-on experience with Power BI / Tableau / Analysis Services or comparable tools for visualization and analytics in the cloud data environment.  
  • 2+ years of experience with multiple database systems such as Snowflake, Redshift, Azure Synapse, BigQuery, Oracle, SQL Server, MySQL.  
  • 2+ years of practical experience with Azure Databricks – managing clusters, optimizing workloads, developing notebooks, and implementing scalable data pipelines.  

Mandatory Skills

Technical

a. Strong ETL/ELT and SQL development expertise using cloud-based data solutions (Azure SQL, Synapse, Snowflake, Redshift, Databricks).

b. Extensive hands-on experience with Databricks, including:

  • Cluster design, configuration, scaling, and performance optimization.
  • Implementing Delta Lake, data ingestion, transformation, and job orchestration.
  • Managing cost efficiency and autoscaling policies.
  • Integrating Databricks with Azure Data Factory, Power BI, and other downstream tools.
  • Implementing Unity Catalog for data governance and access control.

c. Design and develop star schema and data vault models; build scalable and reusable ETL/ELT pipelines.
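To illustrate the star-schema modeling mentioned above, here is a minimal, hypothetical sketch of the core pattern: a dimension keyed by surrogate keys, and a fact table that references them. All names and data are illustrative assumptions, not the client's actual model.

```python
# Illustrative star-schema load: assign surrogate keys to a dimension,
# then have the fact table reference those keys instead of natural keys.
# Table and column names are made up for the example.

def build_dim(rows, natural_key):
    """Map each distinct natural key to an integer surrogate key."""
    dim = {}
    for row in rows:
        dim.setdefault(row[natural_key], len(dim) + 1)
    return dim

def build_fact(rows, dim, natural_key, measure):
    """Replace the natural key with the dimension's surrogate key."""
    return [
        {"product_sk": dim[row[natural_key]], measure: row[measure]}
        for row in rows
    ]

sales = [
    {"product": "widget", "amount": 10.0},
    {"product": "gadget", "amount": 4.5},
    {"product": "widget", "amount": 7.25},
]

dim_product = build_dim(sales, "product")          # {"widget": 1, "gadget": 2}
fact_sales = build_fact(sales, dim_product, "product", "amount")
```

In a real Databricks implementation these would be Delta tables and the surrogate-key assignment would be incremental, but the separation of dimensions from facts is the same.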

d. Design and develop using best-practice techniques across data modeling, table-driven transformations, and parameterized, dynamic ETL/ELT jobs.
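The table-driven, parameterized ETL approach above can be sketched in a few lines: each pipeline step is a metadata row, and one generic runner executes them all. Everything here (table names, transform names, the control-table shape) is an assumption for illustration, not the client's design.

```python
# Hedged sketch of table-driven, parameterized ETL: the JOBS metadata
# (which would normally live in a control table, not in code) describes
# each step, and a single generic runner applies the named transform.

TRANSFORMS = {
    "upper": lambda v: v.upper(),
    "cents_to_dollars": lambda v: v / 100,
}

JOBS = [
    {"source": "raw_names", "target": "clean_names", "transform": "upper"},
    {"source": "raw_prices", "target": "clean_prices", "transform": "cents_to_dollars"},
]

def run_jobs(tables, jobs):
    """Apply each metadata-defined transform to its source table."""
    for job in jobs:
        fn = TRANSFORMS[job["transform"]]
        tables[job["target"]] = [fn(v) for v in tables[job["source"]]]
    return tables

tables = {"raw_names": ["ada", "bob"], "raw_prices": [1250, 399]}
tables = run_jobs(tables, JOBS)
```

Adding a new pipeline step then means adding a metadata row, not writing a new job, which is the main payoff of the pattern.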

e. Establish and maintain reporting dashboard solutions for enterprise analytics.

f. Exposure to programming languages (Python, Scala, Java).

  • Databricks Associate Certification (mandatory); Databricks Professional Certification (a plus)
  • Metadata-driven orchestration (ADF/Workflows) and reusable components for DQ & archival
  • Structured Streaming, Delta Live Tables, Spark SQL, T-SQL; strong SQL optimization & dimensional modelling
  • Security with Azure Key Vault, service principals, and RBAC; encryption of data at rest and in transit
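As a small illustration of the reusable, metadata-driven data-quality components mentioned above, the sketch below runs rules defined as metadata against rows and collects failures. The rule names and columns are assumptions for the example only.

```python
# Illustrative metadata-driven data-quality checks: rules are data,
# the check implementations are reusable, and new rules need no new code.
# Column names and thresholds are invented for this example.

DQ_RULES = [
    {"column": "amount", "check": "not_null"},
    {"column": "amount", "check": "min", "value": 0},
]

CHECKS = {
    "not_null": lambda v, rule: v is not None,
    "min": lambda v, rule: v is not None and v >= rule["value"],
}

def run_dq(rows, rules):
    """Return (row_index, check_name) pairs for every failed check."""
    failures = []
    for i, row in enumerate(rows):
        for rule in rules:
            if not CHECKS[rule["check"]](row.get(rule["column"]), rule):
                failures.append((i, rule["check"]))
    return failures

rows = [{"amount": 10}, {"amount": None}, {"amount": -5}]
failures = run_dq(rows, DQ_RULES)
```

On Databricks the same idea is typically expressed with Delta Live Tables expectations or Spark SQL constraints, but the metadata-driven shape is identical.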

Non-Technical

  • Work closely with customer stakeholders, solution architects, BAs, and PMs to design and deliver scalable BI & analytics solutions.
  • Interface with business users, report developers, and other data engineers to ensure consistency and standardization across analytics delivery.
  • Contribute to HLD and LLD documentation and reviews in partnership with business and enterprise architects.
  • Handle client-facing communication and requirement analysis independently.
  • Exceptional communication, analytical, and problem-solving skills.

Preferred Skills

Technical

  • Strong understanding of Databricks architecture (control plane, data plane, cluster policies, job clusters vs all-purpose clusters).
  • Experience implementing data governance, security, and monitoring using Unity Catalog, Azure Key Vault, and role/row-based access.
  • Knowledge of CI/CD for Databricks (e.g., using Azure DevOps, GitHub Actions, or Terraform).
  • Advanced SQL and experience with ETL tools (ex: ADF, Matillion, Informatica).
  • Familiarity with event-based integrations, API-led connectivity, and microservices.
  • Exposure to data observability and lineage tracking using tools like Purview or Collibra.

Non-Technical

  • Ability to define scalable and cost-optimized Databricks architectures aligned with enterprise standards.
  • Ability to provide technical leadership, mentor junior engineers, and set best practices for Databricks usage and data design.
  • Comfortable working with ambiguous requirements, building conceptual and solution-level architectures for complex data ecosystems.
  • Strong communication, articulation, and ability to influence technical direction are mandatory.

Sample Project Use Cases

  • End-to-end re-architecture of data pipelines from Azure Synapse to Microsoft Fabric, incorporating Databricks Lakehouse for scalable processing and transformation.
  • Design and implementation of a Databricks cost-optimized architecture with auto-scaling and workload isolation for multiple business units.
  • Conceptualize, structure, and evangelize enterprise-wide central dashboard and data marketplace initiatives using Azure + Databricks stack.

About our client:  

Our client is a pure-play software services specialist focused on delivering technology services and solutions in the areas of operational efficiency and customer experience. Headquartered in Chennai, India, they serve global customers in North America, EMEA, ANZ, and APAC across a variety of industries such as Manufacturing, Energy & Utilities, Hi-Tech, Financial Services, Retail, and Automotive. Their core expertise is in building, implementing, testing, integrating, and maintaining applications leveraging a variety of cutting-edge tools and technologies.

Key service offerings for the mid-market segment are:  

  • Application Development & Maintenance  
  • Outsourced Product Development  
  • Testing & Automation  

What Our Client Offers

  • Opportunity to be part of the Technology Advisory Council (TAC) – a niche technology-focused group with a defined roadmap for professional and competency growth.  
  • Exposure to enterprise data management projects across diverse technologies, domains, and customers.  


Job ID: 147494269