
nu-pie analytics

Data Architect

  • Posted 3 hours ago

Job Description

Job Title: Data Architect

Experience: 13 to 15 Years

Contract Duration: 6 Months

Work Mode: Remote

Work Timing: US Timing (10:30 PM IST cutoff for overlap)

Location: Remote / Open

Role Overview

We are seeking a highly experienced Data Architect with 7–10 years of proven, hands-on architecture ownership who can hit the ground running. This role demands an execution-focused architect who is equally comfortable designing scalable data platforms and driving implementation independently in complex, ambiguous environments. The ideal candidate will bring strong leadership, deep technical expertise, and a bias for action.

Mandatory Skills — Technical

•      ETL/ELT and SQL coding using cloud-based database solutions such as Azure SQL, Synapse, Redshift, Snowflake, or similar tools.

•      Design and develop star schema data models and ETL/ELT jobs to support use cases across typical business domains.

•      Apply best-practice techniques across data modelling, table-driven control of transformation jobs, and dynamic ETL/ELT jobs that scale to expanding use cases.

•      Data Loading — Proficient in designing and implementing efficient data ingestion pipelines within the Databricks ecosystem.

•      Auto Loader vs. Standard Spark — Well-versed in the distinctions between Databricks Auto Loader and standard Apache Spark-based data loading, with the ability to evaluate and recommend the right approach based on use case, volume, and latency requirements.

•      Delta Load Concepts — Solid understanding of incremental and delta loading strategies using Delta Lake, including merge (upsert), CDC (Change Data Capture), and time travel capabilities.

•      Data Governance — Experienced in implementing data governance frameworks within Databricks, including Unity Catalog, access controls, data lineage, and audit logging.

•      Table Distribution — Knowledgeable in table distribution strategies such as partitioning, Z-ordering, and liquid clustering to optimize query performance and storage efficiency.
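To illustrate the merge (upsert) semantics behind the delta-load concepts above, here is a minimal sketch of the incremental-load logic using plain Python dicts in place of tables. On Databricks this would be a Delta Lake `MERGE INTO` against a Delta table; the function and field names below are illustrative, not from the posting.

```python
def merge_upsert(target, changes, key="id"):
    """Apply a batch of change records to the target by business key.

    Rows whose key already exists are updated; new keys are inserted;
    records flagged with op == "delete" are removed (a simple CDC case).
    Mirrors Delta Lake's WHEN MATCHED / WHEN NOT MATCHED merge clauses.
    """
    for row in changes:
        k = row[key]
        if row.get("op") == "delete":
            target.pop(k, None)  # WHEN MATCHED AND op = 'delete' THEN DELETE
        else:
            # WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT
            target[k] = {c: v for c, v in row.items() if c != "op"}
    return target

target = {1: {"id": 1, "amount": 100}}
changes = [
    {"id": 1, "amount": 150},   # update existing key
    {"id": 2, "amount": 75},    # insert new key
]
merged = merge_upsert(target, changes)
```

The same three outcomes (update, insert, delete) are what an incremental pipeline replays batch after batch, which is why idempotent merge-by-key is preferred over append-only loads for CDC feeds.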

Key Responsibilities

•      Own end-to-end data architecture design from conceptualization to deployment.

•      Lead and execute the design of scalable, performant data platforms on cloud infrastructure.

•      Develop and maintain enterprise data models, including star and snowflake schemas.

•      Build and optimize ETL/ELT pipelines for high-volume, complex data environments.

•      Collaborate with business stakeholders to translate requirements into robust data solutions.

•      Drive best practices in data modelling, pipeline orchestration, and data quality assurance.

•      Establish governance standards for data assets, metadata, and documentation.

•      Support and mentor junior data engineers and analysts as needed.

•      Provide executive-level reporting and analytical dashboards for business insights.

•      Proactively identify and resolve data architecture bottlenecks and inefficiencies.
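The star-schema modelling called out in the responsibilities can be sketched as one fact table joined to conformed dimension tables. The snippet below uses SQLite purely as a stand-in for a cloud warehouse; all table and column names are hypothetical.

```python
import sqlite3

# Minimal star schema: a sales fact joined to customer and date dimensions.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, iso_date TEXT);
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        date_key     INTEGER REFERENCES dim_date(date_key),
        amount       REAL
    );
    INSERT INTO dim_customer VALUES (1, 'Acme');
    INSERT INTO dim_date     VALUES (20240101, '2024-01-01');
    INSERT INTO fact_sales   VALUES (1, 20240101, 99.5);
""")

# Typical star-join query: aggregate the fact, slice by dimension attributes.
row = con.execute("""
    SELECT d.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer d ON d.customer_key = f.customer_key
    GROUP BY d.name
""").fetchone()
```

Keeping measures in the fact table and descriptive attributes in dimensions is what lets BI tools such as Power BI or Tableau generate these star-join aggregations efficiently.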

Required Qualifications

•      13–15 years of overall experience in data engineering, data warehousing, or data architecture.

•      7–10 years of proven hands-on data architecture ownership in enterprise environments.

•      Hands-on expertise with cloud data platforms: Azure Synapse, Azure SQL, Snowflake, Amazon Redshift, or equivalent.

•      Strong SQL coding skills for complex transformations, performance tuning, and schema design.

•      Demonstrated experience with ETL/ELT tools and frameworks.

•      Proficiency in Python and/or Java for data pipeline development.

•      Experience with enterprise reporting and BI tools (e.g., Power BI, Tableau, or similar).

•      Deep understanding of dimensional modelling, data vault, and relational design patterns.

•      Ability to work independently in ambiguous, fast-paced environments with minimal supervision.

Preferred Qualifications

•      Experience with data orchestration tools such as Apache Airflow, Azure Data Factory, or AWS Glue.

•      Exposure to real-time/streaming data architectures (Kafka, Spark Streaming, etc.).

•      Familiarity with DevOps / DataOps practices including CI/CD pipelines for data.

•      Experience working with cross-functional, globally distributed teams.

•      Cloud certifications (Azure, AWS, GCP) are a plus.

Engagement Details

Contract Type: Contract — 6 Months

Work Hours: US Business Hours (workday ends at 10:30 PM IST to allow overlap for calls)

Work Mode: Fully Remote

Start: Immediate

Note: Candidates must be available to attend calls/meetings during US business hours. The work schedule is structured to accommodate a meaningful overlap window ending at 10:30 PM IST.

Job ID: 145540897
