Job Title: Data Architect
Experience: 12-15 Years
Location: Remote
Engagement: Contract
Job Overview
We are looking for an experienced Data Architect to lead the design, implementation, and modernization of large-scale enterprise data platforms. The ideal candidate will bring deep expertise in cloud-based data architectures, particularly Snowflake (primary focus), along with Databricks, Azure Data Factory, Apache Spark, and Python.
This role demands strong architectural thinking, hands-on technical leadership, and the ability to build scalable Data Lakes, Data Warehouses, and Data Lakehouse solutions that support analytics, BI, and AI/ML workloads across batch and streaming environments.
Key Responsibilities
- Define and drive the enterprise data architecture strategy supporting analytics, BI reporting, and AI/ML initiatives.
- Architect, design, and implement Data Lakes, Data Warehouses, and Data Lakehouse platforms on cloud environments.
- Lead data ecosystem modernization with a strong emphasis on Snowflake, supported by Databricks, Spark, Azure Data Factory, and Python.
- Design and optimize batch and real-time streaming data pipelines for high-volume, enterprise-scale workloads.
- Establish robust data ingestion, transformation, orchestration, and automation frameworks.
- Define and enforce data governance, data quality, metadata management, and lineage standards.
- Collaborate with business stakeholders, data engineers, analytics teams, and leadership to align data architecture with business goals.
- Provide technical leadership and mentorship to engineering teams, promoting best practices and architectural standards.
- Continuously evaluate and integrate emerging technologies to enhance performance, scalability, security, and cost efficiency.
Required Skills & Experience
- 12-15 years of overall IT experience, with 8+ years in a Data Architect or Senior Data Architecture role.
- Strong hands-on experience designing and implementing enterprise data platforms from the ground up.
- Core technical expertise in:
  - Snowflake (mandatory / primary focus)
  - Databricks and Apache Spark (batch & streaming)
  - Azure Data Factory (ADF) for orchestration and data pipelines
  - Python for data engineering, automation, and custom processing
- Solid understanding of ETL/ELT patterns, data modeling, schema design, and query performance optimization.
- Experience with real-time data streaming technologies (e.g., Kafka, Spark Streaming, Azure Event Hubs).
- Working knowledge of data governance, cataloging, lineage, compliance, and security frameworks.
- Strong understanding of cloud-native services across Azure, AWS, or GCP.