

Job Description

We are seeking a Data Architect with over a decade of experience designing scalable data platforms and enterprise data models. This role requires strong knowledge of SQL/NoSQL databases, distributed systems, and cloud ecosystems across AWS, Azure, or GCP. The ideal candidate will build high-volume data pipelines, real-time streaming solutions, and robust ETL/ELT frameworks. Proficiency with tools like Spark, Databricks, Snowflake, and major integration platforms is essential. You will design secure, compliant data architectures aligned with GDPR, HIPAA, or CCPA. Additionally, you will drive data governance practices including cataloguing, lineage, and access control.

Key Responsibilities

  • Understand and translate data, analytics requirements, and functional needs into technical requirements while working with global customers.
  • Design cloud-native data architectures that support scalable, real-time, and batch processing.
  • Build and maintain data pipelines to support large scale data management in alignment with data strategy and data processing standards.
  • Define strategies for data modeling, data integration, and metadata management.
  • Bring strong experience in database, data warehouse, and data lake design and architecture.
  • Leverage cloud platforms (AWS, Azure, GCP, etc.) for data storage, compute, and analytics services such as Azure Synapse, AWS Redshift, or Google BigQuery.
  • Develop database programs using multiple flavours of SQL.
  • Implement data governance frameworks, including data quality, lineage, and cataloguing.
  • Collaborate with cross-functional teams, including business analysts, data engineers, and DevOps teams.
  • Work across the Big Data ecosystem, on-prem (Hortonworks/MapR) or cloud (Dataproc/EMR/HDInsight/Databricks/Snowpark).
  • Evaluate emerging cloud technologies and recommend improvements to data architecture.
  • Schedule pipelines with an orchestration tool such as Airflow or Oozie.
  • Hands-on experience in using Spark Streaming, Kafka, Databricks, Snowflake, etc.
  • Experience working in an Agile/Scrum development process.
  • Optimize data systems for cost efficiency, performance, and scalability.
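To illustrate the pipeline-building responsibilities above, here is a minimal extract-transform-load sketch in plain Python. It is a toy stand-in only: the sample data, function names, and in-memory "warehouse" are illustrative assumptions, not part of the role's actual stack (which centres on Spark, Databricks, and cloud warehouses).

```python
import csv
import io

# Toy input standing in for a raw source feed (illustrative data only).
RAW = """user_id,country,amount
1,DE,10.50
2,US,7.25
3,DE,3.00
"""

def extract(text):
    """Extract stage: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform stage: cast amounts to float and aggregate per country."""
    totals = {}
    for row in rows:
        totals[row["country"]] = totals.get(row["country"], 0.0) + float(row["amount"])
    return totals

def load(totals, sink):
    """Load stage: write results into a dict standing in for a warehouse table."""
    sink.update(totals)
    return sink

warehouse = {}
load(transform(extract(RAW)), warehouse)
print(warehouse)  # {'DE': 13.5, 'US': 7.25}
```

In a production pipeline each stage would typically be a task in an orchestrator such as Airflow, with Spark or a warehouse engine doing the heavy lifting; the three-stage shape is the same.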

Required Skills and Experience

  • Has strong execution knowledge of Data Modeling, Databases in general (SQL and NoSQL), software development lifecycle and practices, unit testing, functional programming, etc.
  • 10+ years of experience in data architecture, data engineering, or related roles.
  • 5+ years of experience building scalable enterprise data warehousing and modelling, on-prem and/or in the cloud (AWS, GCP, Azure).
  • Expertise in data structures, distributed computing, and manipulating and analyzing complex high-volume data from a variety of internal and external sources.
  • Expertise in cloud platforms such as Azure, AWS, or Google Cloud and their data services (e.g., Azure Data Lake, AWS S3, BigQuery).
  • Hands-on experience with data integration tools (e.g., Azure Data Factory, Talend, Informatica).
  • Experience developing ETL designs and data models for structured, unstructured, and streaming data sources.
  • Experience with real-time data streaming technologies like Kafka, Kinesis, or Event Hub.
  • Proficiency with big data tools and frameworks like Apache Spark, Databricks, Snowflake or Hadoop.
  • Experience with SQL and NoSQL databases (e.g., SQL Server, Snowflake, Cosmos DB, DynamoDB).
  • Solid knowledge of scripting and programming languages such as Python, Java, or Scala, including their Spark APIs.
  • Design secure data solutions ensuring compliance with standards such as GDPR, HIPAA, or CCPA.
  • Implement and enforce data governance frameworks, including data cataloguing, lineage, and access controls.
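The governance and compliance duties above can be hinted at with a small pseudonymisation sketch: replacing PII columns with salted hashes before data is shared downstream. The column names, salt, and truncation length are illustrative assumptions; a real GDPR/CCPA design would involve managed secrets, catalogued lineage, and policy-driven access controls.

```python
import hashlib

# Hypothetical PII columns and salt (illustrative assumptions only;
# in practice the salt would come from a managed secret store).
PII_COLUMNS = {"email", "full_name"}
SALT = "example-salt"

def pseudonymise(record):
    """Replace PII fields with a truncated salted SHA-256 digest;
    pass non-PII fields through unchanged."""
    out = {}
    for key, value in record.items():
        if key in PII_COLUMNS:
            out[key] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

row = {"user_id": 42, "email": "a@example.com", "country": "DE"}
masked = pseudonymise(row)
print(masked["user_id"], masked["country"])  # 42 DE
```

The same record always maps to the same digest, so joins on the pseudonymised column still work while the raw identifier stays out of downstream systems.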

More Info


Job ID: 145112031
