JKTech

Senior Data Architect

  • Posted 28 days ago

Job Description

About the Role:

We are looking for a Data Architect with a strong background in data engineering and cloud data platforms. The ideal candidate will design and implement scalable data architectures that power enterprise analytics, AI/ML, and GenAI solutions, ensuring data availability, quality, and governance across the organization.

Key Responsibilities:

Data Architecture & Strategy

  • Design & Architecture: Design and implement robust, scalable, and optimized data engineering solutions on the Databricks platform. Architect data pipelines that scale efficiently and reliably.
  • Data Pipeline Development: Develop ETL/ELT pipelines leveraging Databricks notebooks, Delta Lake, the Snowflake stack, and Azure Data Factory.
  • Cloud Integration: Work closely with cloud platforms such as Azure, AWS, or GCP to integrate Databricks or Snowflake with data storage (e.g., ADLS, S3), databases, and other services.
  • Performance Optimization: Optimize the performance of data workflows by tuning Databricks clusters, improving query performance, and identifying bottlenecks in data processing.
  • Collaboration: Collaborate with data scientists, analysts, and business stakeholders to understand business requirements and translate them into scalable data solutions.
  • Data Governance & Security: Ensure best practices for data security, governance, and compliance when working with sensitive or large datasets.
  • Automation & Monitoring: Automate data pipeline deployments and create monitoring dashboards for ongoing performance checks.
  • Continuous Improvement: Stay up to date with the latest Databricks features and Snowflake ecosystem best practices to continuously improve existing systems and processes.
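As a rough illustration of the pipeline development and data-quality duties listed above, here is a minimal, stdlib-only sketch of the extract-transform-load pattern. In the actual role this would run on Spark DataFrames with Delta Lake or Snowflake rather than sqlite3, and every table, column, and rule shown here is a hypothetical stand-in.

```python
import sqlite3

def extract(conn):
    # Extract raw rows from the source table.
    return conn.execute("SELECT order_id, amount, region FROM raw_orders").fetchall()

def transform(rows):
    # Transform: drop rows with missing amounts, deduplicate on order_id,
    # and normalise region names -- the kind of quality and governance
    # rules a pipeline enforces before data reaches analysts.
    seen, clean = set(), []
    for order_id, amount, region in rows:
        if amount is None or order_id in seen:
            continue
        seen.add(order_id)
        clean.append((order_id, float(amount), region.strip().upper()))
    return clean

def load(conn, rows):
    # Load the curated rows into the serving table in one transaction.
    with conn:
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# Demo with an in-memory database standing in for a real warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount REAL, region TEXT)")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 9.5, " emea"), (1, 9.5, " emea"), (2, None, "apac"), (3, 4.0, "amer ")],
)
load(conn, transform(extract(conn)))
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2 rows survive
```

The same three-stage shape scales up directly: in Databricks the transform step becomes DataFrame operations and the load step a Delta Lake write, with orchestration handled by a tool such as Airflow or Azure Data Factory.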

Required Skills & Experience:

  • 12+ years of experience in Data Architecture / Data Engineering roles.
  • Proven expertise in data modeling, ETL/ELT design, and cloud-based data solutions (AWS Redshift, Snowflake, BigQuery, or Synapse).
  • Hands-on experience with data pipeline orchestration tools (Airflow, dbt, Azure Data Factory, etc.).
  • Proficiency in Python, SQL, and Spark for data processing and integration.
  • Experience with API integrations and data APIs for AI systems.
  • Excellent communication and stakeholder management skills.

Job ID: 145760603
