JRD SYSTEMS

Data Architect

Posted a day ago

Job Description

About the Role

We are seeking an experienced Data Engineer to design, develop, and maintain scalable data solutions on Azure and Databricks as part of our enterprise data modernization initiatives.

The ideal candidate will have a strong background in data pipeline development, data integration frameworks, and cloud-based data engineering, with deep expertise in tools such as Databricks, Azure Data Factory, Alteryx, Ab Initio, Talend, and Informatica.

In this role, you will lead the design and delivery of high-performance, governed data architectures for large-scale enterprise clients, driving data reliability, compliance, and analytics readiness.

Key Responsibilities

  • Design and implement scalable, reusable data pipelines using Databricks (PySpark, Delta Lake) and Azure Data Factory, ensuring high throughput and low latency.
  • Develop and maintain Common Data Platforms (CDP) and Medallion Architecture–based data lakes to consolidate legacy systems and improve data availability.
  • Build and manage backend data services using Python, Spark, or C#, ensuring performance, reliability, and scalability.
  • Develop metadata-driven data ingestion frameworks capable of handling schema drift across multiple data sources.
  • Lead data integration and migration initiatives using Talend, Ab Initio, Alteryx, and Informatica, transforming legacy ETL workflows into cloud-native pipelines.
  • Implement data quality, lineage, and governance frameworks (e.g., Unity Catalog, Data Stewardship, Data Catalog, MDM).
  • Ensure compliance with global data privacy regulations, including GDPR and CCPA.
  • Design and optimize Feature Store Data Marts to support AI/ML and business intelligence use cases.
  • Collaborate with cross-functional teams including Data Scientists, BI Developers, and Product Owners to deliver data-driven solutions.
  • Mentor junior engineers in data engineering best practices, CI/CD processes, and infrastructure-as-code methodologies.
  • Leverage caching technologies such as Redis to enhance performance and reduce latency in analytics workloads.
  • Own end-to-end delivery of assigned modules, including documentation, performance tuning, and quality assurance.
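As a rough illustration of the metadata-driven, schema-drift-tolerant ingestion described above, here is a minimal pure-Python sketch. All names (`merge_schema`, `ingest`, the crude type inference) are hypothetical; a production pipeline would typically rely on PySpark with Databricks Auto Loader's schema evolution rather than hand-rolled logic like this.

```python
from typing import Any


def merge_schema(known: dict[str, str], record: dict[str, Any]) -> dict[str, str]:
    """Widen a known schema with any new columns found in an incoming record.

    Schema drift (a source adding or retyping columns) is absorbed
    instead of failing the load.
    """
    merged = dict(known)
    for column, value in record.items():
        inferred = type(value).__name__  # crude type inference for the sketch
        if column not in merged:
            merged[column] = inferred    # new column: evolve the schema
        elif merged[column] != inferred:
            merged[column] = "str"       # type conflict: fall back to string
    return merged


def ingest(records: list[dict[str, Any]],
           schema: dict[str, str]) -> tuple[list[dict[str, Any]], dict[str, str]]:
    """Metadata-driven ingestion: conform every record to the evolving schema."""
    rows = []
    for record in records:
        schema = merge_schema(schema, record)
        # Missing columns are null-filled so downstream consumers see a stable shape.
        rows.append({column: record.get(column) for column in schema})
    return rows, schema
```

For example, if a source batch suddenly starts emitting an extra `region` column, the load continues and the schema metadata is widened rather than the job failing.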

Technical Environment

Cloud Platforms:

  • Microsoft Azure (Data Factory, Synapse, Data Lake, Databricks)
  • AWS (S3, EC2, Glue, Redshift)

Data Engineering Tools:

  • Databricks (DLT, Auto Loader, Unity Catalog)
  • Talend, Ab Initio, Informatica, Alteryx

Programming Languages:

  • Python, PySpark, SQL, Unix Shell Scripting

Data Modeling:

  • Dimensional, Relational, Medallion Architecture

DevOps / CI-CD:

  • GitHub Actions, Azure DevOps, Databricks Asset Bundles (DABs)

Data Governance & Security:

  • RBAC/ABAC, Data Stewardship, MDM, GDPR/CCPA Compliance

Monitoring & Scheduling:

  • Airflow, Logic Apps, Azure Monitor
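To make the Medallion Architecture listed above concrete, here is a minimal sketch of the bronze → silver → gold flow. Plain Python lists and dicts stand in for what would be Delta Lake tables in Databricks, and the field names (`order_id`, `country`, `amount`) are purely illustrative.

```python
def to_silver(bronze: list[dict]) -> list[dict]:
    """Bronze -> silver: drop malformed raw rows and normalize types."""
    silver = []
    for row in bronze:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # malformed raw row: excluded from the cleaned layer
        silver.append({
            "order_id": int(row["order_id"]),
            "country": str(row.get("country", "unknown")).upper(),
            "amount": float(row["amount"]),
        })
    return silver


def to_gold(silver: list[dict]) -> dict[str, float]:
    """Silver -> gold: aggregate into a consumption-ready mart (revenue by country)."""
    gold: dict[str, float] = {}
    for row in silver:
        gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]
    return gold
```

The design point is that each layer has a single responsibility: bronze preserves raw input, silver enforces quality and typing, and gold serves aggregated, analytics-ready data.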

Required Qualifications

  • 12+ years of experience in Data Engineering, Data Architecture, or ETL development
  • Proven track record leading Azure- and Databricks-based modernization or migration programs
  • Deep expertise in data integration tools such as Talend, Informatica, Ab Initio, and Alteryx
  • Hands-on experience with PySpark, SQL, and Azure data services
  • Strong understanding of data modeling, data quality, and governance frameworks
  • Experience with real-time and batch data ingestion technologies (Kafka, Kinesis, or Event Hub)
  • Excellent problem-solving, communication, and stakeholder management skills

Preferred Qualifications

  • Bachelor's degree in Computer Science or a related field
  • Strong analytical and critical thinking abilities
  • Azure Data Engineer Certification or equivalent
  • Familiarity with AI/ML data preparation and Feature Store design
  • Experience in cost optimization and cluster governance within Databricks

Why Join Us

  • Be part of a large-scale data modernization journey with a global enterprise brand
  • Work with cutting-edge cloud and data engineering technologies
  • Collaborate with a highly skilled, cross-functional team of architects, analysts, and data scientists
  • Opportunity to lead innovation in a multi-domain data transformation initiative

Job ID: 147427407
