  • Posted 11 days ago
  • Over 50 applicants
Job Description

  • To design, develop, and maintain large-scale data pipelines that can handle large datasets from multiple sources.
  • To implement real-time data replication and batch processing using distributed computing platforms such as Spark and Kafka.
  • To optimize the performance of data processing jobs and ensure system scalability and reliability.
  • To collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS.
  • To collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting.
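To give a concrete sense of the batch-processing work described above, here is a minimal extract-transform-load sketch in plain Python. At the scale this role describes, Spark would perform this aggregation across a cluster; the record fields (`source`, `date`, `amount`) are purely illustrative.

```python
from collections import defaultdict

def transform(records):
    """Aggregate raw event records into per-source daily totals.

    Each record is a dict with hypothetical fields: 'source', 'date', 'amount'.
    """
    totals = defaultdict(float)
    for rec in records:
        # Skip malformed records rather than failing the whole batch.
        if rec.get("amount") is None:
            continue
        totals[(rec["source"], rec["date"])] += rec["amount"]
    # Emit one output row per (source, date) pair, sorted for stable output.
    return [
        {"source": s, "date": d, "total": t}
        for (s, d), t in sorted(totals.items())
    ]

raw = [
    {"source": "kafka", "date": "2024-01-01", "amount": 10.0},
    {"source": "kafka", "date": "2024-01-01", "amount": 5.0},
    {"source": "s3", "date": "2024-01-01", "amount": None},  # malformed row
]
print(transform(raw))
```

The same filter-then-aggregate shape maps directly onto a PySpark `filter`/`groupBy` pipeline once the data no longer fits on one machine.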

Requirements

  • Hands-on experience with AWS services such as S3, DMS, Lambda, EMR, Glue, Redshift, RDS (Postgres), Athena, Kinesis, etc.
  • Expertise in data modeling and knowledge of modern file and table formats.
  • Proficiency in programming languages such as Python, PySpark, and SQL/PLSQL for implementing data pipelines and ETL processes.
  • Experience architecting data solutions or deploying cloud/virtualization solutions (e.g., data lake, EDW, data mart) in the enterprise.
  • Experience designing cloud/hybrid-cloud (preferably AWS) solutions supporting a data strategy across data lakes, BI, and analytics.
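As an illustration of the SQL proficiency the requirements call for, the sketch below uses Python's built-in `sqlite3` as a lightweight stand-in for a warehouse engine such as Redshift or Athena. The `orders` table and its columns are invented for the example.

```python
import sqlite3

# An in-memory database stands in for a cloud data warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# A typical reporting aggregation a BI stakeholder might request.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 150.0), ('south', 75.0)]
conn.close()
```

The same `GROUP BY` query runs unchanged against Redshift or Athena; only the connection layer differs.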

What's in it for you

  • A stimulating working environment with equal employment opportunities.
  • Grow your skills while working with industry leaders and top brands.
  • A meritocratic culture with great career progression.

More Info

Open to candidates from: India

Job ID: 119871863
