
Matrix Hr Technologies Private Limited

Senior Data Engineer [Big Data & AWS]

8-15 Years

Job Description

Senior Data Engineer (Big Data & AWS)

Experience:

  • 8-12 years of experience designing and building high-volume, high-performance data pipelines

Education:

  • Bachelor's degree in Computer Science, Engineering, Information Systems, or related field


Role Overview:

  • Design, develop, and deploy scalable data solutions
  • Work on high-volume data processing within Big Data and AWS ecosystems
  • Collaborate with cross-functional teams to support business and product use cases


Key Responsibilities:

  • Collaborate with Data & Analytics teams to gather technical requirements
  • Build and maintain scalable ETL pipelines on Big Data / Cloud platforms
  • Handle data collection, storage, processing, and transformation of large datasets
  • Solve complex data engineering problems and improve architecture
  • Support production issues related to application functionality and integrations

Must-Have Skills:

  • Strong communication skills (verbal and written)
  • Expertise in Big Data and Cloud data ecosystems
  • Experience in building high-volume data pipelines using Spark / Big Data technologies
  • Strong programming skills in Python / Scala
  • Advanced SQL skills with ability to write complex and efficient queries
  • Hands-on experience with the AWS data stack: S3, AWS Glue, Lambda, Athena, Redshift, EMR
  • Experience in building Data Lakes
  • Exposure to real-time/streaming data processing: Kafka, Spark Streaming, AWS Kinesis
  • Strong understanding of Data Warehouse concepts
  • Experience in data modelling and database design (Data Lake preferred)
  • Strong analytical and problem-solving skills
  • Ability to manage multiple tasks under tight deadlines

Good to Have:

  • Familiarity with DevOps/DataOps practices (CI/CD pipelines)
  • Experience working in Agile/Scrum environments (Jira)
  • Knowledge of Hadoop/HDFS
  • Experience with data modelling tools like ERWin, Visio

Tech Stack:

  • Python, PySpark, Scala
  • Spark (Big Data processing)
  • AWS Services: S3, Glue, Redshift, EMR, Lambda, Athena
  • Data Lake / Lakehouse architecture

Soft Skills:

  • Excellent communication and collaboration skills
  • Strong team player
  • Ability to mentor junior engineers
  • Comfortable working in Agile environments
  • Ability to work with minimal guidance in ambiguous situations

More Info

Open to candidates from:
Indian

Job ID: 144928515