
Hadoop Lead

7-12 Years
  • Posted a day ago

Job Description

This is a Big Data Engineer role with a strong focus on data warehousing and analytics within the AWS cloud platform. The position requires experience in building and managing data pipelines, using a range of technologies for data transformation, and leading projects from design to implementation.

Key Responsibilities

  • Data Pipeline & Transformation: You'll build and manage data pipelines using technologies such as Apache Kafka, Storm, Spark, or AWS Lambda. The role requires at least 2 years of experience writing PySpark for data transformation, and you'll work with terabyte-scale data sets using relational databases and SQL.
  • Data Warehousing & ETL: The position demands at least 2 years of experience with data warehouse technical architectures, ETL/ELT processes, and data security. You'll be responsible for designing data warehouse solutions and integrating various technical components.
  • Project Leadership: You'll have 2 or more years of experience leading data warehousing and analytics projects, using AWS technologies such as Redshift, S3, and EC2.
  • Methodologies & Tools: You'll use Agile/Scrum methodologies to iterate on product changes and work through backlogs. Exposure to reporting tools like QlikView or Tableau is a plus, as is familiarity with Linux/Unix scripting.
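As a loose illustration of the ETL/ELT work the responsibilities above describe, the sketch below extracts raw rows, loads them, and pushes an aggregation down to SQL. All table and column names are hypothetical, and an in-memory SQLite database stands in for the warehouse; a production pipeline of the kind this role covers would run comparable SQL through PySpark or Redshift instead.

```python
import sqlite3

# Extract: raw event rows as they might arrive from an upstream source
# (hypothetical schema for illustration only).
raw_events = [
    ("2024-01-01", "user_a", 120),
    ("2024-01-01", "user_b", 80),
    ("2024-01-02", "user_a", 200),
]

# Load: stage the raw rows into a table. SQLite stands in here for a
# real warehouse such as Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_date TEXT, user_id TEXT, amount INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", raw_events)

# Transform: aggregate into a daily summary -- the kind of SQL step an
# ELT job pushes down to the warehouse engine.
daily = conn.execute(
    """
    SELECT event_date, COUNT(*) AS n_events, SUM(amount) AS total_amount
    FROM events
    GROUP BY event_date
    ORDER BY event_date
    """
).fetchall()

print(daily)  # [('2024-01-01', 2, 200), ('2024-01-02', 1, 200)]
```

The same GROUP BY transformation expressed through PySpark's DataFrame API would scale to the terabyte data sets mentioned above, with the warehouse rather than the driver doing the heavy lifting.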

More Info

Open to candidates from: India


Job ID: 125371601