
Softview Infotech

Big Data Engineer (Cloudera, Hadoop, Hive)

Posted 10 hours ago

Job Description

Role: Big Data Engineer

Experience: 4-8 Years

Location: Chennai, Bangalore

Key Skills: Big Data, Cloudera, Hadoop, Hive

About The Role

We are looking for a skilled Data Engineer with strong expertise in the Cloudera platform, Hadoop ecosystem, and Hive. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data warehouse solutions to support business analytics and reporting.

Key Responsibilities

  • Design, develop, and maintain robust data pipelines using the Hadoop ecosystem
  • Work extensively with the Cloudera platform for data processing and management
  • Develop and optimize complex Hive queries for large-scale data processing
  • Perform data ingestion, transformation, and validation from multiple data sources
  • Optimize Hadoop jobs and Hive queries for performance and scalability
  • Collaborate with data analysts, architects, and stakeholders to gather requirements
  • Ensure data quality, integrity, and consistency across systems
  • Monitor, troubleshoot, and resolve data pipeline issues
  • Follow best practices in data engineering, coding standards, and documentation

Required Skills & Qualifications

  • 4+ years of experience as a Data Engineer or in a similar role
  • Hands-on experience with Cloudera Distribution (CDH/CDP)
  • Strong knowledge of Hadoop ecosystem components (HDFS, Hive, YARN, MapReduce)
  • Proficiency in Hive Query Language (HQL)
  • Strong understanding of SQL and data warehousing concepts
  • Experience in ETL development and data processing
  • Basic scripting knowledge (Python or Shell)
  • Experience in performance tuning and query optimization
  • Understanding of distributed data processing concepts

Preferred Skills

  • Experience with Spark or PySpark
  • Exposure to workflow orchestration tools like Oozie or Airflow
  • Knowledge of Unix/Linux environments
  • Experience with cloud platforms such as AWS or Azure
  • Familiarity with Agile methodologies

Educational Qualification

  • Bachelor's degree in Computer Science, Information Technology, or a related field

Skills: Hadoop, Hive, Big Data, Cloudera, Big Data Engineer

Job ID: 145515345
