
Digital Impetus

Data Engineer

  • Posted 6 hours ago

Job Description

Role: GCP + Java / Big Data Engineer

Experience: 1–4 Years

Location: Chennai (Hybrid/On-site)

Role Overview:

We are seeking a passionate and detail-oriented Data Engineer with hands-on experience in GCP, Java, and Big Data technologies. In this role, you will be responsible for designing and developing scalable, high-performance data processing systems and cloud-based data pipelines.

You will work closely with cross-functional teams including Data Scientists, Analysts, and Cloud Engineers to build robust data solutions that drive business insights and decision-making.

Key Responsibilities:

  • Design, build, and maintain scalable ETL/ELT pipelines using Google Cloud Platform services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage
  • Develop and optimize data processing applications using Java for high-volume, low-latency environments
  • Leverage Apache Spark / PySpark for distributed data processing and transformation
  • Write complex, optimized SQL queries against large datasets, ensuring performance and efficiency
  • Work with structured and unstructured data, ensuring data quality, validation, and governance
  • Collaborate with business stakeholders to understand data requirements and translate them into technical solutions
  • Perform performance tuning, debugging, and troubleshooting of data pipelines and applications
  • Contribute to CI/CD pipelines, version control, and deployment automation in cloud environments
  • Ensure adherence to best practices in data engineering, security, and scalability
  • Stay updated with emerging trends in Big Data, Cloud, and AI technologies

Primary Skills (Must Have):

  • Hands-on experience with Google Cloud Platform (GCP): BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
  • Strong programming skills in Java
  • Solid understanding of Big Data ecosystem (Hadoop, Spark, distributed computing concepts)

Secondary Skills (Good to Have):

  • Proficiency in SQL for data querying and transformation
  • Experience with Apache Spark / PySpark
  • Working knowledge of Python for scripting and data processing

Additional Skills (Nice to Have):

  • Exposure to Generative AI (Gen AI) concepts, tools, or use cases
  • Familiarity with machine learning pipelines or AI integrations
  • Understanding of data governance, data security, and compliance practices

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or a related field
  • 1–4 years of relevant experience in Data Engineering / Big Data / Cloud

Key Competencies:

  • Strong analytical and problem-solving skills
  • Ability to work in a fast-paced, collaborative environment
  • Good communication and stakeholder management skills
  • Proactive mindset with a strong eagerness to learn and grow

Job ID: 146849681
