
Impetus Consultrainers

Java GCP Data Engineer

  • Posted 4 hours ago

Job Description

About the Organization:

Impetus Technologies is a digital engineering company focused on delivering expert services and products to help enterprises achieve their transformation goals. We solve the analytics, AI, and cloud puzzle, enabling businesses to drive unmatched innovation and growth.

Founded in 1991, we are cloud and data engineering leaders providing solutions to Fortune 100 enterprises. Headquartered in Los Gatos, California, we have over 4,000 global team members, with development centers in Noida, Indore, Gurugram, Bengaluru, Pune, Chennai, and Hyderabad, as well as offices in Canada and Australia. We collaborate with a number of established companies, including American Express, Bank of America, Capital One, Toyota, United Airlines, and Verizon.

Location: Chennai

Job Summary:

We are looking for skilled Java GCP Data Engineers with 1–4 years of hands-on experience in building scalable, high-performance data solutions. The ideal candidate will have strong expertise in Java-based big data processing frameworks and deep exposure to Google Cloud Platform (GCP) services for modern data engineering workloads.

Key Skills & Experience:

  • Strong programming expertise in Java, with experience in building distributed data processing applications
  • Hands-on experience with Big Data technologies such as Apache Spark (Java/Scala APIs), Hadoop, and Hive
  • Experience with Spark (DataFrame/Spark SQL) using Java or Scala (PySpark knowledge is a plus but not primary)
  • Solid understanding of data structures, algorithms, and object-oriented programming in Java
  • Strong knowledge of SQL, data modeling, and data warehousing concepts
  • Experience working with Linux/Unix environments and scripting (Bash or similar)
  • Proven analytical and problem-solving skills, especially in debugging and optimizing data pipelines
  • Ability to design and build scalable, fault-tolerant data processing systems

Preferred / Good to Have:

  • Hands-on experience with GCP services such as BigQuery, Dataflow (Apache Beam with Java), Dataproc, Cloud Storage, Pub/Sub, and IAM
  • Experience with workflow orchestration tools like Airflow or Cloud Composer
  • Exposure to cloud migration projects, especially transitioning from on-premises Hadoop ecosystems to GCP
  • Familiarity with streaming data pipelines using Pub/Sub and Dataflow
  • Understanding of CI/CD pipelines and DevOps practices in a cloud environment

Roles & Responsibilities:

  • Design, develop, and maintain scalable ETL/ELT pipelines using Java-based big data frameworks on GCP
  • Build and optimize batch and streaming data processing solutions using Dataproc, Dataflow, and Spark
  • Ensure high-quality, efficient, and maintainable code by following best practices and coding standards
  • Perform unit testing, integration testing, and troubleshoot complex data pipeline issues
  • Collaborate with cross-functional teams to understand data requirements and deliver robust solutions
  • Estimate development efforts and contribute to sprint planning and delivery
  • Participate in code reviews and mentor junior team members where required
  • Design cost-optimized and performance-efficient architectures leveraging GCP-native services

For a quick response, interested candidates can share their resume directly, along with details such as notice period, current CTC, and expected CTC, at [Confidential Information]

Job ID: 146435841
