Data Engineer-Data Platforms

Job Description

Introduction

A career in IBM Consulting is built on long-term client relationships and close collaboration worldwide. You'll work with leading companies across industries, helping them shape their hybrid cloud and AI journeys. With support from our strategic partners, robust IBM technology, and Red Hat, you'll have the tools to drive meaningful change and accelerate client impact. At IBM Consulting, curiosity fuels success. You'll be encouraged to challenge the norm, explore new ideas, and create innovative solutions that deliver real results. Our culture of growth and empathy focuses on your long-term career development while valuing your unique skills and experiences.

Your Role And Responsibilities

As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing.

Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In This Role, Your Responsibilities May Include

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating Source-to-Target pipelines and workflows and implementing solutions that address clients' needs.

Preferred Education

Master's Degree

Required Technical And Professional Expertise

  • Big Data development with Hadoop, Hive, Spark, and PySpark; strong SQL skills.
  • Ability to incorporate a variety of statistical and machine learning techniques; basic understanding of cloud platforms (AWS, Azure, etc.).
  • Ability to use programming languages such as Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer.
  • Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
  • Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.
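The Source-to-Target pipeline work described above follows the classic Extract, Transform, Load pattern. A minimal sketch in plain Python (an illustration only, assuming an in-memory source and target; in a real engagement the source and target would be systems like HDFS, Hive, or a cloud store, and the transforms would typically run on Spark):

```python
# Minimal ETL sketch: extract records from a source, transform them,
# and load them into a target. The function names and record shape
# here are hypothetical, chosen for illustration.

def extract(source):
    """Read raw rows from the source system."""
    return list(source)

def transform(rows):
    """Normalize fields and drop incomplete records."""
    cleaned = []
    for row in rows:
        if row.get("id") is None:
            continue  # skip records without a primary key
        cleaned.append({
            "id": row["id"],
            "name": row.get("name", "").strip().title(),
        })
    return cleaned

def load(rows, target):
    """Append transformed rows to the target store; return the row count."""
    target.extend(rows)
    return len(rows)

source = [{"id": 1, "name": "  ada lovelace "}, {"id": None, "name": "x"}]
target = []
loaded = load(transform(extract(source)), target)
```

In a PySpark setting the same three stages map onto reading a DataFrame, applying transformations, and writing to the target, but the staged structure stays the same.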

Preferred Technical And Professional Experience

  • Basic understanding of, or experience with, predictive/prescriptive modeling.
  • You thrive on teamwork and have excellent verbal and written communication skills.
  • Ability to communicate with internal and external clients to understand and define business needs and provide analytical solutions.


Job ID: 138602861