
Data Engineer-Data Platforms-AWS

  • Posted 18 days ago
  • Over 100 applicants

Job Description

Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities

  • Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive.
  • Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL.
  • Utilize scheduling tools like Airflow, Control M, or shell scripting to automate data pipelines and workflows.
  • Write efficient code in Python and/or Scala for data manipulation and processing tasks.
  • Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions (see the sketch after this list).
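By way of illustration only, here is a minimal sketch of the kind of pipeline step described above: a PySpark job, as might run on an EMR cluster, that reads raw events from S3, aggregates them per customer per day, and writes Parquet output suitable for loading into Redshift. The bucket paths, column names, and job name are hypothetical, not taken from this posting.

    # Hypothetical PySpark step: read raw JSON events from S3, roll them up
    # per customer per day, and write Parquet that Redshift can later COPY.
    # Paths and column names are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

    raw = spark.read.json("s3://example-raw-bucket/events/")

    daily = (
        raw.groupBy("customer_id", F.to_date("event_ts").alias("event_date"))
           .agg(F.count("*").alias("event_count"))
    )

    daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_rollup/")
    spark.stop()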

Preferred Education

Master's Degree

Required Technical And Professional Expertise

  • Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive.
  • Strong understanding of AWS services, particularly S3, Redshift, and EMR.
  • Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization.
  • Experience using scheduling tools such as Airflow, Control M, or shell scripting (a scheduling sketch follows this list).
  • Practical experience in Python and/or Scala programming languages.
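As a rough illustration of the scheduling side, the sketch below shows a minimal Airflow DAG (Airflow 2.4+ syntax assumed) that submits a Spark job once a day via spark-submit. The DAG id, schedule, and script location are assumptions for illustration, not details from this posting.

    # Hypothetical Airflow DAG that runs the Spark rollup job daily.
    # DAG id, schedule, and script path are illustrative only.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_event_rollup",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        BashOperator(
            task_id="spark_submit_rollup",
            bash_command=(
                "spark-submit --deploy-mode cluster "
                "s3://example-code-bucket/jobs/daily_event_rollup.py"
            ),
        )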

Job ID: 141664557