
  • Posted 2 days ago

Job Description

Duties and Responsibilities

  • Design and implement data pipelines using Big Data technologies such as Hadoop and Databricks.
  • Develop and optimize data processing workflows using Scala and PySpark.
  • Collaborate with data scientists and analysts to understand data requirements and deliver high-quality data solutions.
  • Monitor and troubleshoot data processing systems to ensure optimal performance and reliability.
  • Document data architecture and processes for future reference and compliance.
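To make the day-to-day work concrete, here is a minimal sketch of the kind of PySpark batch pipeline the duties describe: read raw records, apply a cleaning transform, and write partitioned output. The paths, column names, and SparkSession setup are illustrative assumptions, not part of the posting.

```python
# Illustrative PySpark pipeline sketch (paths and columns are hypothetical).

def parse_amount(raw: str) -> float:
    """Pure helper: parse a currency string such as '$1,234.50' into a float."""
    return float(raw.replace("$", "").replace(",", ""))

def run_pipeline(spark, input_path: str, output_path: str) -> None:
    # Spark imports are kept local so the pure helper above is testable
    # without a Spark installation.
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType

    parse_udf = F.udf(parse_amount, DoubleType())

    (spark.read.parquet(input_path)                            # extract
          .withColumn("amount", parse_udf(F.col("amount_raw")))  # transform
          .filter(F.col("amount") > 0)
          .write.mode("overwrite")
          .partitionBy("event_date")                           # load
          .parquet(output_path))

if __name__ == "__main__":
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("example-pipeline").getOrCreate()
    run_pipeline(spark, "/data/raw/events", "/data/curated/events")
```

In practice the same transform logic could also be written in Scala against the Dataset API; keeping cleaning rules in small pure functions, as above, makes them easy to unit-test outside the cluster.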

Qualifications and Requirements

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 2-6 years of experience in Big Data engineering or a related role.
  • Proficiency in Big Data technologies, including Hadoop, Databricks, and PySpark.
  • Strong programming skills in Scala.
  • Experience with data modeling and data warehousing concepts.


Job ID: 138357295