
Job Description

Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, and GCP databases on the AWS Cloud Platform and REST APIs.
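
A minimal PySpark sketch of this kind of ingest-and-transform pipeline, assuming placeholder file paths, bucket names, and JDBC connection details:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-transform-sketch").getOrCreate()

# Ingest from a file source (placeholder S3 path)
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

# Ingest from a relational database over JDBC (placeholder connection details)
customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/sales")
    .option("dbtable", "public.customers")
    .option("user", "reporting")
    .option("password", "********")
    .load()
)

# Transform: join, derive a date column, and aggregate revenue per day and country
daily_revenue = (
    orders.join(customers, "customer_id")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"))
)

# Persist the processed data as partitioned Parquet (placeholder output location)
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```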

Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework and Python.

Experience in developing streaming pipelines and exposure to services such as the following (a short sketch after the list shows how several of these fit together):

* Glue

* Lambda

* S3

* SQL

* Redshift

* CloudWatch

* Secrets Manager

* Experience with distributed computing, data modeling, schema handling, and performance tuning.

* Strong debugging and problem-solving skills.
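
A minimal sketch, assuming a placeholder bucket and secret name, of how several of the services above commonly fit together in an event-driven pipeline: an S3-triggered Lambda handler that reads a credential from Secrets Manager and emits logs that land in CloudWatch.

```python
import json
import boto3

s3 = boto3.client("s3")
secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetch a placeholder secret (e.g. a Redshift credential) once per invocation
    secret = json.loads(
        secrets.get_secret_value(SecretId="example/redshift/credentials")["SecretString"]
    )

    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the newly arrived object and count its lines as a stand-in
        # for real processing (e.g. staging it for a downstream load)
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        line_count = body.count(b"\n")

        # print() output from a Lambda function is captured by CloudWatch Logs
        print(json.dumps({
            "bucket": bucket,
            "key": key,
            "lines": line_count,
            "db_user": secret.get("username"),
        }))

    return {"processed": len(event.get("Records", []))}
```

In a fuller pipeline, the staged object would typically be loaded into Redshift via a COPY command or processed by a Glue job.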

Job ID: 138806833