
AWS Glue / PySpark Developer

  • Posted 11 hours ago

Job Description

Roles & Responsibilities

  • Data Ingestion: Implement data ingestion pipelines from various data sources like databases, S3, and files.
  • ETL/Data Warehousing: Build ETL/Data Warehouse transformation processes.
  • Solution Development: Develop Big Data and non-Big Data cloud-based enterprise solutions using PySpark, SparkSQL, and related frameworks/libraries.
  • Framework Development: Develop scalable, reusable, self-service frameworks for data ingestion and processing.
  • End-to-End Integration: Integrate end-to-end data pipelines to move data from source to target repositories, ensuring data quality and consistency.
  • Performance Optimization: Conduct processing performance analysis and optimization.
  • Best Practices: Bring best practices in areas such as design & analysis, automation (Pipelining, IaC), testing, monitoring, and documentation.
  • Data Handling: Work with both structured and unstructured data.
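The responsibilities above can be sketched as a small, reusable pipeline framework. This is a minimal plain-Python illustration of the ingest → quality-check → transform pattern, not the employer's actual stack; in a real AWS Glue job the same steps would be expressed as PySpark DataFrame transformations. All names here (`Pipeline`, `drop_nulls`, `cast_int`) are hypothetical.

```python
# Hypothetical sketch of a reusable ingestion/processing framework.
# A production version would wrap PySpark DataFrames; plain dicts keep
# this example self-contained and runnable.
from typing import Callable, Iterable

Record = dict
Step = Callable[[Iterable[Record]], Iterable[Record]]

class Pipeline:
    """Chains reusable steps so one framework serves many sources."""
    def __init__(self) -> None:
        self.steps: list[Step] = []

    def add(self, step: Step) -> "Pipeline":
        self.steps.append(step)
        return self  # fluent style, so pipelines read like a recipe

    def run(self, records: Iterable[Record]) -> list[Record]:
        data = records
        for step in self.steps:
            data = step(data)
        return list(data)

def drop_nulls(field: str) -> Step:
    # Data-quality step: discard records missing a required field.
    def step(records):
        return (r for r in records if r.get(field) is not None)
    return step

def cast_int(field: str) -> Step:
    # Transformation step: coerce one column to int.
    def step(records):
        for r in records:
            r = dict(r)  # avoid mutating the caller's records
            r[field] = int(r[field])
            yield r
    return step

# Usage: ingest raw rows, enforce quality, transform, then "load".
raw = [
    {"id": "1", "name": "a"},
    {"id": "2", "name": None},
    {"id": "3", "name": "c"},
]
clean = Pipeline().add(drop_nulls("name")).add(cast_int("id")).run(raw)
print(clean)
```

Because each step is an independent callable, new sources or rules plug in without touching the framework itself, which is the essence of the "self-service" requirement.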

Good to Have (Knowledge)

  • Experience in cloud-based solutions.
  • Knowledge of data management principles.

More Info

Open to candidates from: India


Job ID: 119837883
