
  • Posted 11 hours ago

Job Description

Roles & Responsibilities:

  • Design, develop, and maintain scalable data pipelines for large-volume datasets on GCP
  • Build and optimize ETL workflows using Python and PySpark for data processing and transformation
  • Develop and manage data solutions using GCP services such as BigQuery, Dataflow, and Dataproc
  • Write optimized SQL queries for data extraction, transformation, and analytics in BigQuery
  • Work on data warehousing concepts, data modeling, and data architecture design
  • Ensure smooth data flow across systems and maintain data integrity and consistency
  • Collaborate with cross-functional teams to support validation, testing, and project execution
  • Review and implement data policies and procedures, ensuring compliance with AEMP70 standards
  • Monitor and manage data systems performance and optimize pipeline efficiency
  • Support continuous improvement of data engineering processes and best practices 
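The transform-and-validate work described above can be sketched in plain Python. This is an illustrative stand-in only: all function and field names below are hypothetical, and a production pipeline at this scale would use PySpark DataFrames with BigQuery, Dataflow, or Dataproc rather than in-memory lists.

```python
# Minimal ETL transform sketch (illustrative; names are hypothetical).
# A real pipeline would express this as PySpark DataFrame operations
# running on Dataproc, loading results into BigQuery.

from datetime import date


def transform(records):
    """Clean raw records: drop rows missing an id, parse the event date,
    and cast the amount to float (basic integrity/consistency checks)."""
    cleaned = []
    for rec in records:
        if not rec.get("id"):
            continue  # integrity check: skip rows without a primary key
        cleaned.append({
            "id": rec["id"],
            "event_date": date.fromisoformat(rec["event_date"]),
            "amount": float(rec["amount"]),
        })
    return cleaned


raw = [
    {"id": "a1", "event_date": "2024-05-01", "amount": "19.99"},
    {"id": None, "event_date": "2024-05-02", "amount": "5.00"},  # dropped
]
print(transform(raw))
```

The same shape carries over to PySpark: the `if not rec.get("id")` guard becomes a `filter`, and the per-field casts become `withColumn` expressions, which is what lets the logic scale to large-volume datasets.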

More Info

Open to candidates from: Indian

About Company

Apptad offers strategic consulting, enterprise information management, and digital transformation services. With globally connected offices in the US and India and a team of trained, certified IT professionals, Apptad ensures quick and effective delivery to its customers, and is relentlessly reinventing how companies leverage data.

We strive to enable our customers to solve the biggest problems within their organizations. We take the time to understand our clients' problems and respond with custom solutions rather than boilerplate answers.

Job ID: 145634521
