
  • Posted 26 days ago
  • Over 50 applicants

Job Description

  • Developing ETL pipelines involving big data.
  • Developing data processing/analytics applications, primarily using PySpark.
  • Experience developing applications on the cloud (AWS), primarily using services for storage, compute, ETL, DWH, analytics, and streaming.
  • Clear understanding of, and ability to implement, distributed storage, distributed processing, and scalable applications.
  • Experience working with SQL and NoSQL databases.
  • Ability to write and analyze queries in SQL, HQL, and other query languages for NoSQL databases.
  • Proficiency in writing distributed, scalable data-processing code using PySpark, Python, and related libraries.
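The responsibilities above describe extract-transform-load work. As a minimal illustration of that flow, the sketch below uses plain Python lists and dicts (a real PySpark job would use DataFrames reading from S3 or a warehouse; the data and field names here are purely hypothetical):

```python
# Minimal ETL sketch: extract -> transform -> load.
# Plain Python stands in for PySpark DataFrames so the example
# stays self-contained; all data here is made up.

def extract():
    # Extract: in practice, read from S3, a DWH table, or a stream.
    return [
        {"user": "a", "clicks": 3},
        {"user": "b", "clicks": None},  # dirty record
        {"user": "a", "clicks": 5},
    ]

def transform(rows):
    # Transform: drop dirty records, then aggregate clicks per user.
    totals = {}
    for row in rows:
        if row["clicks"] is None:
            continue
        totals[row["user"]] = totals.get(row["user"], 0) + row["clicks"]
    return totals

def load(totals, sink):
    # Load: in practice, write to a warehouse or serving table.
    sink.update(totals)

sink = {}
load(transform(extract()), sink)
print(sink)  # {'a': 8}
```

In PySpark the transform step would typically be a `filter` followed by a `groupBy(...).agg(...)` on a DataFrame, but the shape of the pipeline is the same.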

Data Engineer AEP Competency

  • Experience developing applications that consume services exposed as REST APIs.
  • Special consideration given for experience working with container-orchestration systems like Kubernetes.
  • Experience working with enterprise-grade ETL tools.
  • Experience/knowledge of Adobe Experience Cloud solutions.
  • Experience/knowledge of Web Analytics or Digital Marketing.
  • Experience/knowledge of Google Cloud platforms.
  • Experience/knowledge of Data Science, ML/AI, R, or Jupyter.
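The first bullet above mentions consuming services exposed as REST APIs. A minimal sketch of such a call with Python's standard library is shown below; the endpoint, payload, and token are hypothetical, and the request is built but not sent so the example needs no network access:

```python
import json
import urllib.request

# Hypothetical endpoint, shown only to illustrate a typical REST call.
API_URL = "https://api.example.com/v1/segments"

def build_request(token: str) -> urllib.request.Request:
    # Typical REST consumption pattern: JSON body plus a
    # bearer-token Authorization header.
    body = json.dumps({"name": "high-value-users"}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("dummy-token")
# urllib.request.urlopen(req) would actually send the call;
# it is omitted here so the sketch runs offline.
print(req.get_method(), req.full_url)
```

In production code a client library such as `requests` is more common, but the headers, method, and JSON body are the same regardless of the HTTP client used.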

More Info

Open to candidates from: India

About Company

We are a trusted digital product engineering partner, enabling organizations to embrace the digital revolution and stay ahead of the competition. With our deep industry knowledge, agile methodologies, and a relentless pursuit of excellence, we collaborate with businesses to engineer digital solutions that drive operational efficiency, enrich customer experiences, and unleash growth potential.

Job ID: 119043469
