
  • Posted 14 days ago

Job Description

  • Build & Optimize Pipelines: Develop high-throughput ETL workflows using PySpark on Databricks.
  • Data Architecture & Engineering: Work on distributed computing solutions, optimize Spark jobs, and build efficient data models.
  • Performance & Cost Optimization: Fine-tune Spark configurations, optimize Databricks clusters, and reduce compute/storage costs.
  • Collaboration: Work closely with Data Scientists, Analysts, and DevOps teams to ensure data reliability.
  • ETL & Data Warehousing: Implement scalable ETL processes for structured & unstructured data.
  • Monitoring & Automation: Implement logging, monitoring, and alerting mechanisms for data pipeline health and fault tolerance.
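The performance and cost bullet above centers on tuning Spark configurations on Databricks. As a minimal illustration (not part of the posting; the chosen values are hypothetical and workload-dependent, though the property names are real Spark/Databricks settings), a tuning profile might be assembled like this before building a `SparkSession`:

```python
# Illustrative Spark/Databricks tuning knobs of the kind this role works with.
# Values are placeholders; real settings depend on data volume and cluster size.
spark_conf = {
    # Adaptive Query Execution lets Spark re-optimize shuffle sizes at runtime
    "spark.sql.adaptive.enabled": "true",
    # Right-size shuffle partitions instead of the 200-partition default
    "spark.sql.shuffle.partitions": "64",
    # Compact small files on write (a Databricks Delta option)
    "spark.databricks.delta.optimizeWrite.enabled": "true",
}

def to_builder_args(conf):
    """Flatten the dict into sorted (key, value) pairs for SparkSession.builder.config."""
    return sorted(conf.items())

# In a real job these pairs would be applied roughly as:
#   builder = SparkSession.builder.appName("etl")
#   for key, value in to_builder_args(spark_conf):
#       builder = builder.config(key, value)
#   spark = builder.getOrCreate()
```

Keeping the configuration in one dict makes the cost/performance profile easy to review and reuse across pipelines, which is the spirit of the optimization work described above.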

More Info

Open to candidates from: India

About Company

About Cognizant
Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we're improving everyday life. See how at www.cognizant.com or @cognizant.

Job ID: 133048597
