
We are seeking an experienced Data Engineer to join our dynamic team in India. The ideal candidate will have 6+ years of experience.
Must have 3+ years of experience with PySpark.
Strong programming experience in Python and PySpark; Scala is preferred.
Experience in designing and implementing CI/CD pipelines, build management, and development strategy.
Experience with SQL and SQL analytical functions (a brief illustrative sketch follows this list).
Experience participating in key business, architectural, and technical decisions.
Opportunity to be trained on AWS cloud technology.
Proficient in leveraging Spark for distributed data processing and transformation.
Skilled in optimizing data pipelines for efficiency and scalability.
Experience with real-time data processing and integration.
Familiarity with Apache Hadoop ecosystem components.
Strong problem-solving abilities in handling large-scale datasets.
Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.
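For illustration only, a minimal PySpark sketch of the kind of distributed transformation and SQL analytical (window) function work described above; the dataset and column names (region, sale_date, amount) are hypothetical:

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("analytics-sketch").getOrCreate()

# Hypothetical input: one row per sale.
sales = spark.createDataFrame(
    [("north", "2024-01-01", 100.0),
     ("north", "2024-01-02", 250.0),
     ("south", "2024-01-01", 75.0)],
    ["region", "sale_date", "amount"],
)

# Running total per region ordered by date: an analytical (window) function
# applied through Spark's distributed DataFrame API.
w = Window.partitionBy("region").orderBy("sale_date")
result = sales.withColumn("running_total", F.sum("amount").over(w))

result.show()
spark.stop()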
Primary Skills:
PySpark
SQL
Secondary Skill:
Experience with AWS/Azure/GCP would be an added advantage.
Capgemini
Job ID: 135260891