Role: Data Engineer (PySpark & Palantir)
Location: Bangalore
Experience range: 5 to 10 years
Notice period: 0-30/45 days; immediate joiners preferred
Mode of Interview: Virtual
Date of Interview: 18th Dec 2025 (Thursday)
Time of Interview: 10 AM to 2 PM
Note: Ex-TCS employees are not eligible, and candidates with less than 5 years of experience will not be considered.
Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry.
- Collaborate with data architects, business analysts, and actuarial teams to understand reinsurance data models and transform complex datasets into usable formats.
- Build and optimize data ingestion, transformation, and validation processes to support analytical and reporting use cases.
- Work within the Palantir Foundry platform to design robust workflows, manage datasets, and ensure efficient data lineage and governance.
- Ensure data security, compliance, and governance in line with industry and client standards.
- Identify opportunities for automation and process improvement across data systems and integrations.
Required Skills & Qualifications:
- 5-10 years of overall experience in data engineering roles.
- Strong hands-on expertise in PySpark (DataFrames, RDDs, performance optimization).
- Proven experience working with Palantir Foundry.
- Good understanding of the reinsurance domain, including exposure, claims, and policy data structures.
- Proficiency in SQL, Python, and working with large datasets in distributed environments.
- Experience with cloud platforms (AWS, Azure, or Google Cloud Platform) and related data services (e.g., S3, Snowflake, Databricks).
- Knowledge of data modeling, metadata management, and data governance frameworks.
- Familiarity with CI/CD pipelines, version control (Git), and Agile delivery methodologies.
Thanks & Regards,
Shilpa Silonee
BFSI TAG Team