Description The Palantir Foundry role focuses on designing, developing, and maintaining data pipelines, applications, and reports within the Palantir Foundry platform. The ideal candidate should have more than 8 years of IT experience, including at least 2 years of hands-on experience with Palantir and Python. The position emphasizes leveraging the Workshop application to build user interfaces, dashboards, and workflows that enable data-driven decision-making. This role requires technical expertise in Python, PySpark, and SQL, along with hands-on experience in Foundry's core components such as Code Repositories, Pipeline Builder, and Ontology. The candidate will collaborate with cross-functional teams, including data engineers, business users, and analysts, to deliver high-quality, actionable data solutions while ensuring performance and data integrity.
Responsibilities
- Develop and maintain data pipelines using Python, PySpark, and SQL for data transformations and workflows in Foundry.
- Build user interfaces, dashboards, and visualizations within Foundry's Workshop application for data analysis and reporting.
- Collaborate with stakeholders including data engineers, business analysts, and business users to gather requirements, design solutions, and ensure project success.
- Ensure data quality and performance by implementing validation, testing, and monitoring processes.
- Contribute to the Foundry ecosystem through code reviews, documentation, and sharing best practices to strengthen overall platform adoption and success.
Skills and Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field; advanced degree preferred.
- Palantir Foundry Expertise: Hands-on experience with Foundry components such as Workshop, Code Repositories, Pipeline Builder, and Ontology.
- Programming Skills: Proficiency in Python and PySpark for data manipulation and pipeline development.
- Database Knowledge: Strong SQL skills for data extraction, transformation, and query optimization.
- Data Engineering Background: Experience with ETL/ELT workflows, data modeling, and validation techniques.
- Cloud Platform Familiarity: Exposure to GCP, AWS, or Azure is preferred.
- Collaboration and Communication: Strong interpersonal and communication skills to work effectively with technical and non-technical stakeholders.
This job posting will remain open a minimum of 72 hours and on an ongoing basis until filled.
Job: Information Technology
Primary Location: India-Karnataka-Bengaluru
Schedule: Full-time
Travel: No
Req ID: 254577
Job Hire Type: Experienced