Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : Palantir Foundry
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Project Role : Lead Data Engineer
Project Role Description : Design, build, and enhance applications to meet business processes and requirements in Palantir Foundry.
Work Experience : Minimum 6 years
Must Have Skills : Palantir Foundry, PySpark
Good to Have Skills :
- Experience in PySpark, Python, and SQL
- Knowledge of Big Data tools and technologies
- Organizational and project management experience
Job Requirements & Key Responsibilities :
- Responsible for designing, developing, testing, and supporting data pipelines and applications on Palantir Foundry.
- Configure and customize Workshop to design and implement workflows and ontologies.
- Collaborate with data engineers and stakeholders to ensure the successful deployment and operation of Palantir Foundry applications.
- Work with stakeholders, including the product owner and the data and design teams, to resolve data-related technical issues, understand requirements, and design data pipelines.
- Work independently to troubleshoot issues and optimize performance.
- Communicate design processes, ideas, and solutions clearly and effectively to the team and the client.
- Assist junior team members in improving efficiency and productivity.
Technical Experience :
- Proficiency in PySpark, Python, and SQL, with demonstrated ability to write and optimize SQL queries and Spark jobs.
- Hands-on experience with Palantir Foundry services such as Data Connection, Code Repository, Contour, Data Lineage, and Health Checks.
- Good to have: working experience with Workshop, Ontology, and Slate.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Experience in ingesting data from external source systems using data connections and syncs.
- Good knowledge of Spark architecture and hands-on experience with performance tuning and code optimization.
- Proficient in managing both structured and unstructured data, with expertise in handling various file formats such as CSV, JSON, Parquet, and ORC.
- Experience in developing scalable architectures and managing large datasets.
- Good understanding of data loading mechanisms and the ability to implement strategies for capturing change data capture (CDC).
- Nice to have: experience with test-driven development and CI/CD workflows.
- Experience with version control software such as Git and with major hosting services (e.g., Azure DevOps, GitHub, Bitbucket, GitLab).
- Implementing code best practices: adhering to guidelines that enhance code readability, maintainability, and overall quality.
Educational Qualification : 15 years of full-time education