About the Role:
As a Senior Data Engineer, you will be responsible for designing and implementing complex data pipelines and analytics solutions that support key decision-making processes in our Property & Casualty business domain. You will gain exposure to a project that leverages cutting-edge technology, applying Big Data and Machine Learning to solve new and emerging problems for Swiss Re Property & Casualty.
You will be expected to take end-to-end ownership of deliverables, gaining a full understanding of the Property & Casualty data and the business logic required to deliver analytics solutions.
Key responsibilities include:
- Work closely with Product Owners and Architects to understand requirements, formulate solutions, and evaluate the implementation effort.
- Design, develop and maintain scalable data transformation pipelines.
- Design data models and implement data architecture.
- Implement solutions on the Palantir platform.
- Evaluate new capabilities of the analytics platform, develop prototypes, and assist in developing a single source of truth about our application landscape.
- Collaborate within a global development team to design and deliver solutions.
- Assist stakeholders with data-related functional and technical issues.
- Work with the data governance platform for data management and stewardship.
About the Team:
This position is part of the Property & Casualty Data Integration and Analytics project within the Reinsurance Data office team under Data & Foundation. We are part of a global strategic initiative to make better use of our Property & Casualty data and to enhance our ability to make data-driven decisions across the Property & Casualty reinsurance value chain.
About You:
You enjoy the challenge of solving complex big data analytics problems using state-of-the-art technologies as part of a growing global team of data engineering professionals. You are a self-starter with strong problem-solving skills, capable of owning and implementing solutions from start to finish. Key qualifications include:
- Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline
- At least 10 years of experience working with large-scale software systems
- At least 5 years of experience with PySpark and proficiency in designing large-scale data engineering solutions
- At least 3 years of experience working with Palantir Foundry
- Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred)
- Experience with TypeScript/JavaScript/HTML/CSS is a plus
- Knowledge of data management fundamentals and data warehousing principles
- Demonstrated strength in data modelling, ETL, and storage/data lake development
- Experience with Scrum/Agile development methodologies
- Experience working in a cloud environment such as Palantir Foundry
- Knowledge of the insurance domain, the financial industry, or finance functions in other industries is a strong plus
- Experience working with a diverse, multi-location team of internal and external professionals
- Strong analytical and problem-solving skills
- Self-starter with a positive attitude and a willingness to learn
- Ability to manage your own workload in a self-directed manner
- Ability and enthusiasm to work in a global and multicultural environment
- Strong interpersonal and communication skills, with clear and articulate written and verbal communication in complex environments
Reference Code: 137462