About the Role
As a Data Architect, you'll be at the intersection of business needs and technical implementation, designing and developing end-to-end data pipelines that transform raw information into actionable insights. You'll work collaboratively with stakeholders across the organization to understand requirements and deliver scalable, efficient data solutions.
Key Responsibilities
- Design and implement data storage solutions, including data lakes and data warehouses
- Develop and maintain robust data pipelines to extract, transform, and load data from various source systems
- Create and manage data models to support analytical and operational requirements
- Optimize data storage and processing for performance, cost, and reliability
- Collaborate with business stakeholders to understand requirements and provide efficient solutions
- Integrate data from multiple sources, including APIs, databases, and external data providers
- Ensure data quality and integrity through validation and transformation processes
- Document data engineering processes, systems, and standards
- Monitor and optimize the performance of data systems and pipelines
- Promote best practices for data management and engineering
- Partner with Product Owners and Engineering Leads to build target architectures and improve business processes
About the Team
The role is assigned to the Data Engineering & Analytics Product Area (Area III), which serves as the data engineering and analytics backbone for business teams spanning HR, Legal & Compliance, Procurement, Communications, and Corporate Real Estate.
About You
You're a problem-solver at heart with a passion for data and technology. You thrive in collaborative environments where you can apply your technical expertise to create solutions that deliver business value. You enjoy staying current with emerging technologies and are committed to continuous learning and improvement.
We are looking for candidates who meet these requirements:
- Bachelor's or Master's degree in a quantitative field such as Computer Science, Mathematics, Engineering, or Statistics
- 4-6 years of experience designing and implementing end-to-end data pipelines, data models, and analytical dashboards
- Strong programming skills with proficiency in Python, PySpark, SQL, and TypeScript
- Experience with data visualization tools like Palantir Workshop, Slate, and Contour
- Proficiency in Spark-based data lake design and operations
- Experience with integration technologies including REST/SOAP APIs and event-based architecture
These are additional nice-to-haves:
- Palantir certification, Azure Data Engineering certification, or AI-102 Azure AI Engineer certification
- Experience with relational databases such as Oracle and Azure SQL
- Familiarity with cloud platforms (particularly Azure) and their data services
- Experience with data modeling and schema design
- Strong interpersonal, written, and verbal communication skills
- Ability to explain technical concepts to non-technical audiences
- Experience working in Agile/Scrum environments
Reference Code: 137378