- Integrate and transform data from various sources into our Data Lakehouse platform,
- Design and develop data models to ensure data consistency, accuracy and integrity,
- Develop and optimize data pipelines and data sets into an analysis-ready format,
- Collaborate with data analyst teams to understand data needs and provide solutions that support business objectives,
- Achieve and maintain high data quality through comprehensive data validation,
- Monitor and optimize data compute workloads to ensure consistent performance,
- Learn the latest data engineering and data warehousing technologies,
- Mentor and lead junior team members and provide technical guidance as needed,
- Support data pipelines owned by our team.
Who You Are:
You will be working with cross-functional teams to develop and deliver data-driven solutions that support business objectives.
For This Role, You Will Need:
- Advanced SQL skills and hands-on experience with SQL databases,
- Strong knowledge of data warehousing concepts,
- Experience translating business logic into ETL processes using SQL and Python,
- Experience in data testing to achieve high data quality and reliability,
- Ability to ask and answer meaningful questions by collecting, analyzing, and making sense of data in a given business context,
- Excellent analytical and problem-solving skills.
Preferred Qualifications that Set You Apart:
- Snowflake - cloud-based data platform,
- HVR/Fivetran - data integration tools,
- dbt - data transformation tool,
- Apache Hive and Apache Spark,
- CI/CD pipelines and Git repositories.