Help design, build and continuously improve the client's online platform.
Research, suggest and implement new technology solutions following best practices and standards.
Take responsibility for the resiliency and availability of the team's products.
Be a productive member of the team.
Requirements
7+ years of experience in data engineering, analytics, or data management, with a strong focus on cloud-based data solutions, preferably on Azure
Hands-on expertise in Azure Databricks, including PySpark, Delta Lake, Workflows, and Notebooks for large-scale data processing and orchestration
Advanced proficiency in SQL for complex data modeling, transformation, and performance optimization
Proven experience designing and implementing end-to-end data pipelines using Azure Data Factory (ADF), Synapse Analytics, and Azure Data Lake Storage (ADLS)
Strong understanding of data architecture, data warehousing, and data governance within Azure ecosystems
Experience in implementing data quality, lineage, and security frameworks aligned with enterprise and regulatory standards
Ability to lead and mentor teams, manage deliverables, and collaborate with business, analytics, and IT stakeholders
Excellent communication, analytical, and problem-solving skills with a track record of delivering scalable, high-performance data solutions