Bachelor's degree in Computer Science, Information Systems, Engineering, or a related quantitative field.
10+ years of progressive experience in data engineering, including 3–5+ years in a technical lead or senior engineering role.
Extensive hands-on experience with Snowflake and dbt in large-scale, production-grade environments, including architecture design and optimization.
Expert-level proficiency in SQL, including complex query design, performance tuning, cost optimization, and query profiling.
Deep expertise in data warehousing and modeling concepts, including dimensional modeling, Change Data Capture (CDC), slowly changing dimension (SCD) types, and data vault–style patterns.
Proven experience leading the design, development, and deployment of scalable, cloud-native data pipelines (ETL/ELT) supporting enterprise analytics and reporting.
Demonstrated ability to define data engineering standards, best practices, and reusable frameworks across teams.
Strong knowledge of data governance, security, privacy, and compliance, including role-based access, data classification, and auditability.
Proven experience mentoring junior engineers, conducting code/design reviews, and guiding teams toward scalable and maintainable solutions.
Excellent problem-solving and analytical skills, with the ability to make architectural decisions under ambiguity.
Strong communication and stakeholder management skills, with the ability to translate business requirements into technical solutions.
Ability to own end-to-end data initiatives, work independently, and lead teams effectively in a fast-paced, Agile environment.