Role Overview
- As a Data Engineer within the Data Foundation team, you will act as a technical subject-matter expert responsible for the configuration, development, integration, and operation of enterprise data platforms and products.
- You will collaborate closely with Product Owners, Product Architects, Engineering Managers, and cross-functional teams to translate business requirements into robust technical solutions aligned with engineering standards and roadmaps. The role involves hands-on development, agile delivery, documentation, and continuous improvement across the full data lifecycle.
- This position reports to the Data Engineering Lead.
Key Responsibilities
- Design, develop, and maintain scalable and reliable data pipelines and ETL/ELT processes
- Act as an individual contributor for approximately 60% of your time, delivering high-quality data engineering solutions
- Coach and mentor data engineers on best practices, design patterns, and implementation approaches
- Translate high-level architecture and design documents into executable development tasks
- Optimize data infrastructure performance and resolve complex technical issues and bottlenecks
- Drive technical excellence through code reviews, design reviews, testing, and deployment practices
- Ensure adherence to coding standards, architectural guidelines, and security best practices
- Embed DevSecOps and CI/CD practices into daily engineering workflows
- Manage and reduce technical debt while guiding sustainable development approaches
- Lead and contribute to cross-functional technical discussions and solution design reviews
- Support hiring, onboarding, mentoring, and development of engineering talent
- Improve engineering processes to enhance delivery efficiency, quality, and reliability
- Collaborate with Product Owners, Engineering Managers, Business Analysts, and Scrum Masters to align on sprint goals, timelines, and priorities
Technology Stack
Must Have
- Strong proficiency in Python and SQL
- Experience with big data technologies, including massively parallel processing (MPP) systems and streaming platforms
- Hands-on experience with cloud platforms (AWS, Azure, or GCP)
- Experience with data and compute platforms such as Databricks, Snowflake, or BigQuery
- Experience with CI/CD and version-control tools (Azure DevOps, Jenkins, Git)
- Experience integrating data via APIs
- Ability to interpret architecture and design documentation
Nice to Have
- Experience with the Microsoft data stack (Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Microsoft Fabric, Power BI)
- Data modeling and data architecture expertise
- ETL pipeline design and performance optimization
- Advanced PySpark expertise
- Logging and monitoring using Azure or Databricks services
- Experience with Apache Kafka or similar streaming technologies
- Exposure to machine learning and AI technologies
Qualifications and Experience
- 3-5 years of professional experience in Data Engineering
- Strong hands-on experience with data integration, ETL/ELT, and data warehousing
- Solid understanding of software engineering principles, coding standards, and modern architectures
- Experience delivering end-to-end DataOps/Data Engineering projects
- Familiarity with data governance, security, and compliance standards
- Proven ability to mentor and guide engineers across experience levels
- Strong analytical, problem-solving, and decision-making skills
- Ability to work independently while collaborating effectively in team environments
- Excellent written and verbal communication skills
- Pragmatic mindset with a strong focus on quality and delivery