Summary:
The main function of an AI Data Engineer is to construct and maintain data pipelines, ETL processes, and data storage systems to facilitate efficient data collection, processing, and analysis. They ensure data quality and accessibility for data-driven decision-making, collaborating closely with data scientists and analysts.
Responsibilities:
- Drive critical initiatives to help our data platform scale to the needs of business priorities and technology advancements.
- Build scalable data models and data pipelines to extract, load, and transform data, ensuring secure data storage with a focus on data quality and compliance on Azure, using services such as Azure Data Factory (ADF), HDInsight (HDI), Databricks, Synapse, and Fabric.
- Play a critical role in developing and building datasets and integrating them with AI/ML and Copilot applications.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Implement and manage CI/CD pipelines (YAML and Classic) for data engineering projects, leveraging tools like Azure DevOps.
- Anticipate data governance needs, designing data modeling and handling procedures to ensure compliance with all applicable laws and policies. Govern data accessibility within assigned pipelines.
- Act as a Designated Responsible Individual (DRI) and guide other engineers by developing and following the playbook. Work on-call to monitor systems, products, and services for degradation, downtime, or interruptions; alert stakeholders about status; and initiate actions to restore service for both simple and complex problems when appropriate.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field, OR Master's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field, OR equivalent experience.
- 5 years of experience in data engineering or a similar role.
Required Skills:
- Experience building data pipelines and data stores using Azure, PySpark, and Fabric.
- Proficiency in programming languages such as Python, Scala, or Java.
- Strong experience with SQL and database technologies.
Preferred Skills:
- Familiarity with big data technologies (e.g., Spark, Python, the Azure tech stack, Fabric).
#AditiConsulting
# 26-02130