

Responsibilities:
Design, develop, and support data pipelines and related data products and platforms.
Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
Perform application impact assessments and requirements reviews, and develop work estimates.
Develop test strategies and site reliability engineering measures for data products and solutions.
Participate in agile development "scrums" and solution reviews.
Mentor junior Data Engineers.
Lead the resolution of critical operations issues, including post-implementation reviews.
Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
Required Skills:
Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
Demonstrate SQL and database proficiency in various data engineering tasks.
Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
Develop Unix scripts to support various data operations.
Model data to support business intelligence and analytics initiatives.
Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).
Qualifications:
Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
4+ years of data engineering experience.
2 years of data solution architecture and design experience.
GCP Certified Data Engineer (preferred).
Interested candidates can send their resumes to [Confidential Information].
Job ID: 126905931