We are looking for an experienced Data Engineer
- You will have experience building large-scale data pipelines and data lake ecosystems.
- Our daily work revolves around solving interesting and exciting problems to high engineering standards.
- Even though you will be part of the backend team, you will work with cross-functional teams across the organization.
This role demands strong hands-on experience with different programming languages, especially Python, and knowledge of technologies such as Kafka, AWS Glue, CloudFormation, ECS, etc.
- You will spend most of your time facilitating the seamless streaming, tracking, and sharing of huge data sets.
- This is primarily a back-end role, but not limited to it.
- You will work closely with producers and consumers of the data and build optimal solutions for the organization.
- We will appreciate a person with lots of patience and a strong understanding of data.
- Also, we believe in extreme ownership!
Responsibilities
- Design and build systems to efficiently move data across multiple systems and make it available to teams such as Data Science, Data Analytics, and Product.
- Design, construct, test, and maintain data management systems.
- Understand data and business metrics required by the product and architect the systems to make that data available in a usable/queryable manner.
- Ensure that all systems meet business and company requirements as well as industry best practices.
- Stay abreast of new technologies in our domain.
- Recommend different ways to constantly improve data reliability and quality.
Requirements
- Bachelor's or Master's degree, preferably in Computer Science or a related technical field.
- 2-5 years of relevant experience.
- Deep knowledge of and working experience with the Kafka ecosystem.
- Good programming experience, preferably in Python, Java, or Go, and a willingness to learn more.
- Experience working with large-scale data platforms.
- Strong knowledge of microservices, data warehouses, and data lakes in the cloud, especially AWS Redshift, S3, and Glue.
- Strong hands-on experience in writing complex and efficient ETL jobs.
- Experience with version control systems (preferably Git).
- Strong analytical thinking and communication.
- Passion for finding and sharing best practices and driving discipline for superior data quality and integrity.
- Intellectual curiosity to find new and unusual ways to solve data management issues.