About the Role:
We are seeking a talented AWS Data Engineer with expertise in AWS services to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics needs.
Responsibilities:
- Develop and maintain ETL pipelines using AWS Glue to process and transform large volumes of data efficiently.
- Collaborate with analysts to understand data requirements and ensure data availability and quality.
- Write and optimize database queries for data extraction, transformation, and loading.
- Utilize Git for version control, ensuring proper documentation and tracking of code changes.
- Design, implement, and manage scalable data lakes on AWS using Amazon S3 and other relevant services for efficient data storage and retrieval.
- Design and optimize high-performance, scalable database solutions using Amazon DynamoDB.
- Create interactive dashboards and data visualizations using Amazon QuickSight.
- Automate workflows using AWS services such as Amazon EventBridge and AWS Step Functions.
- Monitor and optimize data processing workflows for performance and scalability.
- Troubleshoot data-related issues and provide timely resolution.
- Stay up-to-date with industry best practices and emerging technologies in data engineering.
Qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- Master's degree is a plus.
- Experience with version control systems, preferably Git.
- Strong knowledge of AWS services, including S3, Redshift, Glue, Step Functions, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, and CodeCommit.
- Familiarity with Databricks and its concepts is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively within a team.
- Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.