Build and optimize large-scale data sets, big data pipelines, and architectures.
Apply strong analytical skills to work effectively with unstructured datasets.
Develop processes supporting data transformation, workload management, data structures, dependencies, and metadata.
Conduct root cause analyses on external and internal processes and data to uncover improvement opportunities and answer critical questions.
Identify, design, and implement process enhancements, focusing on automation, usability, and scalability.
Collaborate closely with product and technology teams to design and validate the capabilities of the data platform.
Establish and maintain high programming standards and practices across the ecosystem.
Support and collaborate with cross-functional teams.
Communicate effectively, presenting complex ideas in a clear and concise manner to diverse audiences.
QUALIFICATIONS
Possess 2+ years of hands-on experience with a diverse range of AWS technologies (e.g., S3, Lambda, Glue, Athena, IAM, SQS, CloudWatch, CloudFormation).
Proficiency with tools such as Git, GitLab, and Jira.
Demonstrated knowledge and proficiency (2+ years) in SQL and Python.
Demonstrated knowledge and proficiency (1+ years) in technologies such as Apache Superset, Tableau, Spark, Hive, Java, and Hadoop.
Demonstrated expertise in building and optimizing data pipelines within a distributed environment.
Bonus points for experience building Business Intelligence (BI) tools.
Back your work with a master's degree, a bachelor's degree, or equivalent work experience that showcases your prowess.
Excellent communication skills, fluent English, and the ability to work with healthy overlap with US business hours are a must.
Join us in reshaping the landscape of data possibilities, where your skills will not only be valued but will play a crucial role in defining what lies ahead.