Experience developing AWS/Azure-based data pipelines and processing data at scale using the latest cloud technologies
Experience in the design and development of applications using Python (must have) and in Big Data technologies and tools such as PySpark/Spark, Hadoop, Hive, Kafka, etc.
Good understanding of data warehousing concepts and expert-level skills in writing and optimizing SQL.
Experience in low-code development and metadata/configuration-driven development, e.g., metadata-based data ingestion, schema drift detection, event-based development, etc. (see the sketch after this list)
Develop and unit test the functional aspects of the required data solution, leveraging the core frameworks (foundational code framework)
Prioritize your work in conjunction with multiple teams, as you will be working across different components
Able to set up and take on design tasks if necessary
Deal with incidents in a timely manner, providing rapid resolution
Ability to keep current with the constantly changing technology industry.
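
The sketch below is a minimal, hypothetical illustration of the metadata-driven ingestion and schema drift detection mentioned above, written in PySpark (Python being the must-have language). The paths, table names, and config layout are illustrative assumptions only and do not reflect any specific in-house framework.

    # Minimal sketch: metadata-driven ingestion with schema drift detection.
    # All names below (bucket, table, config keys) are illustrative assumptions.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata-driven-ingestion").getOrCreate()

    # Hypothetical metadata entry for one source feed; in practice this would
    # come from a config store or control table rather than a literal dict.
    feed_config = {
        "source_path": "s3://example-bucket/raw/orders/",   # assumed location
        "format": "parquet",
        "target_table": "analytics.orders",
        "expected_columns": {"order_id", "customer_id", "amount", "order_ts"},
    }

    # Read the feed using only the metadata, with no hard-coded source logic.
    df = spark.read.format(feed_config["format"]).load(feed_config["source_path"])

    # Schema drift detection: compare incoming columns against the expected set.
    incoming_columns = set(df.columns)
    new_columns = incoming_columns - feed_config["expected_columns"]
    missing_columns = feed_config["expected_columns"] - incoming_columns

    if new_columns or missing_columns:
        # A real pipeline might raise an alert, quarantine the batch,
        # or evolve the target schema automatically at this point.
        print(f"Schema drift detected: new={new_columns}, missing={missing_columns}")
    else:
        df.write.mode("append").saveAsTable(feed_config["target_table"])

The same pattern extends to event-based development: the metadata lookup and drift check would run inside a handler triggered by a file-arrival or queue event rather than on a schedule.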