Help write analytics code, services, and components in Java, Apache Spark, and related technologies such as Hadoop and ZooKeeper
Responsible for systems analysis, design, coding (test-driven development), unit testing, and other SDLC activities
Responsible for designing and developing cloud-native applications and migrating existing on-premises applications to the cloud
Gather and analyze requirements, convert functional requirements into development line items, and provide reasonable effort estimates
Work proactively and independently with both local and global teams to address project requirements, and raise issues and challenges with enough lead time to mitigate project delivery risks
Participate in code and test-case reviews, ensuring that delivered code meets business requirements and coding standards
Solve technical issues hands-on during project delivery; investigate and resolve issues and incidents
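To illustrate the test-driven development workflow mentioned in the responsibilities above, here is a minimal plain-Java sketch. The class and method names (`PriceCalculator`, `discounted`) are hypothetical examples, not part of this role's codebase; in practice the test would live in a JUnit suite rather than `main`.

```java
// Hypothetical TDD example: the assertion in main() represents the unit
// test written first; this minimal implementation makes it pass.
public class PriceCalculator {

    // Apply a percentage discount to a base price.
    public static double discounted(double price, double percentOff) {
        if (percentOff < 0 || percentOff > 100) {
            throw new IllegalArgumentException("percentOff must be between 0 and 100");
        }
        return price * (1 - percentOff / 100.0);
    }

    public static void main(String[] args) {
        // The "test first" step of TDD, expressed as a bare assertion;
        // run with `java -ea PriceCalculator` to enable assertions.
        assert discounted(200.0, 25.0) == 150.0;
        assert discounted(100.0, 0.0) == 100.0;
        System.out.println("all tests passed");
    }
}
```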
Qualifications and Experience:
Experience with Java is essential; Python and Scala are highly desired
Understanding of core AWS services, uses, and basic AWS architecture best practices
Experience with cloud data lake technologies such as Snowflake, Databricks, or AWS Redshift/Redshift Spectrum is a plus
For senior developers, working experience with the Apache Spark streaming and batch frameworks, and with data processing platforms such as AWS EMR, Hadoop, and Spark, is required
Experience with machine learning technologies such as Apache Spark MLlib, scikit-learn, or TensorFlow is a major plus
Proficiency in developing, deploying, and debugging cloud-based applications using AWS
Ability to use the AWS service APIs, AWS CLI, and SDKs to write applications
Ability to use a CI/CD pipeline to deploy applications on AWS
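The Spark batch-processing experience called for above centers on the map/reduce pattern. As a hedged sketch with no Spark dependency, here is that pattern in plain Java Streams; the `WordCount` class is a hypothetical stand-in for the classic distributed word-count that Spark parallelizes across a cluster.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {

    // Count word frequencies with the Java Streams API -- the same
    // map/reduce shape that Spark's RDD and Dataset APIs distribute
    // across a cluster of executors.
    public static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\s+")) // map: split into words
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(                // reduce: group and count
                        w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(count("spark and hadoop and spark"));
    }
}
```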