
Key Responsibilities:
Design, develop, and maintain real-time and batch data processing applications using Scala, Java, and Apache Flink (see the illustrative sketch after this list).
Build and optimize distributed data pipelines on AWS cloud infrastructure (EMR, Lambda, SQS, S3, EC2).
Ensure reliability, scalability, and performance of streaming data applications.
Collaborate with DevOps, Analytics, and Product teams to deliver end-to-end solutions.
Monitor applications, troubleshoot issues, and drive continuous performance improvements.
Participate in code reviews, design discussions, and knowledge-sharing sessions.
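To give a flavor of the Flink work described above, the following is a minimal, illustrative Scala DataStream sketch only; the socket source, host, and port are placeholders and not part of this role's actual stack.

    import org.apache.flink.streaming.api.scala._

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        // Set up the streaming execution environment.
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        // Placeholder source: read lines of text from a local socket.
        val text: DataStream[String] = env.socketTextStream("localhost", 9999)

        // Tokenize, key by word, and maintain a running count per word.
        val counts = text
          .flatMap(_.toLowerCase.split("\\W+"))
          .filter(_.nonEmpty)
          .map(word => (word, 1))
          .keyBy(_._1)
          .sum(1)

        // Sink: print to stdout; a production job would write to S3, Kafka, etc.
        counts.print()

        env.execute("Streaming WordCount (illustrative sketch)")
      }
    }

A real pipeline in this role would more likely consume from Kafka or SQS and run on EMR, but the overall structure (source, keyed transformation, sink, execute) stays the same.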
Required Skills:
Strong experience with Scala, Java, and Apache Flink.
Proficiency in AWS services: EMR, Lambda, SQS, S3, EC2.
Solid understanding of distributed systems, data streaming, and application performance optimization.
Preferred Skills:
Exposure to Splunk, Apache Spark, Kafka, Docker, and Kubernetes (k8s).
Familiarity with GitLab CI/CD pipelines and automation frameworks.
Job ID: 126218751