We are looking for a Senior Data Engineer (6+ years of experience) with strong expertise in Kafka and AWS to join a fast-growing team building scalable, real-time data solutions.
Key Responsibilities:
- Design, develop, and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services
- Configure and manage Kafka connectors for seamless data integration across systems
- Work with Kafka ecosystem components including producers, consumers, brokers, topics, and the schema registry (a minimal producer sketch follows this list)
- Build and optimize scalable ETL/ELT workflows for large-scale data processing
- Enhance data lake and data warehouse architectures using AWS services like Lambda, S3, and Glue
- Implement robust monitoring, testing, and observability practices
- Ensure data security, governance, and compliance across platforms
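For a flavor of the producer-side work this role involves, below is a minimal sketch of a Kafka producer in Java using the standard kafka-clients API. The bootstrap address and the `orders` topic are illustrative placeholders, not details of our platform; a real MSK or Confluent deployment would additionally configure TLS/IAM authentication and schema-registry-aware serializers.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder endpoint; an MSK cluster exposes its own bootstrap string.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all plus idempotence prevents duplicate writes during broker retries.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "orders" is a hypothetical topic name used only for illustration.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "order-123", "{\"amount\": 42.0}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Delivered to %s-%d @ offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any in-flight records before returning
    }
}
```

Enabling idempotence alongside acks=all trades a little latency for stronger delivery guarantees, a common default in production pipelines.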
Requirements:
- 5+ years of experience in Data Engineering or related roles
- Hands-on expertise with Apache Kafka & AWS (MSK, Glue, Lambda, S3, etc.)
- Strong programming skills in Python, SQL, and Java, with Java as the preferred primary language
- Experience with Infrastructure-as-Code (e.g., CloudFormation) and CI/CD pipelines (see the sketch after this list)
- Excellent problem-solving and communication skills
- Ability to write high-quality, production-ready code
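CloudFormation templates themselves are YAML or JSON, so purely to keep both examples in one language, here is a minimal Infrastructure-as-Code sketch using the AWS CDK for Java, which synthesizes CloudFormation under the hood. The stack and bucket names are hypothetical.

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.StackProps;
import software.amazon.awscdk.services.s3.Bucket;
import software.constructs.Construct;

public class DataLakeStack extends Stack {
    public DataLakeStack(final Construct scope, final String id, final StackProps props) {
        super(scope, id, props);
        // A versioned S3 bucket as a minimal data-lake landing zone.
        Bucket.Builder.create(this, "RawEventsBucket")
                .versioned(true)
                .build();
    }

    public static void main(final String[] args) {
        App app = new App();
        new DataLakeStack(app, "DataLakeStack", null);
        app.synth(); // synthesizes the CloudFormation template into cdk.out
    }
}
```

Running `cdk synth` emits the equivalent CloudFormation template, which can then flow through a CI/CD pipeline like any other build artifact.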