
Job Responsibilities:
- Strong programming skills in Python and SQL, with experience in data processing using Python and Spark.
- In-depth understanding of the Kafka platform for real-time ingestion and processing of high-volume data.
- Design and architect data flows and data management in cloud environments that are scalable, repeatable, and eliminate time-consuming steps.
- Proficiency with version control systems such as Git and with writing technical documentation (e.g., in a wiki).
- Experience with infrastructure as code for robust deployment, replication, and uniform management.
- Effective collaboration and communication skills to engage with cross-functional teams.
- A proactive mindset with the ability to drive tasks to completion.
- Proficiency in Agile/Scrum/Kanban methodologies for efficient product delivery and management.
- Familiarity with AWS cloud services such as Glue and Athena.
Qualifications:
- Bachelor's or Master's degree in Computer Science/Information Systems.
- 7+ years of experience building data flows and data management on a modern big data tech stack.
- Strong experience using ETL frameworks (e.g., Airflow, Jenkins) to build and deploy production-quality ETL pipelines.
- Knowledge of data structures; openness to learning and implementing new technologies.
- Ability to approach data engineering workstreams with a product mindset.
- Fluency in English.
Bachelor of Technology (B.Tech/B.E.)
Job ID: 135105919