KiE Square Analytics

KiE Square Analytics - Lead Data Engineer - Python/Spark

Job Description

Key Responsibilities:

  • Lead the design, development, and maintenance of data pipelines and ETL processes for efficient data integration and transformation.
  • Manage and optimize data storage and data flows on at least two of the following cloud ecosystems: GCP, AWS, Azure, Oracle Cloud.
  • Work with large-scale datasets and streaming data, ensuring data quality, consistency, and reliability across systems.
  • Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
  • Implement and enforce data governance, security, and compliance standards.
  • Develop and enhance cloud architecture to support new proposals as well as data engineering pipelines, refreshes, automations, and integrations.
  • Monitor data pipelines, troubleshoot issues, and ensure high availability of data platforms.
  • Optimize database performance and ensure cost-effective cloud resource utilization.
  • Mentor junior engineers, provide technical guidance, and contribute to best practices in data engineering.

Qualifications

  • Strong proficiency in data storage and data flows on at least two of the following cloud ecosystems: GCP, AWS, Azure, Oracle Cloud.
  • Hands-on experience with ETL tools (Oracle DI, Informatica, Talend, or similar).
  • Advanced knowledge of SQL, PL/SQL, and database performance tuning.
  • Solid understanding of data warehousing concepts and big data technologies.
  • Strong skills in Python for data processing and automation.
  • Experience with streaming data pipelines (Kafka, Spark Streaming).
  • Experience developing or enhancing cloud and data-flow architecture, including data engineering pipelines, refreshes, automations, and integrations.
  • Experience with application and data integrations involving internal and third-party APIs, MCPs, LLMs, and other multimodal language models is required.
  • Web data harvesting, automation, and API integration are good-to-have skills.
  • Knowledge of data modelling and data governance best practices.
  • Exposure to containerization technologies (Docker, Kubernetes) is a plus.
  • Strong analytical and problem-solving abilities.
  • Excellent communication and collaboration skills.
  • Ability to work independently, manage multiple priorities, and thrive in a fast-paced environment.

(ref:hirist.tech)


Job ID: 143978647