
Soojh AI

Data Engineer

  • Posted 7 hours ago

Job Description

Company Description

Soojh AI is a leading AI services and consulting company that specializes in empowering businesses with Artificial Intelligence and Generative AI solutions to drive efficiency and growth. The organization offers end-to-end expertise in AI integration, AI-driven product development, and establishing in-house AI capabilities. Having collaborated successfully with Fortune 500 companies and with startups that have scaled to unicorn status, Soojh AI has delivered impactful solutions across diverse industries, including FinTech, LegalTech, Healthcare, Media, Hospitality, and Airlines. Committed to fostering AI innovation, Soojh AI helps businesses stay competitive and future-ready.

Role Summary

We are looking for a Data Engineer to develop and support real-time data pipelines that power enterprise analytics platforms. The role involves integrating operational systems such as Oracle Fusion and building streaming pipelines that process high-volume event data.

You will work with technologies including Apache Kafka, Apache Flink, and Snowflake to build scalable pipelines that deliver reliable datasets for analytics and reporting.
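To give a flavor of the kind of real-time processing these pipelines perform, here is a minimal, self-contained sketch of a tumbling-window aggregation over an event stream in plain Python. It has no Kafka or Flink dependencies, and the function and event names are hypothetical; it only illustrates, in miniature, what a Flink windowing job computes over a Kafka topic.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    Mimics, in miniature, a tumbling-window aggregation a Flink job would
    run over a Kafka topic. `events` is an iterable of (epoch_seconds, key).
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = ts - (ts % window_seconds)
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Example: three page-view events spanning two one-minute windows.
events = [(0, "home"), (30, "home"), (75, "checkout")]
result = tumbling_window_counts(events, window_seconds=60)
# result == {0: {"home": 2}, 60: {"checkout": 1}}
```

In a production pipeline the equivalent logic would run inside Flink with event-time semantics and watermarks rather than in-memory dicts, but the windowing arithmetic is the same.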

Responsibilities

  • Develop and maintain streaming data pipelines using Kafka.
  • Build and deploy real-time processing jobs using Flink or similar frameworks.
  • Implement ingestion pipelines integrating Oracle ERP / Oracle Fusion systems.
  • Build CDC pipelines to capture changes from operational databases.
  • Develop data transformations and models within Snowflake.
  • Monitor pipeline performance and troubleshoot data processing issues.
  • Collaborate with architects and platform teams to implement data platform standards.
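As an illustration of the CDC responsibility above, the sketch below emits change events by diffing two table snapshots keyed by primary key. This is a hypothetical simplification: real CDC tools such as Debezium read the database transaction log rather than diffing snapshots, but the "insert" / "update" / "delete" event shapes are representative of what a downstream streaming pipeline consumes.

```python
def snapshot_diff_cdc(old_rows, new_rows):
    """Emit CDC-style change events by diffing two snapshots of a table.

    Both arguments map primary key -> row dict. Note: real CDC tools
    (e.g. Debezium) capture changes from the transaction log; this
    snapshot diff only illustrates the resulting event shapes.
    """
    events = []
    for pk, row in new_rows.items():
        if pk not in old_rows:
            events.append({"op": "insert", "pk": pk, "after": row})
        elif old_rows[pk] != row:
            events.append({"op": "update", "pk": pk,
                           "before": old_rows[pk], "after": row})
    for pk, row in old_rows.items():
        if pk not in new_rows:
            events.append({"op": "delete", "pk": pk, "before": row})
    return events

old = {1: {"status": "open"}, 2: {"status": "open"}}
new = {1: {"status": "closed"}, 3: {"status": "open"}}
changes = snapshot_diff_cdc(old, new)
# one update (pk 1), one insert (pk 3), one delete (pk 2)
```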

Required Skills

  • Experience integrating data from Oracle ERP / Oracle Fusion systems.
  • Hands-on experience with Apache Kafka or other streaming platforms.
  • Experience working with Flink, Spark Streaming, or similar processing frameworks.
  • Strong SQL and Python skills.
  • Experience working with Snowflake or other cloud data warehouses.

Nice to Have

  • Experience with CDC tools such as Debezium.
  • Familiarity with Docker or Kubernetes.
  • Experience working on AWS or Azure.

Experience

  • 3–6 years of experience in data engineering or real-time data pipelines.


Job ID: 144623537
