Ownly

Data Engineer

  • Posted 19 hours ago

Job Description

About Ownly:

At OWNLY (backed by Rapido), we're on a mission to make food delivery simple, fair, and exactly what you ordered, with nothing extra. Our team is made up of problem-solvers, innovators, and doers, all working to make OWNLY the most trusted way to get your food without hidden charges or inflated prices, and we're just getting started. We connect customers to their favourite restaurants through our reliable delivery network, while helping restaurant partners keep more of what they earn. As we grow, our focus remains the same: fair pricing, honest service, and a platform that respects everyone involved in the journey from kitchen to doorstep.

What You'll Do

  • Build the Data Stack: Design, set up, and maintain a modern, scalable data infrastructure using cloud-native tools (AWS/GCP/Azure)
  • Pipeline Development: Build robust ETL/ELT pipelines to collect and process data from multiple sources — app events, transactions, logistics, etc.
  • Data Modeling: Define clean, efficient data models and schemas that support real-time dashboards and analytics
  • Cross-Team Enablement: Collaborate with product, growth, ops, and engineering teams to understand data needs and deliver reliable pipelines
  • Data Warehousing: Set up and maintain the data warehouse (e.g., BigQuery, Redshift, Snowflake)
  • Monitoring & Quality: Implement tools to monitor pipeline health, ensure data accuracy, and prevent duplication or drift
  • Tooling & Automation: Build internal tools for easier data access, self-serve analytics, and automated reporting
  • Scalability: Design for growth — ensuring the system scales as new data sources and higher volumes come in
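The pipeline work described above can be sketched in miniature. This is a purely illustrative example, not OWNLY's actual stack: the event shape, field names, and deduplication key are hypothetical, and the `load` step stands in for a warehouse write (BigQuery/Redshift/Snowflake in practice):

```python
from collections import defaultdict

# Hypothetical raw app events from multiple sources (schema is illustrative).
raw_events = [
    {"event_id": "e1", "source": "app", "order_id": "o1", "amount": 250.0},
    {"event_id": "e1", "source": "app", "order_id": "o1", "amount": 250.0},  # duplicate
    {"event_id": "e2", "source": "logistics", "order_id": "o1", "amount": 30.0},
    {"event_id": "e3", "source": "app", "order_id": "o2", "amount": 120.0},
]

def extract(events):
    """Deduplicate on event_id -- a simple guard against double-ingestion."""
    seen, out = set(), []
    for ev in events:
        if ev["event_id"] not in seen:
            seen.add(ev["event_id"])
            out.append(ev)
    return out

def transform(events):
    """Aggregate amounts per order -- the kind of model a dashboard would read."""
    totals = defaultdict(float)
    for ev in events:
        totals[ev["order_id"]] += ev["amount"]
    return dict(totals)

def load(totals):
    """Stand-in for a warehouse write; here it just returns sorted rows."""
    return sorted(totals.items())

rows = load(transform(extract(raw_events)))
print(rows)  # [('o1', 280.0), ('o2', 120.0)]
```

In a production setting each step would be an orchestrated task (e.g., in Airflow) writing to real storage; the structure of extract, transform, and load stays the same.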

What We're Looking For

  • 2–4 years of experience as a Data Engineer, Backend Engineer (with data focus), or similar role
  • Proficiency in Python or another scripting language used for ETL
  • Hands-on experience with cloud data platforms (AWS/GCP/Azure), especially services like S3, Lambda, Pub/Sub, BigQuery, Redshift, etc.
  • Strong SQL skills and experience working with large-scale structured and semi-structured data
  • Experience with tools like Airflow, dbt, Kafka, or similar is a big plus
  • Comfort with early-stage environments — willing to build fast, iterate, and own end-to-end systems
  • Bonus: Exposure to analytics tools (Looker, Metabase, Tableau), product event tracking (Mixpanel, Segment), or ML pipelines

Job ID: 146059751
