Zingroll

Data Engineer

Posted 2 days ago

Job Description

What You'll Do:

  • Architect and sustain scalable data pipelines that continuously ingest user event data into our centralized data warehouse (a minimal pipeline sketch follows this list).
  • Engineer canonical datasets and key performance metrics that enable the tracking of user growth, engagement, retention, and revenue.
  • Design robust, fault-tolerant ingestion systems to ensure high availability and data consistency during processing.
  • Ensure the security, integrity, and compliance of data according to industry and company standards.
  • Partner closely with cross-functional teams including Infrastructure, Data Science, Product, Marketing, Finance, and Research to understand data needs and deliver scalable solutions.
  • Set up and manage data pipelines for the streaming platform, integrating data from product events, internal tools, and third-party platforms such as Google Ads Manager, Meta Ads Manager, and YouTube Ads Manager into a single analytics layer.
  • Plan and implement event tracking frameworks across Android, iOS, Web, and TV platforms.
  • Build and maintain A/B testing and experimentation pipelines across all platforms.
  • Set up data pipelines for internal company tools, and create dashboards for tracking and analysis.
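
The pipeline and scheduling work above is easiest to picture as a small orchestrated DAG. Below is a minimal sketch using Apache Airflow's TaskFlow API (one of the schedulers named in the requirements that follow); every task, path, and table name is a hypothetical placeholder, not Zingroll's actual pipeline.

```python
# Minimal sketch of a daily event-ingestion DAG (Apache Airflow 2.x).
# All names below (extract_events, staging.user_events, the S3 path)
# are hypothetical placeholders for illustration only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def user_events_pipeline():
    @task
    def extract_events() -> str:
        # Pull the day's raw user events from object storage.
        return "s3://events-bucket/raw/"  # hypothetical location

    @task
    def load_warehouse(path: str) -> str:
        # Load the raw events into a warehouse staging table.
        print(f"loading {path}")
        return "staging.user_events"  # hypothetical table

    @task
    def build_canonical_metrics(table: str) -> None:
        # Aggregate staging data into canonical growth, engagement,
        # retention, and revenue metric tables used by dashboards.
        print(f"aggregating {table}")

    build_canonical_metrics(load_warehouse(extract_events()))


user_events_pipeline()
```

The same extract-load-aggregate shape translates directly to Dagster or Prefect with their respective decorators.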

We're looking for experience with:

  • Around 2 years of experience as a Data Engineer or in a similar role involving the implementation and maintenance of complex software systems.
  • Proficiency in at least one language commonly used within Data Engineering, such as Python, Scala, or Java.
  • Hands-on use of distributed processing technologies such as Hadoop, alongside distributed storage systems like HDFS and Amazon S3.
  • Deep familiarity with distributed data processing frameworks such as Apache Spark, Apache Flink, and Apache Beam, and with query engines like Presto/Trino.
  • Managing data migration and complex workflows using ETL schedulers like Apache Airflow, AWS Glue, Dagster, or Prefect.
  • Implementation of real-time streaming and messaging systems such as Apache Kafka, AWS Kinesis, Google Pub/Sub, or Apache Pulsar (see the consumer sketch after this list).
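
To make the streaming requirement concrete, here is a minimal consumer sketch using the kafka-python client against Apache Kafka. The topic name, broker address, consumer group, and event fields are hypothetical placeholders.

```python
# Minimal sketch of real-time event consumption with kafka-python.
# Topic, broker, and field names are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    group_id="analytics-ingest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would validate the event against a schema and
    # batch-write it to the warehouse staging area; here we just print.
    print(event.get("event_name"), event.get("user_id"))
```

A production ingest would add schema validation and batched, fault-tolerant writes rather than printing each record.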

Bonus Points:

  • Experience working in fast-moving startup environments.
  • A track record of thriving in high-ownership, high-speed, hands-on problem-solving environments.

Compensation: Competitive and market-aligned; the final package depends on skills, experience, and qualifications.



Job ID: 135785417
