
About Pocket FM
Pocket FM is a leading audio entertainment platform focused on immersive, long-form audio storytelling. The platform offers episodic audio series across genres such as romance, drama, thriller, and fantasy. Pocket FM follows a mobile-first approach, enabling users to listen anytime, anywhere. Founded in India, the company has expanded rapidly across global markets, including the US. It supports multiple regional and international languages to reach a diverse audience, and it empowers creators through a strong content and monetization ecosystem.
About the Role
We are seeking a skilled AI & Data Engineer to join our team. In this role, you will design, build, and maintain the robust data infrastructure that powers Pocket FM's recommendation systems, analytics, and business intelligence capabilities, while also developing and deploying modern AI solutions. Working at the intersection of data engineering and artificial intelligence, you will leverage cloud platforms and AI frameworks to drive business value. This role offers an exciting opportunity to work with large-scale data systems that directly shape the audio entertainment experience of millions of users.
Key Responsibilities:
Data Infrastructure & Pipeline Development
Design, develop, and maintain scalable ETL/ELT pipelines to process large volumes of user interaction data, content metadata, and streaming analytics
Build and optimize data warehouses and data lakes to support both real-time and batch processing requirements
Implement data quality monitoring and validation frameworks to ensure data accuracy and reliability
Develop automated data ingestion systems from various sources, including mobile apps, web platforms, and third-party integrations
Analytics & Reporting Infrastructure
Create and maintain data models that support business intelligence, user analytics, and content performance metrics
Build self-service analytics platforms enabling stakeholders to access insights independently
Implement real-time dashboards and alerting systems for key business metrics
Support A/B testing frameworks and experimental data analysis requirements
Data Architecture & Optimization
Collaborate with software engineers to optimize database performance and query efficiency
Design data storage solutions that balance cost, performance, and accessibility requirements
Implement data governance practices including data cataloging, lineage tracking, and access controls
Ensure GDPR and data privacy compliance across all data systems
Collaboration & Support
Work closely with data scientists, product managers, and analysts to understand data requirements
Participate in code reviews and maintain high standards of code quality and documentation
Mentor junior team members and contribute to knowledge-sharing initiatives
Required Qualifications
Technical Skills
Programming Languages: Proficiency in Python and SQL
Big Data Technologies: Hands-on experience with Apache Spark, Kafka, Airflow, and distributed computing frameworks
Cloud Platforms: Strong experience with AWS (S3, EMR, etc.)
Database Systems: Expertise in both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, Redis) databases
Data Warehousing: Experience with modern data warehouse solutions such as Databricks, Snowflake, or BigQuery
Containerization: Proficiency with Docker and Kubernetes for deploying data applications
Experience Requirements
4 years of experience in data engineering or related roles
Proven track record of building and maintaining production data pipelines at scale
Experience with streaming data processing and real-time analytics systems
Strong understanding of data modeling, schema design, and data architecture principles
Experience with version control systems (Git) and CI/CD pipelines
Advanced Technical Skills
Understanding of data mesh architecture and domain-driven data design
Experience with data privacy and security implementations
Familiarity with agentic-AI automation approaches for data and analytics
Job ID: 147246179
Skills:
Snowflake, Apache Airflow, Data Modelling, Python, SQL, AWS (Glue, Lambda, Step Functions), dbt, CI/CD