
RevSure AI

Senior Data Engineer

  • Posted 3 hours ago

Job Description

About RevSure

RevSure.ai is an early-stage, US VC-backed company building a go-to platform for marketing trailblazers with bold pipeline and ROI goals, offering killer insights, spot-on predictions, and actionable recommendations. The platform empowers modern demand-generation teams to 3X their pipeline and confidently prove marketing ROI. Unlike legacy attribution solutions, RevSure combines full-funnel attribution with predictive intelligence and active recommendations, giving high-growth marketing teams the information they need to be more effective at every stage of the lead journey.

About Team And Role

The Data Platform team at RevSure AI is building an industry-first generic data model for storing and reporting on data from the diverse source systems our customers use to track their end-to-end revenue life cycle. Our reporting layer is being built to power real-time querying of complex RevOps metrics in a multi-tenant ecosystem. Building all of this comes with interesting challenges in every aspect of data engineering, including data modelling, configurability of ETL pipelines, entity resolution, metadata management, and data governance. The role requires a solid understanding of data systems and ETL patterns to design and author complex data pipelines. Being an early member of the data team also provides a wealth of learning opportunities and the chance to define and shape the architectural decisions involved in solving these challenges, helping usher in a new era of Revenue Intelligence.

Work Setup

Full-time; hybrid working; location: Bengaluru

Experience And Skills

  • 4+ years of experience developing data applications using big data technologies such as Hadoop, Spark, Flink, or Dataflow
  • Experience with workflow orchestration tools such as Airflow, Luigi, or Azkaban
  • Experience with programming languages such as Python, Java, or Scala
  • Experience with at least one cloud platform (AWS, GCP, or Azure)
  • Hands-on experience and highly advanced knowledge of SQL, data modelling, ETL development, and data warehousing
  • Knowledge of and experience with data management and data storage best practices
  • Exposure to large databases, BI applications, data quality, and performance tuning
  • Good to have: an understanding of job management and resiliency
  • Good to have: prior experience with graph and time-series databases
  • Good to have: some experience with GenAI and agentic frameworks

Roles And Responsibilities

  • Architect highly metadata-driven data pipelines with algorithms for data deduplication, data harmonisation, fuzzy matching, and identity resolution
  • Design and architect relational, time-series, and graph databases to run OLAP queries
  • Design and develop SDKs and APIs to enable configurable data consumption paradigms
  • Build tools to monitor the health of the data pipelines and data infrastructure
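
To illustrate the kind of deduplication and fuzzy-matching work these responsibilities describe, here is a minimal sketch using Python's standard-library difflib. The record shape, the "name" field, and the 0.9 similarity threshold are illustrative assumptions, not RevSure's actual pipeline:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1] between two normalized strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def deduplicate(records, threshold=0.9):
    """Greedy dedup: keep a record only if no already-kept record is a fuzzy match."""
    kept = []
    for rec in records:
        if not any(similarity(rec["name"], k["name"]) >= threshold for k in kept):
            kept.append(rec)
    return kept

# Hypothetical CRM leads with near-duplicate company names.
leads = [
    {"name": "Acme Corp"},
    {"name": "ACME Corp."},
    {"name": "Globex Inc"},
]
print([r["name"] for r in deduplicate(leads)])  # ['Acme Corp', 'Globex Inc']
```

Production identity resolution would typically combine several signals (email domain, address, phonetic encodings) and a blocking step to avoid the O(n²) comparisons of this greedy approach.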

Skills: Hadoop, Python

Job ID: 147376283
