
aionos

Adobe Experience Platform (AEP) Data Engineer

  • Posted a day ago

Job Description

Adobe Experience Platform (AEP) Data Engineer

  • XDM Schema Design
  • Ingestion Pipelines
  • Identity Management
  • Query Service
  • Data Governance

Job Title: Adobe Experience Platform (AEP) Data Engineer

Department: Data Engineering / Marketing Technology

Collaborates With: Data Architecture, Analytics, Privacy & Compliance, Marketing Ops

Location: Remote / Hybrid

Employment Type: Full-Time

Experience Level: Mid–Senior (4–7 years in data engineering / MarTech)

Salary Range: Competitive, commensurate with experience + performance bonus

About The Role

The Adobe Experience Platform (AEP) Data Engineer is a technically focused, high-impact role responsible for building and maintaining the data infrastructure that powers our customer experience platform. You will own the pipelines, schemas, and systems that bring data into AEP — ensuring it is clean, consistent, governed, and ready to fuel real-time personalization, audience segmentation, and analytics.

This role bridges the worlds of data engineering and marketing technology. You will work hands-on in AEP's ingestion layer, XDM schema registry, Query Service, and identity graph — while also partnering closely with architects, analysts, and marketing technologists to deliver a trusted, scalable data foundation.

Key Responsibilities

1. XDM Schema Design & Data Modeling
  • Design, build, and maintain XDM (Experience Data Model) schemas, including ExperienceEvent, Individual Profile, and Lookup class datasets
  • Define custom field groups, data types, and mixin libraries aligned to business use cases and Adobe best practices
  • Govern schema versioning, backward-compatibility rules, and deprecation policies to protect downstream consumers
  • Collaborate with solution architects to map source-system data structures to standardized XDM fields
  • Build and maintain a schema registry catalog documenting all datasets, field definitions, and ownership
  • Ensure schema designs support identity stitching (IdentityMap), consent fields, and data usage labeling requirements
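The schema-contract work described above can be illustrated with a minimal sketch: checking that an incoming event carries the required XDM ExperienceEvent fields before it is accepted downstream. The field contract here is hand-rolled and illustrative, not pulled from a real schema registry.

```python
# Hand-rolled sketch of a schema-contract check for an ExperienceEvent-style
# record. The required fields mirror common XDM ExperienceEvent basics;
# a real implementation would read the contract from the schema registry.

REQUIRED_FIELDS = {
    "_id": str,        # unique event identifier
    "timestamp": str,  # ISO-8601 event time
    "eventType": str,  # e.g. "web.webpagedetails.pageViews"
}

def validate_event(record: dict) -> list:
    """Return a list of contract violations (empty list means valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing required field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}: got {type(record[field]).__name__}")
    return errors

event = {
    "_id": "evt-001",
    "timestamp": "2024-05-01T12:00:00Z",
    "eventType": "web.webpagedetails.pageViews",
}
print(validate_event(event))  # []
```

In practice the same idea scales up to per-field type, enum, and nullability rules generated from the registry catalog, so ingestion and validation never drift apart.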

2. Data Ingestion Pipeline Development
  • Build and operate batch ingestion pipelines using AEP Source Connectors, HTTP API, and Adobe I/O for on-premises and cloud data sources
  • Develop and maintain streaming ingestion pipelines using AEP Streaming Connection APIs and Kafka-based event forwarding
  • Configure and monitor Adobe Experience Platform Tags (Launch) and Web SDK (Alloy.js) for client-side event data collection
  • Design server-side event forwarding rules to route data streams from Launch to AEP and third-party destinations
  • Implement data transformation logic (ETL/ELT) to normalize, enrich, and validate data before AEP ingestion
  • Manage ingestion SLAs: monitor pipeline health, error rates, throughput, and latency dashboards
  • Build automated alerting for ingestion failures, data anomalies, and schema validation errors
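As a rough sketch of the streaming path, the function below wraps a source record in the envelope that AEP's streaming ingestion HTTP endpoint expects (header with schema reference, IMS org, and dataset ID; body with the XDM entity). The schema URI, org ID, and dataset ID are placeholders, not real values, and the exact envelope should be confirmed against Adobe's streaming ingestion documentation.

```python
# Sketch: build the message envelope for AEP streaming ingestion.
# All identifiers below are placeholders for illustration only.

def build_streaming_message(record: dict, schema_id: str,
                            ims_org: str, dataset_id: str) -> dict:
    schema_ref = {
        "id": schema_id,
        "contentType": "application/vnd.adobe.xed-full+json;version=1",
    }
    return {
        "header": {
            "schemaRef": schema_ref,
            "imsOrgId": ims_org,
            "datasetId": dataset_id,
            "source": {"name": "website-event-forwarder"},  # logical source label
        },
        "body": {
            "xdmMeta": {"schemaRef": schema_ref},
            "xdmEntity": record,  # the XDM-shaped event itself
        },
    }

msg = build_streaming_message(
    {"_id": "evt-001", "eventType": "web.webpagedetails.pageViews"},
    "https://ns.adobe.com/example/schemas/abc123",  # placeholder schema URI
    "EXAMPLE_ORG@AdobeOrg",                          # placeholder IMS org
    "example-dataset-id",                            # placeholder dataset ID
)
```

The envelope would then be POSTed to the streaming connection's collection endpoint; keeping the builder as a pure function makes it easy to unit-test and to reuse across Kafka forwarders and server-side Launch rules.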

3. Identity Management & Profile Unification
  • Configure and maintain IdentityMap fields across all AEP datasets for accurate cross-device and cross-channel identity resolution
  • Define identity namespaces (ECID, CRM ID, email hash, phone) and establish namespace priority rules within the Identity Graph
  • Diagnose and resolve identity fragmentation issues, including duplicate profiles, orphaned identity nodes, and namespace collisions
  • Implement deterministic and probabilistic identity linking strategies in alignment with Privacy and Data Governance teams
  • Monitor Identity Graph health metrics: average identities per profile, graph collapse rates, and cross-namespace linkage coverage
  • Support consent-aware identity resolution by integrating OneTrust or Adobe Consent Service signals into profile assembly
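The namespace priority rules mentioned above can be sketched as a simple resolver: given an identityMap-style structure, pick the primary identity from the highest-priority namespace present. The namespace names and priority order here are illustrative assumptions, not a production configuration.

```python
# Sketch: resolve a primary identity from an identityMap using a
# namespace priority list (highest priority first). Namespace names
# and ordering are illustrative only.

NAMESPACE_PRIORITY = ["CRMID", "Email_LC_SHA256", "ECID"]

def resolve_primary(identity_map: dict):
    """Return (namespace, id) for the highest-priority identity, or None."""
    for ns in NAMESPACE_PRIORITY:
        ids = identity_map.get(ns, [])
        if ids:
            return ns, ids[0]["id"]
    return None

imap = {
    "ECID": [{"id": "1234-5678"}],
    "CRMID": [{"id": "cust-42"}],
}
print(resolve_primary(imap))  # ('CRMID', 'cust-42')
```

A resolver like this is also a convenient place to detect fragmentation symptoms, e.g. flagging profiles where two conflicting CRM IDs appear under the same namespace.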

4. AEP Query Service & Data Validation
  • Write complex SQL queries in AEP Query Service (PostgreSQL dialect) to validate data quality, audit ingestion completeness, and explore profile data
  • Build reusable query templates and scheduled queries for ongoing data quality monitoring and business reporting
  • Develop row-level data validation frameworks to verify that ingested records conform to schema contracts and business rules
  • Create derived datasets and computed attributes using Query Service output for use in segmentation and analytics
  • Profile dataset statistics (null rates, cardinality, value distributions) to detect upstream data quality regressions
  • Support Analytics and Data Science teams with Query Service access management and performance optimization
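The dataset-profiling duty above (null rates, cardinality) reduces to a small computation; in production it would run as a scheduled Query Service SQL job, but the logic can be sketched locally in Python. The field names in the sample records are made up for illustration.

```python
# Sketch: per-field null rate and cardinality over a batch of records,
# the same statistics a scheduled Query Service job would compute.
from collections import defaultdict

def profile(records: list) -> dict:
    """Return {field: {"null_rate": float, "cardinality": int}}."""
    stats = defaultdict(lambda: {"nulls": 0, "values": set()})
    for rec in records:
        for field, value in rec.items():
            if value is None:
                stats[field]["nulls"] += 1
            else:
                stats[field]["values"].add(value)
    n = len(records)
    return {
        field: {"null_rate": s["nulls"] / n, "cardinality": len(s["values"])}
        for field, s in stats.items()
    }

records = [
    {"email": "a@example.com", "tier": None},
    {"email": "b@example.com", "tier": "gold"},
]
stats = profile(records)
print(stats["tier"])  # {'null_rate': 0.5, 'cardinality': 1}
```

Tracking these numbers run-over-run is what turns profiling into regression detection: a sudden jump in a field's null rate usually points at an upstream mapping change.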

5. Data Governance & Privacy Engineering
  • Apply DULE (Data Usage Labeling & Enforcement) labels to all datasets and fields in accordance with data governance policies
  • Configure data usage policies to restrict activation of sensitive data (PII, health, financial) to compliant destinations only
  • Implement consent enforcement logic within ingestion pipelines to honor opt-out and data deletion requests in real time
  • Support GDPR, CCPA, and HIPAA compliance requirements through Privacy Service API integrations for data access and deletion workflows
  • Maintain data lineage documentation from source systems through to AEP activation, enabling full auditability
  • Participate in data governance council reviews, providing engineering input on policy feasibility and implementation impact
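The in-pipeline consent enforcement mentioned above can be sketched as a filter applied before records reach the profile store. The consent field path (`consents.marketing.val`) is an illustrative assumption, not a specific schema's layout; the conservative default of treating a missing signal as opt-out is a design choice worth calling out.

```python
# Sketch: drop records whose consent signal is not an explicit opt-in
# before they are ingested. Field path is illustrative only.

def consent_ok(record: dict) -> bool:
    """True only for an explicit marketing opt-in ('y')."""
    val = record.get("consents", {}).get("marketing", {}).get("val", "n")
    return val == "y"

batch = [
    {"_id": "1", "consents": {"marketing": {"val": "y"}}},
    {"_id": "2", "consents": {"marketing": {"val": "n"}}},
    {"_id": "3"},  # no signal -> treated as opt-out (conservative default)
]
allowed = [r for r in batch if consent_ok(r)]
print([r["_id"] for r in allowed])  # ['1']
```

Filtering at ingestion complements (but does not replace) DULE policies at activation time: the labels stop non-compliant exports, while the filter keeps opted-out data out of the profile store altogether.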

6. Platform Operations & Performance
  • Manage AEP sandbox environments (development, staging, production), including configuration promotion and sandbox tooling
  • Monitor platform-level health metrics: profile store utilization, ingestion throughput, segment evaluation latency, and API rate limits
  • Optimize ingestion pipeline performance by tuning batch file sizes, parallelism, and retry logic
  • Participate in incident response for data pipeline outages, profile ingestion failures, and identity graph anomalies
  • Automate repetitive AEP configuration tasks using Adobe I/O Runtime, Experience Platform APIs, and scripting (Python / Node.js)
  • Maintain runbooks, data flow diagrams, and engineering documentation for all production pipelines and integrations
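The retry tuning mentioned above usually means capped exponential backoff; writing the schedule as a pure function keeps it testable independently of the pipeline runtime. The base delay and cap below are illustrative, not AEP-mandated values.

```python
# Sketch: capped exponential backoff schedule for ingestion retries.
# base/cap values are illustrative tuning parameters.

def backoff_schedule(max_retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    """Delay (seconds) before each retry attempt, doubling up to a cap."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

print(backoff_schedule(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

In practice a small random jitter is usually added to each delay so that many failing workers do not retry in lockstep against a rate-limited API.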

Required Qualifications

AEP Platform Skills

  • XDM schema design (ExperienceEvent, Profile, Lookup)
  • Batch & streaming ingestion (Source Connectors, HTTP API)
  • AEP Identity Service & IdentityMap configuration
  • Query Service (PostgreSQL / PSQL)
  • DULE labels, data usage policies, Privacy Service API
  • AEP Sandbox management & configuration promotion

Data Engineering Skills

  • 4+ years in a data engineering or ETL/ELT development role
  • Strong SQL proficiency (complex joins, window functions, CTEs)
  • Experience with streaming platforms (Kafka, Kinesis, or Pub/Sub)
  • Python or Node.js for pipeline automation and API scripting
  • Adobe Web SDK (Alloy.js) & Launch/Tags
  • Cloud data platform experience (Snowflake, BigQuery, Redshift)
  • REST API design, consumption, and debugging (Postman / curl)
  • Git-based version control and CI/CD pipeline familiarity

Data Governance & Privacy

  • Working knowledge of GDPR, CCPA data rights obligations
  • Experience with consent management platforms (OneTrust, TrustArc)
  • Data lineage documentation and schema registry governance
  • PII handling, data masking, and anonymization techniques

Collaboration & Communication

  • Ability to translate complex data concepts for non-technical stakeholders
  • Experience working in Agile/Scrum delivery teams
  • Strong documentation habits (runbooks, data dictionaries, ADRs)
  • Comfortable partnering across Engineering, Analytics, Marketing, and Legal

Preferred Qualifications

  • Adobe Certified Expert — Adobe Experience Platform or related AEP certification
  • Experience with Adobe Customer Journey Analytics (CJA) including connection setup, data views, and derived fields
  • Familiarity with AEP's Data Science Workspace or integration of ML models via AEP's Feature Pipeline
  • Hands-on experience with dbt (data build tool) for transformation layer management upstream of AEP
  • Knowledge of Apache Parquet, Delta Lake, or Iceberg table formats used in AEP's data lake layer
  • Exposure to Adobe Journey Optimizer data configuration: journey events, decision management datasets, and suppression lists
  • Prior experience supporting a migration from a legacy CDP (Segment, Tealium, or Salesforce CDP) to AEP
  • Familiarity with OpenAPI specs and experience contributing to internal developer portals or data catalogs

How Success Is Measured

  • Ingestion Pipeline Reliability: 99.9% uptime on all production ingestion pipelines; zero undetected data loss events
  • Schema Governance Coverage: 100% of production datasets mapped to governed XDM schemas with documented field definitions
  • Data Quality Score: 95%+ completeness and validity rate across critical profile attributes quarter-over-quarter
  • Identity Resolution Rate: 85%+ of anonymous profiles linked to known identities within 24 hours of authentication
  • Query Service Performance: all scheduled validation queries complete within defined SLA windows (no runaway queries)
  • Privacy Request Fulfillment: 100% of GDPR/CCPA deletion requests processed within regulatory deadlines via Privacy Service API
  • Incident Response Time: pipeline incidents acknowledged within 15 minutes; root-cause analysis delivered within 48 hours
  • Documentation Currency: all data flows, schemas, and runbooks reviewed and updated within 30 days of any production change

Job ID: 147252653