
About the Role
As a Data Engineer for O2C Phase‑1, you will build robust batch pipelines into a managed PostgreSQL data layer to ingest from CUBE/RegBook, MetricStream and the LSEG Entity master. You will implement high‑quality, auditable data flows with strong contracts, lineage and idempotency.
You will collaborate with the Data Architect, the Integrations Engineer and the Reporting team to deliver reliable datasets and views that power persona‑based dashboards.
Key Responsibilities
1. Pipeline Engineering
· Build and operate batch ingestion jobs (files/APIs) with retries, alerting and replay.
· Implement source‑to‑target mappings and data quality checks, and handle schema evolution safely.
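A minimal sketch of the idempotency pattern behind safe retries and replay: each batch is upserted on a natural key, so re-running a failed or duplicated job converges to the same final state. The table and column names (`entities`, `source_id`) are hypothetical, and SQLite stands in for PostgreSQL here since both support the same `ON CONFLICT ... DO UPDATE` syntax.

```python
import sqlite3

def ingest_batch(conn, rows):
    """Idempotently upsert a batch keyed on source_id.

    Replaying the same batch leaves the table unchanged,
    so retries and re-runs are safe (no duplicate rows).
    """
    conn.executemany(
        """
        INSERT INTO entities (source_id, name, updated_at)
        VALUES (:source_id, :name, :updated_at)
        ON CONFLICT (source_id) DO UPDATE SET
            name = excluded.name,
            updated_at = excluded.updated_at
        """,
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE entities (source_id TEXT PRIMARY KEY, name TEXT, updated_at TEXT)"
)
batch = [{"source_id": "E1", "name": "Acme Ltd", "updated_at": "2024-01-01"}]
ingest_batch(conn, batch)
ingest_batch(conn, batch)  # replay: same end state, no duplicates
```

Pairing this upsert with a per-batch watermark or run ID gives the alerting and replay tooling a clean unit to retry.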
2. PostgreSQL Data Layer
· Create and optimize tables, indexes and views for analytics and application use.
· Contribute to PDM standards, partitioning, retention and performance baselines.
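To illustrate the partitioning and retention work, here is a small helper that generates DDL for one monthly range partition of a PostgreSQL table; retention then becomes dropping or detaching old partitions rather than bulk-deleting rows. The parent table name `o2c_events` is a hypothetical example, not from the posting.

```python
from datetime import date

def monthly_partition_ddl(parent: str, month: date) -> str:
    """Build CREATE TABLE DDL for one monthly range partition
    of a PostgreSQL parent table (parent name is an assumption)."""
    # First day of the following month (exclusive upper bound).
    nxt = date(month.year + (month.month // 12), month.month % 12 + 1, 1)
    name = f"{parent}_{month:%Y_%m}"
    return (
        f"CREATE TABLE IF NOT EXISTS {name} PARTITION OF {parent} "
        f"FOR VALUES FROM ('{month:%Y-%m-01}') TO ('{nxt:%Y-%m-%d}')"
    )

print(monthly_partition_ddl("o2c_events", date(2024, 12, 1)))
```

A scheduled job that pre-creates the next few partitions (and drops those past the retention window) keeps the data layer's performance baseline predictable as volumes grow.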
3. Lineage & Controls
· Capture lineage and provenance; ensure auditability of changes and versioning.
· Handle PII/sensitive fields per policy; follow least‑privilege patterns.
4. Collaboration
· Work with Integrations to stabilise upstream feeds; support Reporting on semantic models.
· Support QA with data fixtures and automated validation for UAT.
Preferred Skills & Experience
· 5–10 years in data engineering with strong SQL and ETL/ELT experience.
· Hands‑on with PostgreSQL performance tuning and schema design.
· Experience integrating with enterprise systems via batch/APIs; strong understanding of DQ and idempotency.
· Familiarity with AWS or Azure data services and CI/CD for data pipelines is a plus.
Job ID: 145781477