THG Ingenuity

Senior Machine Learning Engineer


Job Description

About THG Ingenuity

THG Ingenuity is a fully integrated digital commerce ecosystem, designed to power brands without limits. Our global end-to-end tech platform comprises three products: THG Commerce, THG Studios, and THG Fulfilment. Each is a single, unified solution that overcomes operational challenges and takes brands direct-to-consumer. Our client portfolio includes globally recognised brands such as Coca-Cola, Nestlé, Elemis, Homebase, and Procter & Gamble.

About the Team

You will join one of the largest AI teams in retail and e‑commerce, operating at global scale across the full retail stack — from on‑site product discovery and personalisation, through to demand forecasting, pricing, fraud prevention, and warehouse fulfilment. We work across every modality (tabular, text, image, video) and combine classical techniques with state‑of‑the‑art deep learning, NLP, and generative AI to ship solutions that are sustainable, optimised, and commercially valuable.

We are an AI‑first team. That means we don't just build AI products — we build with AI. Coding agents are a core part of how our engineers work, and we expect everyone on the team to use them well: to move faster, ship higher‑quality code, and spend more time on the problems that genuinely require human judgement.

The Role

As a Senior Machine Learning Engineer, you will own the end‑to‑end delivery of ML products — from problem framing and research through to production deployment, monitoring, and continuous improvement. You will split your time between shipping new, state‑of‑the‑art AI capabilities and paying down ML technical debt by modernising and optimising existing platform solutions.

This is a hands‑on, technical role. You will build and tune models, ship production‑grade code, wrangle and analyse data, and partner with Data Scientists, Engineers, Product Managers, and commercial stakeholders to turn ML into measurable business impact. You will use coding agents fluently as part of your everyday workflow, and you will provide technical guidance across the AI & Data function on solution design, algorithm choice, and experimentation.

Key Responsibilities:

  • Deliver end‑to‑end. Lead ML projects from scoping and data discovery through to deployment, monitoring, and iteration — aligned with business objectives, timelines, and cost constraints.
  • Build production‑grade ML systems. Design and deploy batch jobs, real‑time APIs, and data and feature pipelines using modern MLOps tooling (CI/CD, Docker, Kubernetes, model registries, feature stores).
  • Write production‑quality code. Ship clean, reliable, fault‑tolerant, well‑tested Python and SQL, and champion best practices including code review, pair programming, TDD, and internal knowledge‑sharing.
  • Build with AI. Use coding agents and AI‑assisted development tooling as a default part of your workflow to raise your own productivity and code quality, and help the team adopt these practices responsibly.
  • Choose the right ML approach. Apply techniques across the ML spectrum — classical models (Random Forest, Gradient Boosting, time‑series), modern deep learning, NLP, LLMs, RAG, and agentic systems — to problems in recommendations, search, forecasting, fraud, pricing, and supply chain.
  • Own the ML lifecycle. Drive feature engineering, experimentation, offline and online evaluation, A/B testing, model monitoring, drift detection, and retraining strategies.
  • Tackle technical debt. Modernise legacy models, reduce inference latency and cost, and improve reproducibility, observability, and governance across the ML estate.
  • Partner with stakeholders. Translate ambiguous business problems into well‑defined ML problems, and communicate results, trade‑offs, and risks in clear, non‑technical language.
  • Set technical direction. Contribute to coding standards and the ML platform roadmap, and mentor junior engineers and data scientists.

What We're Looking For

Essential

  • MSc or PhD in Computer Science, Machine Learning, Statistics, Mathematics, Physics, or a related quantitative discipline — or equivalent practical experience.
  • Proven track record of delivering ML products into production at scale, with a clear understanding of both model evaluation and productionisation challenges.
  • Strong foundations in data structures, algorithms, data modelling, and software architecture.
  • Strong grasp of the end‑to‑end ML lifecycle: data discovery, feature engineering, model development, evaluation, deployment, and monitoring.
  • Advanced Python skills and deep familiarity with the ML / Data Science ecosystem (Jupyter, Pandas, NumPy, Scikit‑learn, Matplotlib/Seaborn), plus fluent SQL for analytical work.
  • AI‑first mindset and hands‑on experience with coding agents (Claude Code, Cursor, GitHub Copilot, Windsurf, Cline, or similar) as part of your daily workflow. You should be able to describe, with concrete examples, how you use agents to plan, write, test, refactor, and review code — and how you manage their limitations.
  • Hands‑on experience with at least one major cloud platform — Google Cloud Platform (Vertex AI, BigQuery, Dataflow, Cloud Run/Functions, GKE) is strongly preferred.
  • Practical experience with containerisation (Docker), orchestration (Kubernetes), and CI/CD pipelines for ML workflows.
  • Solid applied statistics: hypothesis testing, sampling, experimentation and A/B testing, anomaly detection, predictive modelling, and regression analysis.
  • Excellent communication and stakeholder‑management skills, with the ability to run multiple projects in parallel and guide cross‑functional teams through ML delivery.

Desirable

  • Experience with agent frameworks such as Google Agent Development Kit (ADK) and Vertex AI Agent Builder.
  • Low‑level deep learning experience with PyTorch or TensorFlow for CV and multi‑modal use cases.
  • Distributed data processing (Spark, Beam, Dataflow) and/or streaming (Kafka, Pub/Sub).
  • Feature store and experiment‑tracking experience (Feast, Vertex Feature Store, Tecton, MLflow, Weights & Biases).
  • Exposure to retail or e‑commerce ML use cases: recommendations, search and ranking, personalisation, demand forecasting, pricing, fraud, or warehouse optimisation.
  • Published work, open‑source contributions, Kaggle results, or a strong public portfolio.

Job ID: 146880843
