
Pyramid Consulting, Inc.

Machine Learning Engineer

6-8 Years
  • Posted 6 hours ago

Job Description

Job Title: MLOps Engineer (6–8 Years Experience)

Location: Remote (India)

Project Type: Permanent Role

Working Hours: 8 AM – 5 PM UK Time

Engagement: Migration of data science models over the next few months, followed by involvement in data science model creation, operations, and enhancements

Role Overview

The client currently operates data workflows in GCP and is in the process of migrating only the Data Science workloads to Azure Databricks. Input data originates in GCP (and will continue to), data science workflows will be executed in Azure Databricks (end state), and model outputs will be written back to GCP (as they are today).

The MLOps Engineer will play a critical role in supporting model migration, building and optimizing MLOps workflows, and eventually contributing to the broader data movement automation and self‑serve capabilities across cloud environments.

This role requires someone who deeply understands Databricks internals, PySpark, CI/CD orchestration, and ML model operationalization, along with working knowledge of GCP and Azure.

Key Responsibilities

1. Model Migration & Optimization

  • Support migration of existing ML models from GCP to Azure Databricks. Understand existing model architecture and replicate/optimize it in Azure Databricks.
  • Work closely with the Data Science team to operationalize migrated models and further optimize them to reduce compute cost and increase test coverage.

2. MLOps & Workflow Orchestration

  • Set up robust CI/CD pipelines using GitHub Actions for ML model deployments in Databricks.
  • Implement and manage MLflow for tracking, versioning, and managing model lifecycle.
  • Build efficient and scalable Data & ML pipelines using Databricks + PySpark.
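To illustrate the first responsibility above, the kind of CI/CD pipeline described could look like the following GitHub Actions workflow. This is only a minimal sketch: the use of Databricks Asset Bundles, the `prod` target name, and the `DATABRICKS_HOST`/`DATABRICKS_TOKEN` secret names are assumptions, not details from the posting.

```yaml
# Hypothetical workflow: deploy an ML pipeline to Databricks on every push to main.
name: deploy-model-pipeline

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository containing the bundle definition
      - uses: actions/checkout@v4

      # Install the Databricks CLI (official setup action)
      - uses: databricks/setup-cli@main

      # Deploy the asset bundle to the target workspace.
      # Secret names below are placeholders for this sketch.
      - name: Deploy bundle
        run: databricks bundle deploy --target prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
```

In practice, a workflow like this would typically be extended with test and lint jobs gating the deploy step.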

3. Cloud & Data Movement Support

  • Collaborate with the Data Engineering team on data movement from GCP to Azure Databricks. In the future, take over parts of cross‑cloud data movement from the Data Engineering team and build self‑serve automation for data flows.
  • Build pipelines where outputs from Azure Databricks must be transferred back to GCP.

4. Architecture & Best Practices

  • Provide architectural inputs and workflow optimization guidance during and after migration.
  • Ensure scalable, cost‑efficient, and reliable model execution in Databricks.
  • Improve testing, monitoring, and performance tuning for migrated and future ML models.

Required Experience

  • 6–8 years of experience in Data Engineering, ML Engineering, or MLOps roles.

Must‑Have Skills

  • Strong hands‑on expertise in Databricks and deep understanding of how it works under the hood.
  • Proficiency in PySpark: writing scalable jobs, understanding execution plans, and optimization techniques.
  • Experience building CI/CD pipelines using GitHub Actions.
  • Experience with MLflow for tracking and operationalizing ML models.
  • Knowledge of integrating workflows between GCP and Azure ecosystems.
  • Strong debugging, optimization, and cost‑efficiency mindset.

Good to Have

  • Experience with cross‑cloud data movement patterns.
  • Familiarity with DS model structures and ability to collaborate closely with DS teams.
  • Exposure to model monitoring and alerts in a distributed/cloud environment.


Job ID: 145468471