Job Title:
Senior MLOps Engineer (Databricks | MLflow | Cloud | PySpark)
Job Function:
MLOps | Machine Learning Engineering | Databricks Engineering | Cloud & DevOps | CI/CD Automation
Work Mode & Location:
Location: Ghansoli, Navi Mumbai (Preferred)
Remote flexibility available
Experience:
4–6 years
Budget:
Up to 1 Lakh Per Month (1 LPM)
Job Summary:
We are seeking an experienced Databricks MLOps Engineer to join our Data and AI Engineering team. The ideal candidate should have strong expertise in Databricks, MLflow, cloud platforms, and end-to-end MLOps automation. You will collaborate with data scientists, ML engineers, and business stakeholders to build scalable and reliable production ML pipelines.
Key Responsibilities:
1. Databricks Platform Management
- Work with Databricks Workspaces, Jobs, Workflows, Unity Catalog, Delta Lake, and MLflow.
- Optimize Databricks clusters, compute usage, permissions, and workspace configuration.
2. End-to-End MLOps Lifecycle
- Manage model training, versioning, deployment, monitoring, and retraining processes.
- Implement deployment strategies including A/B testing, blue-green, and canary releases.
3. Programming & ML Development
- Develop ML and data pipelines using Python (pandas, scikit-learn, PyTorch/TensorFlow), PySpark, and SQL.
- Maintain code quality through Git, code reviews, and automated tests.
4. Cloud & Infrastructure
- Deploy and maintain ML infrastructure across AWS/Azure/GCP.
- Implement IaC using Terraform.
- Build and manage containerized ML workloads using Docker or Kubernetes.
5. CI/CD & Automation
- Create CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate data validation, feature generation, training, and deployment pipelines.
6. Monitoring & Observability
- Configure monitoring for data quality, model drift, inference performance, and model health.
- Integrate model explainability using SHAP or LIME.
7. Feature Engineering & Optimization
- Build and manage feature definitions in Databricks Feature Store.
- Run distributed training and hyperparameter tuning using frameworks such as Optuna or Ray Tune.
8. Collaboration & Documentation
- Work closely with data science (DS), data engineering (DE), DevOps, and business teams.
- Create detailed documentation for pipelines, processes, and systems.
- Mentor junior engineers and support the adoption of best practices.
Must-Have Skills:
- Databricks (Core)
- MLflow
- End-to-End MLOps
- Python (pandas, scikit-learn, PyTorch/TensorFlow)
- PySpark
- AWS (SageMaker experience preferred)
- Docker / Kubernetes
- CI/CD (Jenkins, GitHub Actions, GitLab CI)
Location Preference:
Ghansoli, Navi Mumbai candidates preferred.
Share your CV at [Confidential Information]