We are a fast-growing organization in the Artificial Intelligence and Machine Learning product engineering sector. We build and ship production-ready ML solutions that power analytics, intelligent automation, and predictive services for enterprise customers. This on-site role in India focuses on translating ML research into robust, scalable Python-based systems and production pipelines.
Role & Responsibilities
- Design, develop, and maintain end-to-end ML models and Python-based services from data ingestion through model training, evaluation, and inference.
- Implement data preprocessing and feature engineering pipelines, ensuring data quality and reproducibility for model development.
- Package and deploy models as containerized services (Docker) and integrate them with RESTful APIs for production consumption.
- Optimize model performance and inference latency via quantization, pruning, batching, and efficient serving strategies.
- Collaborate with product managers, data scientists, and SREs to define deployment requirements, monitoring, and rollback procedures.
- Establish CI/CD, unit tests, and observability for ML components; troubleshoot production incidents and iterate on model improvements.
Skills & Qualifications
Must-Have
- AI/ML
- LLM
- Core AI
- Azure
- MCP
- RAG
- Docker/Kubernetes
Benefits & Culture Highlights
- On-site, collaborative engineering environment with direct ownership of model-to-production projects.
- Opportunities for skill growth through hands-on ML deployment, performance optimization, and cross-functional mentorship.
- Competitive compensation and a focus on engineering excellence, observability, and continuous delivery.
Skills: Kubernetes, LLM, PyTorch, scikit-learn, Python, Docker, AWS, Core AI, MCP, AI/ML, TensorFlow, ML