
BIG Language Solutions

Senior Python Backend Engineer (ML/AI)

5-7 Years
  • Posted a month ago

Job Description


Role: Senior Python Backend Engineer (ML/AI)

Location: Hybrid / Remote

Team: AI & Innovation

Reports to: VP of Artificial Intelligence

About BIG Language Solutions

BIG Language Solutions is a global Language Service Provider (LSP) delivering world-class translation and interpretation services for clients across industries. We combine human linguistic expertise with cutting-edge AI to make multilingual communication faster, more accurate, and more accessible. Our innovation spans both written and spoken language solutions, helping organizations break barriers in real time and at scale.

Job Summary

We are seeking a Senior Python Backend Engineer (ML/AI) to design, build, and scale backend systems that power machine learning and AI-driven products. This role focuses on developing high-performance APIs, data pipelines, and services that support model training, inference, and real-time AI applications.

Experience: 5+ years of coding experience

Must-Have Skills

Advanced Python Expertise
  • Strong command of Python internals, async programming, concurrency, and performance optimization
  • Experience writing production-grade, testable, and maintainable code

Backend & API Development
  • Proven experience building and maintaining high-performance REST/gRPC APIs
  • Strong hands-on experience with FastAPI (dependency injection, async endpoints, background tasks, middleware)
  • API security, authentication/authorization, rate limiting, and versioning

ML/AI Infrastructure Experience
  • Hands-on experience supporting ML/AI workloads in production
  • Strong understanding of model serving, inference pipelines, and latency optimization
  • Experience integrating ML models into backend services

NVIDIA Triton Inference Server
  • Practical experience deploying and managing models using Triton
  • Knowledge of model formats (ONNX, TensorRT, PyTorch, TensorFlow)
  • Experience with batching, concurrency, and performance tuning for inference workloads

Systems & Performance
  • Solid understanding of CPU/GPU utilization, memory management, and profiling
  • Experience debugging performance bottlenecks in distributed systems

Databases & Storage
  • Strong experience with SQL and/or NoSQL databases
  • Familiarity with vector databases and feature stores is a plus

Cloud & DevOps
  • Experience deploying backend services in cloud environments (AWS, GCP, or Azure)
  • Familiarity with Docker and containerized deployments
  • Experience with CI/CD pipelines and production monitoring/logging
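To give candidates a concrete feel for the async and batching work described above, here is a minimal, standard-library-only sketch of request micro-batching for inference. All names here (MicroBatcher, fake_infer) are illustrative; fake_infer stands in for a real model call such as a Triton request:

```python
import asyncio

async def fake_infer(batch):
    # Stand-in for a real batched model call; just doubles each input.
    await asyncio.sleep(0.01)  # simulate inference latency
    return [x * 2 for x in batch]

class MicroBatcher:
    """Coalesces concurrent requests into batched inference calls."""

    def __init__(self, max_batch=8, max_wait=0.005):
        self.max_batch = max_batch   # flush when this many requests are queued
        self.max_wait = max_wait     # or after this many seconds, whichever first
        self.queue: asyncio.Queue = asyncio.Queue()
        self._worker = None

    async def start(self):
        self._worker = asyncio.create_task(self._run())

    async def _run(self):
        while True:
            # Block until at least one request arrives, then collect more
            # until the batch is full or the wait deadline passes.
            item, fut = await self.queue.get()
            batch, futs = [item], [fut]
            deadline = asyncio.get_running_loop().time() + self.max_wait
            while len(batch) < self.max_batch:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    item, fut = await asyncio.wait_for(self.queue.get(), timeout)
                    batch.append(item)
                    futs.append(fut)
                except asyncio.TimeoutError:
                    break
            results = await fake_infer(batch)
            for f, r in zip(futs, results):
                f.set_result(r)

    async def infer(self, x):
        # Each caller awaits a future resolved by the batching worker.
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((x, fut))
        return await fut

async def main():
    batcher = MicroBatcher()
    await batcher.start()
    # Ten concurrent "requests" are coalesced into batched model calls.
    return await asyncio.gather(*(batcher.infer(i) for i in range(10)))

print(asyncio.run(main()))  # prints [0, 2, 4, ..., 18]
```

In production this pattern typically lives behind a FastAPI endpoint, with the batcher started in the app's lifespan hook and the real model served via Triton.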

Nice-to-Have Skills

  • Experience with Kubernetes for scaling ML services
  • Knowledge of TensorRT, CUDA basics, or GPU acceleration
  • Experience with message queues or streaming systems (Kafka, Redis, RabbitMQ)
  • Exposure to MLOps workflows and model lifecycle management

What We're Looking For

  • Ability to work independently on complex backend and ML infrastructure problems
  • Strong debugging and problem-solving mindset
  • Comfortable collaborating with ML engineers, researchers, and product teams
  • Experience taking systems from prototype to production at scale

Think global. Think BIG.

Visit us: https://biglanguage.com

LinkedIn: https://www.linkedin.com/company/big-language-solutions/mycompany/

More Info


Job ID: 140743425