
Uplers

Senior DevOps Engineer

6-8 Years
  • Posted 19 hours ago

Job Description

Experience: 6+ years

Salary: Confidential (based on experience)

Expected Notice Period: 15 Days

Shift: (GMT+05:30) Asia/Kolkata (IST)

Opportunity Type: Remote

Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: OG)

(Note: This is a requirement for one of Uplers' clients, OG.)

What do you need for this opportunity?

Must have skills required:

AWS certifications, Multi-cloud Exposure, EC2, GitHub Actions, Helm Charts, Kafka/Kinesis, Prometheus/Grafana/Datadog, Cloud Server (Google / AWS), Jenkins, Python

OG is Looking for:

Senior DevOps Engineer

Location: Remote, anywhere in India.

Employment Type: Full-time

Compensation: Competitive, aligned to experience and market benchmarks

Role Overview

We are hiring a Senior DevOps Engineer to own and evolve the cloud infrastructure backbone here. You will architect resilient AWS-based systems, drive full automation, and partner with product and data teams to ensure our platform scales reliably. Meaningful Python backend proficiency is expected as a secondary skill to support service-layer work when needed.

Key Responsibilities

Primary: Cloud Infrastructure & DevOps (AWS)

  • Architect and maintain scalable AWS infrastructure (EC2, EKS, RDS/Aurora, Lambda, S3, CloudFront, Route 53) across all environments.
  • Design and own CI/CD pipelines (GitHub Actions, Jenkins, or AWS CodePipeline) for fast, safe, and repeatable deployments.
  • Implement infrastructure-as-code using Terraform and/or AWS CDK; enforce IaC best practices across the engineering org.
  • Lead container orchestration via Kubernetes (EKS) and Docker: autoscaling, resource optimisation, cost-aware scheduling, and workload isolation.
  • Own the observability stack (CloudWatch, Datadog, Prometheus/Grafana, OpenTelemetry); lead incident response and blameless post-mortems.
  • Establish cloud security posture: IAM, VPC design, Secrets Manager/Vault, encryption, and SOC2/ISO 27001 compliance alignment.
  • Drive FinOps initiatives: right-sizing, Savings Plans, and cost-per-feature reporting for leadership.

Secondary: Python Backend Development


  • Build and maintain backend microservices and APIs in Python (FastAPI or Django REST Framework).
  • Contribute to data pipeline design using Airflow/Prefect and AWS SQS/Kafka for batch and streaming workloads.
  • Support data store decisions across PostgreSQL/RDS, Redis, and S3; advise on indexing and partitioning for performance.
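As a rough sketch of the service-layer work described above, the following shows an async Python handler with type hints of the kind this role would build. The event shape and handler name are hypothetical illustrations, not from the client's codebase:

```python
import asyncio
import json
from dataclasses import dataclass


# Hypothetical message shape for a queue-backed microservice;
# field names are illustrative only.
@dataclass
class Event:
    event_id: str
    payload: dict


async def handle_event(raw: str) -> dict:
    """Parse a raw JSON message and return a response body."""
    data = json.loads(raw)
    event = Event(event_id=data["event_id"], payload=data.get("payload", {}))
    # Simulate non-blocking I/O (e.g., a Redis or PostgreSQL call).
    await asyncio.sleep(0)
    return {"status": "processed", "event_id": event.event_id}


result = asyncio.run(handle_event('{"event_id": "e-1", "payload": {"k": 1}}'))
```

In a real service this handler would sit behind a FastAPI route or an SQS/Kafka consumer loop rather than a direct `asyncio.run` call.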

Required Qualifications


  • 6+ years of experience building infrastructure and backend systems in production is mandatory (startups or high-growth product teams preferred).

AWS & Infrastructure (Primary)


  • Deep hands-on with EC2, EKS, RDS/Aurora, Lambda, S3, CloudFront, API Gateway, Route 53.
  • IAM, VPC architecture, KMS, Secrets Manager, Security Groups.
  • Terraform (modules, remote state, workspaces) and/or AWS CDK / CloudFormation.
  • CI/CD: GitHub Actions, Jenkins, or AWS CodePipeline; Docker, Kubernetes (EKS), Helm charts, GitOps (ArgoCD/Flux).
  • Observability: CloudWatch, Datadog, Prometheus, Grafana, OpenTelemetry / X-Ray.
  • FinOps: Cost Explorer, Savings Plans, right-sizing, and cost-per-feature analysis.

Python Backend (Secondary)


  • Python 3.10+, async/await, type hints; FastAPI or Django REST Framework.
  • PostgreSQL/RDS schema design, Redis caching, Airflow/Prefect pipeline orchestration.
  • REST & GraphQL API design, OAuth2/JWT, OpenAPI documentation.
  • Pytest: unit, integration, and property-based test suites.
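To illustrate the testing style named above, here is a toy pytest-style module pairing a unit test of known values with a simple property-style check. The function under test is hypothetical, chosen only to keep the example self-contained:

```python
def partition_key(user_id: int, partitions: int = 16) -> int:
    """Map a user id onto a fixed number of partitions."""
    return user_id % partitions


def test_partition_key_known_values():
    # Unit test: spot-check specific inputs against expected outputs.
    assert partition_key(0) == 0
    assert partition_key(17) == 1


def test_partition_key_in_range():
    # Property-style check: the result always falls within
    # [0, partitions), regardless of the input id.
    for uid in range(1000):
        assert 0 <= partition_key(uid) < 16
```

Under pytest, plain functions named `test_*` with bare `assert` statements are collected and run automatically; dedicated property-based suites would typically use a generator library instead of a fixed loop.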

Nice to Have


  • Multi-cloud exposure (GCP or Azure alongside AWS).
  • AWS certifications: Solutions Architect - Professional or DevOps Engineer - Professional.
  • Kafka, Kinesis, or other streaming infrastructure experience.
  • ML infrastructure or model-serving pipelines (SageMaker, MLflow).
  • Startup or high-growth product environment background preferred.

Tooling & Stack (Illustrative)


  • Runtime: Python / TypeScript / Go
  • Data: Postgres/BigQuery + object storage (S3/GCS)
  • Pipelines: Airflow/Prefect, Kafka/PubSub
  • Infra: AWS/GCP, Docker, Kubernetes, Terraform
  • Observability: OpenTelemetry, Prometheus/Grafana, ELK/Cloud Logging
  • Collab: GitHub, Linear/Jira, Notion, Looker/Metabase

Working Model


  • Hybrid-remote within India or the UAE; limited periodic in-person collaboration.
  • Startup velocity with pragmatic processes; bias toward shipping, measurement, and iteration.

Equal Opportunity


We are an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

How to apply for this opportunity

  • Step 1: Click on Apply, then register or log in on our portal.
  • Step 2: Complete the screening form and upload your updated resume.
  • Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:


Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement.

(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Job ID: 145806169
