We are seeking a Senior Platform Engineer to design, build, and operate a scalable, secure, and highly
available data and cloud platform. The role involves working across infrastructure, data platforms,
CI/CD, security, and compliance, supporting enterprise-grade workloads in a production environment.
You will play a key role in platform reliability, automation, observability, and governance while working closely with data engineering, security, and operations teams.
This role is ideal for engineers who are hands-on problem solvers, passionate about distributed data systems, and excited to work on modern data engineering practices.
Key Responsibilities
Platform Engineering
- Design, build, and maintain cloud-native data platforms
- Manage and automate infrastructure using Linux, Python, and Shell scripting
- Build and maintain Docker images and containerized workloads
- Deploy and operate workloads on Kubernetes (AKS or equivalent)
- Support and optimize data platforms built on Apache Spark and Apache Iceberg
- Work with Datahub, Trino, and Ranger for metadata, query, and governance use cases
- Implement and maintain Azure CI/CD pipelines
- Ensure secure platform access using TLS/SSL, OAuth, and HashiCorp Vault
- Configure and manage Azure Cloud services (Blob Storage, VMs, VNETs, AKS)
- Implement Nginx, Ingress Controllers, and firewall rules
- Monitor platforms using Prometheus and Grafana
- Provide production support, troubleshooting, and performance tuning
- Ensure compliance with enterprise change and release processes
Required Skills & Experience
- 8+ years of experience in platform or infrastructure engineering
- Strong experience with Linux administration
- Proficiency in Python and Shell scripting
- Hands-on experience with Docker and Kubernetes
- Strong knowledge of Apache Spark (batch and streaming)
- Hands-on experience with data lakehouse formats (Iceberg, Delta Lake, Hudi)
- Knowledge of Azure Cloud (Blob Storage, VMs, VNETs) and AKS
- Experience with Datahub, Trino, and Ranger
- Java development or troubleshooting experience
- Experience working with enterprise open-source platforms
- CI/CD experience using Azure DevOps
- Experience with Prometheus and Grafana
- Understanding of networking and security concepts
- Strong debugging and performance optimization skills
A similar position is also open in Bangalore.