Description
We are seeking a highly skilled Python Data Engineer with expertise in data engineering, cloud platforms, and cost optimization. The ideal candidate will design and optimize ETL/ELT pipelines, work with cloud billing datasets, and integrate analytics into DevOps workflows.
Required Qualifications
- 4-8+ years of Python development experience.
- Strong experience with data engineering, especially ETL/ELT pipelines.
- Experience working with AWS, Azure, or GCP usage & billing datasets.
- Hands-on experience with pandas, NumPy, PySpark, or similar data processing frameworks.
- Familiarity with Kubernetes, cloud compute primitives, and distributed systems.
- Experience building dashboards or integrating with BI tools (e.g., Grafana, Datadog, custom internal tools).
- Strong understanding of cloud resource utilization metrics and cost drivers.
- Ability to work in a fast-paced, execution-driven environment.
Preferred Qualifications
- Prior FinOps or cloud cost optimization experience.
- Familiarity with Elasticsearch/OpenSearch, Redis, Cassandra/Keyspaces, Kafka, and other backend infra.
- Prior experience in a B2B SaaS environment with large-scale microservices.
- Experience integrating automation/analytics into DevOps workflows.
- Strong knowledge of time-series analysis and anomaly detection.
Key Responsibilities
- Design and maintain scalable ETL/ELT pipelines for cloud billing and usage data.
- Develop Python-based solutions for data processing and analytics.
- Collaborate with DevOps teams to integrate cost optimization tools into workflows.
- Build dashboards and reporting tools for cloud resource utilization and cost metrics.
- Ensure compliance with best practices for distributed systems and cloud infrastructure.