- Design and develop scalable data pipelines to migrate user knowledge objects (dashboards, saved searches, alerts) from Splunk to ClickHouse and Grafana.
- Implement data ingestion, transformation, and validation processes to ensure data integrity and performance.
- Collaborate with cross-functional teams to automate and optimize data migration workflows.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Work closely with observability engineers and analysts to understand data requirements and deliver solutions.
- Contribute to the continuous improvement of the observability stack and migration automation tools.
Required Skills and Qualifications
- Proven experience as a Big Data Developer or Engineer working with large-scale data platforms.
- Strong expertise with ClickHouse or other columnar databases, including query optimization and schema design.
- Hands-on experience with Splunk data structures, dashboards, and reports.
- Proficiency in data pipeline development using frameworks such as Apache Spark or Kafka.
- Strong programming skills in Python, Java, or Scala.
- Experience with data migration automation and scripting.
- Familiarity with Grafana for data visualization and monitoring.
- Understanding of observability concepts and monitoring systems.
Would Be a Plus
- Experience with Bosun or other alerting platforms.
- Knowledge of cloud-based big data services and infrastructure as code.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience working in agile, pod-based teams.