Dremio

Technical Support Engineer

Job Description

Company Description

Dremio is the creator of The Agentic Lakehouse, a cutting-edge data platform designed for AI agents with advanced capabilities such as federated data access and unstructured data processing. Dremio's platform enables organizations to transform ideas into actionable insights with unparalleled speed using its AI Semantic Layer for rich business context. Dremio is built on open standards and natively integrates with Apache Iceberg, Apache Polaris, and Apache Arrow. It autonomously manages operations, optimizes performance, and empowers data engineering teams to focus on driving business outcomes. Learn more about Dremio at www.dremio.com.

About the role

You will join a high-impact, high-visibility support team that powers Dremio deployments for enterprise customers running on Linux, Kubernetes, and public cloud (AWS/Azure/GCP). In this role, you'll troubleshoot complex production issues, work closely with engineering, and help shape how customers experience Dremio's SQL lakehouse platform.

What you'll be doing

  • Own and drive resolution of L2 technical support cases for Dremio in on-prem, cloud, and hybrid environments.
  • Troubleshoot Kubernetes-based deployments (pods, services, Helm, logging, storage) and coordinate with cluster-level teams when needed.
  • Support cloud deployments on AWS/Azure/GCP, including VPCs, IAM, object storage, and networking.
  • Analyze SQL queries and workloads, interpret query plans, and recommend tuning or configuration changes.
  • Collaborate with L3/L4 support, engineering, and product to escalate, reproduce, and resolve critical issues.
  • Participate in on-call rotations and help maintain strong SLA compliance.

What we're looking for

  • 2 to 4 years in technical support with hands-on Linux, Kubernetes, or cloud experience.
  • Strong Linux skills (command line, shell scripting, log analysis, system monitoring).
  • Experience with Kubernetes (pods, deployments, services, Helm, kubectl) in production.
  • Working knowledge of public cloud (AWS/Azure/GCP) compute, storage, networking, security basics.
  • Experience with data platforms and databases, along with a solid understanding of relational data concepts and BI tools such as Tableau and Power BI.
  • Comfortable working on high-impact teams that move fast and solve real-world customer problems.

Bonus points if you have

  • Scripting in Python or Shell for automation and tooling.
  • Exposure to monitoring/observability tools (Prometheus, Grafana, Observe, etc.).
  • Relevant certifications in Linux, Kubernetes, or cloud platforms (e.g., RHCE, CKA, AWS/Azure/GCP associate).

Job ID: 143286469