Job Description
We are seeking a hands-on Data Engineer to build reliable, scalable Databricks pipelines. You will translate requirements into production ELT/ETL jobs, Delta Lake tables, and reusable components; refine reference patterns; optimize Spark for performance and cost; apply Unity Catalog best practices; orchestrate workflows; and deliver high-quality, well-documented code that empowers teams.

Core Qualifications:
- Bachelor's degree in CS/Engineering or a related field
- 5+ years in software/data engineering, including 2+ years with Databricks/Spark
- Strong Python, SQL, and PySpark skills
- Deep AWS/Azure knowledge
- Experience with Databricks Lakehouse, Workflows, SQL, and dbt
- Solid Lakehouse/data-warehousing architecture skills
- Experience supporting AI/ML pipelines
- Familiarity with Terraform/CloudFormation
- Strong analytical and troubleshooting abilities
- Excellent collaboration and communication skills
- Ability to lead discussions, provide strategic input, and mentor others