You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with containerization technologies such as Docker and Kubernetes is a plus.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.