
Azure Databricks Engineer - Python/Spark

6-10 Years
  • Posted a month ago

Job Description

Role Responsibilities

  • Design and implement scalable data pipelines leveraging Azure Databricks.
  • Develop efficient ETL processes to extract, transform, and load data from various sources.
  • Collaborate closely with data scientists and analysts to understand and refine data requirements.
  • Optimize Apache Spark jobs for improved performance and resource efficiency.
  • Monitor, troubleshoot, and maintain production workflows and data jobs.
  • Implement data quality checks and validation processes to ensure data accuracy and reliability.
  • Create and maintain comprehensive technical documentation covering data architecture and pipeline design.
  • Conduct code reviews to ensure adherence to best practices and maintain high code quality.
  • Integrate data from diverse sources, including databases, APIs, and third-party services.
  • Utilize SQL and Python extensively for data manipulation, analysis, and automation.
  • Collaborate with DevOps teams to deploy, automate, and maintain data engineering solutions.
  • Stay current with advancements in Azure Databricks and related big data technologies.
  • Support data visualization efforts to drive actionable business insights using tools like Power BI or Tableau.
  • Provide training, mentorship, and support to team members on data tools and best practices.
  • Participate in cross-functional projects to enhance data sharing, accessibility, and governance.
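The pipeline and data-quality responsibilities above can be sketched in miniature. The example below is an illustrative stand-in in plain Python rather than actual Databricks/PySpark code; the function names (`extract`, `transform`, `validate`) and the sample records are hypothetical.

```python
# Minimal ETL sketch with a data quality check, in plain Python.
# On Azure Databricks this logic would typically run as a PySpark job;
# the names here are illustrative, not from any real codebase.

def extract(rows):
    """Extract: parse raw CSV-style records into dicts."""
    return [dict(zip(("id", "amount"), r.split(","))) for r in rows]

def transform(records):
    """Transform: cast string fields to their proper types."""
    return [{"id": int(r["id"]), "amount": float(r["amount"])} for r in records]

def validate(records):
    """Data quality check: reject negative amounts and duplicate ids."""
    seen = set()
    good, bad = [], []
    for rec in records:
        if rec["amount"] < 0 or rec["id"] in seen:
            bad.append(rec)
        else:
            seen.add(rec["id"])
            good.append(rec)
    return good, bad

raw = ["1,10.5", "2,-3.0", "1,7.25"]
good, bad = validate(transform(extract(raw)))
print(good)  # [{'id': 1, 'amount': 10.5}]
print(bad)   # two rejects: one negative amount, one duplicate id
```

In a real Databricks deployment, the same extract/transform/validate split would usually be expressed with Spark DataFrame operations, with the rejected records routed to a quarantine table for review.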

Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Minimum of 6 years of experience in data engineering or a related domain.
  • Strong expertise in Azure Databricks and data lake architectures.
  • Proficient in SQL, Python, and Apache Spark for large-scale data processing.
  • Solid understanding of data warehousing concepts and ETL frameworks.
  • Experience with cloud platforms, preferably Azure, and familiarity with AWS or Google Cloud.
  • Excellent problem-solving, analytical, and troubleshooting skills.
  • Ability to collaborate effectively within diverse, cross-functional teams.
  • Experience with data visualization tools such as Power BI or Tableau.
  • Strong communication skills, capable of explaining technical concepts to non-technical stakeholders.
  • Knowledge of data governance principles and data quality best practices.
  • Hands-on experience with big data technologies and distributed computing frameworks.
  • Relevant Azure certifications are a plus.
  • Adaptable to evolving technologies and dynamic business requirements.

More Info

Open to candidates from: Indian

About Company

Zorba is a 3.5-year-old company with a broad range of offerings that help organizations champion their AI agenda, from ad-hoc consulting and training through to end-to-end program management of data initiatives for its client partners.

Job ID: 123279199