
CURATAL

Big Data Engineer - Python/SQL/ETL

  • Posted a day ago
  • Be among the first 10 applicants

Job Description

Key Responsibilities

  • Design, develop, and support robust ETL pipelines to extract, transform, and load data into analytical products that drive strategic organizational goals.
  • Develop and maintain data workflows on platforms like Databricks and Apache Spark using Python and Scala.
  • Create and support data visualizations using tools such as MicroStrategy, Power BI, or Tableau, with a preference for MicroStrategy.
  • Implement streaming data solutions utilizing frameworks like Kafka for real-time data processing.
  • Collaborate with cross-functional teams to gather requirements, design solutions, and ensure smooth data operations.
  • Manage data storage and processing in cloud environments, primarily using AWS cloud services.
  • Use knowledge of data warehousing, data modeling, and SQL to optimize data flow and accessibility.
  • Develop scripts and automation tools using Linux shell scripting and other languages as needed.
  • Ensure continuous integration and continuous delivery (CI/CD) practices are followed for data pipeline deployments using containerization and orchestration technologies.
  • Troubleshoot production issues, optimize system performance, and ensure data accuracy and integrity.
  • Work effectively within Agile development teams and contribute to sprint planning, reviews, and other Agile ceremonies.

Skills & Experience
  • 7+ years of experience in technology with a focus on application development and production support.
  • At least 5 years of experience in developing ETL pipelines and data engineering workflows.
  • At least 3 years of hands-on experience in ETL development and support using Python/Scala on Databricks/Spark platforms.
  • Strong experience with data visualization tools, preferably MicroStrategy, Power BI, or Tableau.
  • Proficient in Python, Apache Spark, Hive, and SQL.
  • Solid understanding of data warehousing concepts, data modeling techniques, and analytics tools.
  • Experience working with streaming data frameworks such as Kafka.
  • Working knowledge of Core Java, Linux, SQL, and at least one scripting language.
  • Experience with relational databases, preferably Oracle.
  • Hands-on experience with AWS cloud platform services related to data engineering.
  • Familiarity with CI/CD pipelines, containerization, and orchestration tools (e.g., Docker, Kubernetes).
  • Exposure to Agile development methodologies.
  • Strong interpersonal, communication, and collaboration skills.
  • Ability and eagerness to quickly learn and adapt to new technologies.

Qualifications
  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Experience working in large-scale, enterprise data environments.
  • Prior experience with cloud-native big data solutions and data governance best practices.

(ref:hirist.tech)


Job ID: 134676891