IndiHire

Specialist Software Engineer - Bigdata

  • Posted 11 hours ago

Job Description

Job Purpose

We are seeking experienced Data Engineers with strong Snowflake skills to join our data engineering team. The ideal candidate will have a solid background in data ingestion, data processing, and data management using technologies such as Snowflake, Snowpark, and dbt.

Knowledge of Apache Spark, Hive, and HDFS is good to have. You will be responsible for migrating and testing data pipelines and ensuring efficient data storage and processing. In addition, you will work with CI/CD tools such as Jenkins and schedulers such as CTRL-M to automate data workflows.

Roles and Responsibilities

  • Database: Snowflake, Snowpark
  • Cloud platform: Azure
  • Big Data/DevOps environment: HDFS, YARN, AWX, API file transfer, Hive querying, Jenkins, CTRL-M
  • Language: Scala or Python
  • Source control: Git on GitHub
  • OS: Linux, with strong shell scripting proficiency
  • Methodology: Agile via Scrum/Kanban (tooling: Jira)
  • Design and build scalable Big Data pipelines using Python and PySpark
  • Develop and manage ETL/ELT workflows using Talend
  • Create and maintain data transformation models using dbt
  • Analyze existing Big Data storage systems and develop data solutions in Snowflake
  • Implement and optimize cloud data warehouse solutions on Snowflake
  • Optimize Spark and Snowflake performance for scalability and cost efficiency
  • Ensure data quality, reliability, and governance through testing and monitoring
  • Collaborate with cross-functional teams to deliver analytics-ready data

Requirements

  • 4 to 6 years of experience on Big Data platforms, with at least 2 years implementing a DWH on Snowflake
  • Proven experience with cloud platforms (preferably Azure), particularly data services
  • Good understanding of distributed computing frameworks like Apache Spark, Hadoop, etc.
  • High proficiency in SQL and Scala/Python
  • Experience migrating data from on-premises databases (preferably a Big Data platform) to Snowflake
  • Expertise in building robust ELT/ETL processes and tuning data pipeline performance in Snowflake, with the ability to troubleshoot issues quickly
  • Strong knowledge of integration concepts and design best practices
  • Data modeling and data integration skills; advanced SQL for analysis and query standardization
  • Proven experience in managing and mentoring data engineering teams
  • Excellent interpersonal skills, with the ability to work across teams and communicate effectively with technical and non-technical stakeholders
  • Strong analytical and troubleshooting skills, with a proven ability to find solutions in complex data environments


Job ID: 144913823