
  • Posted 15 days ago

Job Description

Role Overview:

Experience: 6+ years

As a Data Engineer, you will play a key role in designing, developing, and optimizing data pipelines and storage solutions in a complex, enterprise-scale data warehouse environment. You will contribute to full life-cycle software development projects, leveraging modern technologies and best practices to deliver high-quality, actionable data solutions.

Key Responsibilities:

  • Participate in the full software development lifecycle for enterprise data projects, from requirements gathering to deployment and support.
  • Design, develop, and maintain robust ETL processes and data pipelines using Snowflake, Hadoop, Databricks, and other modern data platforms.
  • Work with a variety of databases: SQL (MySQL, PostgreSQL, Vertica), NoSQL (MongoDB, Cassandra, Azure Cosmos DB), and distributed/big data solutions (Apache Spark, Cloudera).
  • Write advanced SQL queries and perform complex data analysis for business insights and operational reporting.
  • Develop scripts in Python and shell for data manipulation, automation, and orchestration.
  • Perform data modelling, analysis, and preparation to support business intelligence and analytics solutions.
  • Maintain and optimize Unix/Linux file systems and shell scripts.
  • Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
  • Present analytical results and recommendations to technical and non-technical stakeholders, supporting data-driven decision making.
  • Troubleshoot, diagnose, and resolve complex technical issues across the data stack.
  • Stay current with industry trends, tools, and best practices to continuously improve data engineering processes.

Required Skills and Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent experience).
  • Demonstrated full life-cycle experience in enterprise software/data engineering projects.
  • Hands-on experience with Snowflake and Hadoop platforms.
  • Proficient in SQL, including PostgreSQL and Vertica, and in data analysis techniques.
  • Experience with at least one SQL database (MySQL, PostgreSQL) and one NoSQL database (MongoDB, Cassandra, Azure Cosmos DB).
  • Experience with distributed/big data platforms such as Apache Spark, Cloudera, Vertica, Databricks, or Snowflake.
  • Extensive experience in ETL, shell or Python scripting, data modelling, analysis, and data preparation.
  • Proficient in Unix/Linux systems, file systems, and shell scripting.
  • Strong problem-solving and analytical skills.
  • Ability to work independently and collaboratively as part of a team; proactive in driving business decisions and taking ownership of deliverables.
  • Excellent communication skills, with experience designing, developing, and delivering presentations that convey technical insights and recommendations effectively.

Preferred/Desirable Skills:

  • Industry certifications in Snowflake, Databricks, or Azure Hyperscale are a strong plus.
  • Experience with cloud platforms such as AWS or Azure, or cloud data platforms such as Snowflake.
  • Familiarity with BI reporting tools like Power BI or Tableau.
  • Proficiency with Git for branching, merging, rebasing, and resolving conflicts in both individual and team-based projects.
  • Familiarity with GitHub Copilot to accelerate code writing, refactoring, and documentation tasks.
  • Knowledge of industry best practices and emerging technologies in data engineering and analytics.

Job ID: 134113921
