GSPANN Technologies, Inc

Senior Data Engineer

This job is no longer accepting applications

Job Description

AWS, Snowflake, Big Data, SQL, Python, PySpark, Databricks, Apache Airflow

Description

GSPANN is hiring a Senior Data Engineer to build scalable data solutions using AWS, Snowflake, SQL, Python/PySpark, Databricks, and Airflow. Join our Bangalore team to work on cutting-edge data engineering projects!

Location: Bangalore

Role Type: Full Time

Published On: 25 February 2025

Experience: 6-10 Years

Role and Responsibilities

  • Contribute actively to all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, deployment, and support.
  • Utilize a disciplined development methodology to solve complex business challenges efficiently.
  • Develop scalable, flexible, and efficient solutions by leveraging appropriate technologies.
  • Analyze data from source and target systems, ensuring accurate transformation mapping based on business requirements.
  • Collaborate with clients and onsite coordinators throughout different phases of the project.
  • Work closely with business and technology stakeholders to design and implement product features.
  • Identify, anticipate, and resolve data management issues to enhance data quality.
  • Prepare, clean, and optimize large-scale data for ingestion and consumption.
  • Support the implementation of new data management initiatives while restructuring existing data architectures for improved performance.
  • Automate workflows and routines using workflow scheduling tools, ensuring efficiency in data operations.
  • Apply continuous integration principles, test-driven development (TDD), and production deployment frameworks.
  • Review and contribute to the design, code, test plans, and dataset implementation to maintain high data engineering standards.
  • Conduct data analysis and profiling to design scalable and efficient data solutions.
  • Troubleshoot data-related issues, perform root-cause analysis, and proactively resolve product challenges.

Skills And Experience

  • Over 6 years of experience in developing data and analytics solutions.
  • Expertise in building data lake solutions using Amazon Web Services (AWS), including Amazon Elastic MapReduce (EMR), Amazon Simple Storage Service (S3), Apache Hive, and PySpark.
  • Strong proficiency in relational Structured Query Language (SQL).
  • Hands-on experience with scripting languages, particularly Python.
  • Familiarity with source control tools such as GitHub and associated development processes.
  • Practical experience working with workflow scheduling tools like Apache Airflow.
  • Deep understanding of AWS Cloud services, including Amazon S3, EMR, and Databricks.
  • Passionate about designing and implementing data-driven solutions.
  • Strong analytical and problem-solving mindset.
  • Proven experience in designing, developing, and testing data pipelines.
  • Background in working with Agile development teams.
  • Ability to communicate effectively, both verbally and in writing, with team members and business stakeholders.
  • Quick adaptability to new programming languages, technologies, and frameworks.

More Info

Job ID: 116664207
