Job Description
Job Role: Senior Data Engineer- Implementation & Modernization
Location: Hyderabad
Experience: 5+ Years
Job Role Summary
The Data Engineer is responsible for building and optimizing high-performance, large-scale data solutions on modern cloud platforms. This role focuses on developing data engineering pipelines in close collaboration with the Cloud Solution Architects who define the overall architectural design. The Data Engineer is expected to deeply understand approved solution designs, data architectures, and modeling patterns, and to translate them into efficient, secure data pipelines. The ideal candidate brings advanced SQL and Python skills, strong collaboration skills, and the ability to integrate solutions with a variety of technologies. The Data Engineer should be self-driven, able to work with minimal supervision, and have extensive experience in data modeling, data warehousing, and ETL processing in a cloud environment.
Duties & Responsibilities
Implement high-performing, scalable data ingestion, transformation, and curation pipelines on cloud platforms, aligned with approved architectural designs and standards
Apply best practices in data modeling, data structures, and curated data layers to support business use cases
Partner with Cloud Solution Architects to understand solution designs and ensure accurate, efficient implementation, following all approved development and deployment procedures
Lead code review sessions, providing constructive code assessments and feedback
Automate data workflows and pipeline operations using orchestration and scheduling tools
Monitor pipeline health, proactively identify issues, and troubleshoot data quality or performance problems
Optimize data processing workloads for performance, reliability, and cost efficiency
Qualifications
Required
Bachelor's degree in Computer Science, Engineering, Information Systems, or equivalent professional experience
5+ years of hands-on data engineering experience in cloud environments
Advanced SQL expertise and strong Python programming skills
Extensive hands-on experience developing end-to-end ELT/ETL pipelines
Demonstrated experience developing high-performance, large-scale systems
Proven experience with Snowflake and Databricks
Strong experience with data pipeline workflow automation and orchestration frameworks (preferred: Dagster, dbt)
Experience working in controlled development, testing, and production environments
Familiarity with Git-based version control and CI/CD practices
Applied knowledge of relational database principles and advanced features
Excellent written and verbal communication skills
Strong problem-solving skills and ability to debug complex systems
Ability to work independently while collaborating across cross-functional teams
Experience using Waterfall and Agile methodologies within a software development function
Preferred Qualifications
Cloud certifications in AWS, GCP, Azure, Snowflake, or Databricks
Industry expertise in marketing environments and related use cases
Experience implementing medallion or layered data architectures
Familiarity with observability frameworks and data reliability engineering