
  • Posted 6 days ago

Job Description

Important Note (Please Read Before Applying)

Do NOT apply if:

  • You have less than 5 years of AWS Data Engineer experience.
  • You do not have hands-on AWS Data Services experience.
  • You are on a notice period longer than 15 days.
  • You are looking for remote only (role is hybrid in Hyderabad).
  • You are a fresher or from unrelated backgrounds (e.g., Java-only, Testing-only, Support).

Apply ONLY if you meet ALL of the criteria above. Applications that do not meet them will not be processed.

Job Title: AWS Data Engineer

Location: Hyderabad (Hybrid)

Experience: 5 Years (STRICTLY)

Employment Type: Permanent

Notice Period: Immediate Joiners / <15 Days Only

About the Company

Our client is a trusted global innovator of IT and business services, present in 50+ countries. They specialize in digital & IT modernization, consulting, managed services, and industry-specific solutions. With a commitment to long-term success, they empower clients and society to move confidently into the digital future.

Key Responsibilities

  • Architect and implement scalable, fault-tolerant data pipelines using AWS Glue, Lambda, EMR, Step Functions, and Redshift
  • Build and optimize data lakes and data warehouses on Amazon S3, Redshift, and Athena
  • Develop Python-based ETL/ELT frameworks and reusable data transformation modules
  • Integrate multiple data sources (RDBMS, APIs, Kafka/Kinesis, SaaS systems) into unified data models
  • Lead data modeling, schema design, and partitioning strategies for performance and cost optimization
  • Drive data quality, observability, and lineage using the AWS Glue Data Catalog, Glue Data Quality, or third-party tools
  • Define and enforce data governance, security, and compliance best practices (IAM policies, encryption, access control)
  • Collaborate with cross-functional teams (Data Science, Analytics, Product, DevOps) to support analytical and ML workloads
  • Implement CI/CD pipelines for data workflows using AWS CodePipeline, GitHub Actions, or Cloud Build
  • Provide technical leadership, code reviews, and mentoring to junior engineers
  • Monitor data infrastructure performance, troubleshoot issues, and lead capacity planning
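To make the "reusable data transformation modules" responsibility concrete, here is a minimal pandas sketch of one such module: a single transformation step that normalizes a raw schema, validates it, and derives a partition-friendly date column. All function, column, and dataset names are hypothetical illustrations, not part of the role.

```python
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Example reusable transformation step: normalize and validate raw order rows."""
    out = df.copy()
    # Normalize column names so downstream steps can rely on a fixed schema.
    out.columns = [c.strip().lower() for c in out.columns]
    # Basic validation: drop rows missing the primary key, fail loudly on bad values.
    out = out.dropna(subset=["order_id"])
    if (out["amount"] < 0).any():
        raise ValueError("negative order amounts found")
    # Derive a partition-friendly date string (e.g. '2024-01-31') from the timestamp.
    out["order_date"] = pd.to_datetime(out["ts"]).dt.date.astype(str)
    return out

raw = pd.DataFrame({
    "Order_ID ": [1, 2, None],
    "Amount": [10.0, 5.5, 3.0],
    "TS": ["2024-01-31", "2024-02-01", "2024-02-02"],
})
clean = clean_orders(raw)
```

Small, composable steps like this are what make an ETL/ELT framework testable: each step takes and returns a DataFrame, so it can be unit-tested locally and then run unchanged inside a Glue Python job or a PySpark-backed equivalent.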

Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
  • 5 years of hands-on experience in data engineering or data platform development
  • Expert-level proficiency in Python (pandas, PySpark, boto3, SQLAlchemy)
  • Advanced experience with AWS Data Services, including:
      • AWS Glue, Lambda, EMR, Step Functions, DynamoDB, Redshift, Athena, S3, Kinesis, and Amazon QuickSight
      • IAM, CloudWatch, and CloudFormation / Terraform (for infrastructure automation)
  • Strong experience in SQL, data modeling, and performance tuning
  • Proven ability to design and deploy data lakes, data warehouses, and streaming solutions
  • Solid understanding of ETL best practices, partitioning, error handling, and data validation
  • Hands-on experience with version control (Git) and CI/CD for data pipelines
  • Knowledge of containerization (Docker/Kubernetes) and DevOps concepts
  • Excellent analytical, debugging, and communication skills
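The partitioning skill above is worth illustrating: Athena and Glue prune Hive-style `key=value` prefixes on S3, so laying out a dataset as one prefix per date means date-filtered queries scan (and bill for) only the matching partitions. A minimal sketch of that key layout follows; the dataset and file names are hypothetical.

```python
from datetime import date

def partitioned_key(dataset: str, dt: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key, e.g.
    orders/year=2024/month=02/day=01/part-0.parquet.
    Queries filtered on year/month/day then read only the matching prefixes."""
    return (f"{dataset}/year={dt.year:04d}/month={dt.month:02d}/"
            f"day={dt.day:02d}/{filename}")

key = partitioned_key("orders", date(2024, 2, 1), "part-0.parquet")
# An upload would then use this key, e.g.
# boto3.client("s3").put_object(Bucket="my-data-lake", Key=key, Body=data)
```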

---

Preferred / Nice-to-Have Skills

  • Experience with Apache Spark or PySpark on AWS EMR or Glue
  • Familiarity with Airflow, dbt, or Dagster for workflow orchestration
  • Exposure to real-time data streaming (Kafka, Kinesis Data Streams, or Firehose)
  • Knowledge of Lake Formation, Glue Studio, or DataBrew
  • Experience integrating with machine learning and analytics platforms (SageMaker, QuickSight)
  • Certification: AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect
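The common thread behind the orchestrators listed above (Airflow, dbt, Dagster) is running tasks in dependency order. A toy pure-Python sketch of that core scheduling idea, using the standard library's topological sorter; the task names are hypothetical, and real orchestrators add retries, scheduling, and persisted state on top of this.

```python
from graphlib import TopologicalSorter

# Toy pipeline DAG: extract feeds a transform and a quality check,
# and load may only run once both have finished.
dag = {
    "transform": {"extract"},
    "quality_check": {"extract"},
    "load": {"transform", "quality_check"},
}

def run(dag: dict) -> list:
    """Execute tasks in a valid dependency order and return the run log."""
    log = []
    for task in TopologicalSorter(dag).static_order():
        log.append(task)  # a real orchestrator would invoke the task here
    return log

log = run(dag)
```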

---

Soft Skills

  • Strong ownership mindset with a focus on reliability and automation
  • Ability to mentor and guide data engineering teams
  • Effective communication with both technical and non-technical stakeholders
  • Comfortable working in agile, cross-functional teams

Job ID: 138098265
