Insight Global

AWS Data Engineer

  • Posted 9 days ago

Job Description

Data Engineer (AWS)

Hyderabad

We are seeking a highly skilled AWS Data Engineer to join our team and help design, build, and maintain scalable data pipelines and infrastructure in the cloud. The ideal candidate will have experience with AWS services, relational databases, and ETL processes, along with hands-on expertise in Databricks and Redshift.

Key Responsibilities:

  • Design, develop, and maintain scalable ETL pipelines using AWS services, Redshift, and Databricks.
  • Build and optimize data lakes and data warehouses leveraging AWS S3, Redshift, and related technologies.
  • Work with structured and unstructured data from various sources and integrate it into a centralized system.
  • Optimize performance and scalability of relational databases and big data processing systems.
  • Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data-driven decision-making.
  • Ensure data quality, security, and compliance with best practices and organizational standards.
  • Troubleshoot and resolve data-related issues and provide support for data infrastructure.
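To illustrate the kind of transform-and-quality-check work these responsibilities describe, here is a minimal sketch of a record-normalization step in Python. All names are hypothetical; in a real pipeline this logic would sit between an S3 extract (e.g. via boto3 or Glue) and a Redshift or Databricks load, which is omitted here.

```python
from datetime import datetime

def normalize_records(raw_records):
    """Standardize records from heterogeneous sources before loading.

    Hypothetical transform step: tolerates differing key conventions
    across source systems and drops rows that fail basic quality checks.
    """
    cleaned = []
    for rec in raw_records:
        # Source systems may use either snake_case or camelCase keys.
        user_id = rec.get("user_id") or rec.get("userId")
        ts = rec.get("event_time") or rec.get("timestamp")
        if user_id is None or ts is None:
            continue  # data-quality gate: skip incomplete rows
        cleaned.append({
            "user_id": str(user_id),
            "event_time": datetime.fromisoformat(ts).isoformat(),
        })
    return cleaned

rows = normalize_records([
    {"userId": 1, "timestamp": "2024-05-01T12:00:00"},
    {"user_id": 2, "event_time": "2024-05-01T13:30:00"},
    {"user_id": 3},  # incomplete record, filtered out
])
```

The same shape of function drops naturally into a Glue job or a Databricks notebook as a per-batch cleaning step.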

Required Skills & Experience:

  • 3+ years of experience as a Data Engineer or similar role.
  • Strong expertise in AWS services such as S3, Redshift, Lambda, Glue, Athena, EMR, and Step Functions.
  • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
  • Experience with Databricks for big data processing and analytics.
  • Solid understanding of ETL processes and data pipeline orchestration.
  • Familiarity with Python or Scala for data engineering tasks.
  • Experience with data modeling, performance tuning, and database optimization.
  • Knowledge of data governance and security best practices.
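As a small example of the SQL proficiency listed above, the following sketch uses an in-memory SQLite database (table and column names hypothetical) to show a common warehouse deduplication pattern: keeping only the latest row per key with `ROW_NUMBER()`. The same idiom works in Redshift and PostgreSQL.

```python
import sqlite3

# Hypothetical staging table with a duplicate row for user 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_events (user_id INT, event_time TEXT, amount REAL);
    INSERT INTO staging_events VALUES
        (1, '2024-05-01', 10.0),
        (1, '2024-05-02', 12.5),  -- later duplicate for user 1
        (2, '2024-05-01', 7.0);
""")

# Deduplicate: keep only the most recent event per user_id.
latest = conn.execute("""
    SELECT user_id, event_time, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id
                   ORDER BY event_time DESC
               ) AS rn
        FROM staging_events
    )
    WHERE rn = 1
    ORDER BY user_id
""").fetchall()
```

`latest` retains one row per user: the 2024-05-02 event for user 1 and the single event for user 2.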

Preferred Qualifications:

  • Experience with Apache Spark and distributed computing frameworks.
  • Familiarity with Terraform or Infrastructure as Code (IaC) for cloud deployment.
  • Exposure to streaming data processing using tools like Kafka or Kinesis.
  • Knowledge of DevOps and CI/CD practices for data engineering workflows.
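For a flavor of the streaming work mentioned above, here is a hedged, pure-Python sketch of a tumbling-window count, the kind of aggregation a Kafka or Kinesis consumer might perform. The function and its parameters are hypothetical; a production job would use the respective client libraries or Spark Structured Streaming instead.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per fixed (tumbling) window.

    Hypothetical illustration: each event is an epoch-seconds timestamp,
    bucketed into the window that contains it.
    """
    counts = defaultdict(int)
    for ts in events:
        # Align the timestamp down to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

counts = tumbling_window_counts([0, 10, 59, 60, 61, 125], window_seconds=60)
```

Here the six timestamps fall into three one-minute windows starting at 0, 60, and 120 seconds.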

Job ID: 133392753
