
  • Posted 2 days ago

Job Description

About the Role

We are looking for a skilled AWS Data Engineer who can design, build, and maintain scalable data pipelines and data platforms on AWS. The ideal candidate has strong experience with AWS big data services, ETL/ELT pipelines, SQL, Python, and modern data engineering frameworks.

Key Responsibilities

Data Pipeline Development

  • Design and build scalable, secure ETL/ELT pipelines on AWS.
  • Ingest, transform, and process structured/unstructured data from various sources (RDS, APIs, S3, files, streaming, etc.).
  • Implement data lake and data warehouse solutions on AWS.
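The pipeline work described above centers on an extract-transform-load loop. A minimal sketch of the transform stage in Python, assuming a hypothetical record shape (the `id`, `email`, and `amount` fields are illustrative, not from this posting):

```python
from typing import Iterable

def transform(records: Iterable[dict]) -> list[dict]:
    """Normalize raw records: drop rows missing an id, lowercase emails,
    and cast amounts to float. Field names are illustrative."""
    clean = []
    for rec in records:
        if not rec.get("id"):
            continue  # route malformed rows out rather than failing the batch
        clean.append({
            "id": rec["id"],
            "email": rec.get("email", "").strip().lower(),
            "amount": float(rec.get("amount", 0)),
        })
    return clean

raw = [
    {"id": "A1", "email": " User@Example.COM ", "amount": "19.99"},
    {"email": "orphan@example.com"},  # no id -> dropped
    {"id": "A2", "amount": 5},
]
print(transform(raw))
```

In a real Glue or EMR job, the same transform logic would typically run over Spark DataFrames rather than Python dicts.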

AWS Services Expertise

  • Build workflows using AWS Glue (ETL Jobs, Crawlers, Catalog).
  • Develop and optimize data pipelines using AWS Lambda, Step Functions, SNS/SQS.
  • Manage data storage using Amazon S3 (buckets, lifecycle, partitions).
  • Work with AWS EMR / Spark for big data processing.
  • Use Amazon Redshift for data warehousing and analytics.
  • Configure data ingestion using Amazon Kinesis / MSK for real-time streaming.
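Several of these bullets (S3 partitions, Glue crawlers and Catalog) rely on Hive-style key prefixes such as `year=/month=/day=`, which Glue and Athena use for partition pruning. A hypothetical helper, with the table and file names invented for illustration:

```python
from datetime import datetime, timezone

def partitioned_key(table: str, event_time: datetime, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so that
    Glue crawlers and Athena can prune partitions. Names are illustrative."""
    t = event_time.astimezone(timezone.utc)
    return f"{table}/year={t.year:04d}/month={t.month:02d}/day={t.day:02d}/{filename}"

key = partitioned_key(
    "orders",
    datetime(2024, 1, 5, 12, 30, tzinfo=timezone.utc),
    "batch-0001.parquet",
)
print(key)  # orders/year=2024/month=01/day=05/batch-0001.parquet
```

The zero-padded month/day values matter: lexicographic ordering of the keys then matches chronological ordering, which keeps S3 listings and partition filters predictable.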

Optimization & Data Quality

  • Optimize Spark/Glue jobs for performance and cost.
  • Build data validation, monitoring, and error-handling frameworks.
  • Implement best practices for data quality, governance, and metadata management.
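A data-validation framework of the kind mentioned above usually splits a batch into valid rows and rejected rows with a reason, so bad records can be routed to a dead-letter location instead of failing the whole job. A minimal sketch, assuming rule names and record fields that are purely illustrative:

```python
from typing import Callable, Optional

# Each rule maps a record to an error message, or None if the record passes.
Rule = Callable[[dict], Optional[str]]

def require(field: str) -> Rule:
    """Rule: the field must be present and non-empty."""
    return lambda rec: None if rec.get(field) not in (None, "") else f"missing {field}"

def non_negative(field: str) -> Rule:
    """Rule: the field, if present, must not be negative."""
    return lambda rec: None if float(rec.get(field, 0)) >= 0 else f"negative {field}"

def validate(records: list[dict], rules: list[Rule]):
    """Split records into (valid, rejected-with-reason) lists."""
    valid, rejected = [], []
    for rec in records:
        errors = [msg for rule in rules if (msg := rule(rec))]
        if errors:
            rejected.extend((rec, e) for e in errors)
        else:
            valid.append(rec)
    return valid, rejected

rules = [require("id"), non_negative("amount")]
good, bad = validate(
    [{"id": "A1", "amount": 10}, {"id": "", "amount": -5}],
    rules,
)
```

In production this pattern is often backed by a library such as Great Expectations or Deequ, with rejects written to a quarantine S3 prefix and surfaced through CloudWatch alarms.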

Collaboration

  • Work with Data Architects, Analysts, and BI teams to deliver analytics-ready datasets.
  • Support reporting tools like QuickSight, Tableau, or Power BI.
  • Participate in Agile development processes (sprints, daily stand-ups).

More Info

Open to candidates from: India

About Company

iKomet Technology Solutions Pvt Ltd is a Chennai-based IT consulting firm founded in 2015, offering cloud, AI/ML, analytics, and web & mobile solutions to enterprise and mid-market clients.

Job ID: 137757553
