Insight Global

ETL Developer

7-10 Years
  • Posted a day ago

Job Description

ETL Developer / Senior Data Engineer

Company: Insight Global (on behalf of our client)

Location: Remote (must be able to go onsite in Hyderabad for background screening)

Start Date: Immediate

Availability: Must be able to start within 2-3 weeks

Notice Period: ASAP hire; apply only if you can start within 2 weeks of offer

Interview Process: Priority given to candidates who message on LinkedIn with their resume

Hiring Priority: Immediate joiners only

About the Role

Insight Global's client is seeking a Senior ETL / Data Engineer to design, build, and optimize scalable data pipelines within an AWS-based data platform. This role focuses on high-performance ETL workflows using Databricks, Apache Spark, and AWS-native data services, supporting enterprise analytics and data-driven initiatives.

Required Skills & Experience

  • 7-10 years of hands-on ETL and data engineering experience
  • Strong expertise with Databricks and Apache Spark
  • Solid experience with AWS data services, including Glue, S3, Lambda, EMR, Athena, and Secrets Manager
  • Advanced SQL skills with experience across both relational and NoSQL data stores
  • Experience with CI/CD pipelines, Git, and modern data engineering best practices
  • Strong debugging, performance tuning, and ETL pipeline optimization skills

Nice to Have

  • Experience with Python and/or Scala for data workflows
  • Familiarity with AWS orchestration and messaging services (Kinesis, SNS/SQS, CloudWatch)
  • Experience implementing data quality frameworks and data lineage
  • Exposure to large-scale, enterprise analytics initiatives
  • Knowledge of data modeling and job optimization techniques

Responsibilities

  • Design, build, and maintain scalable ETL pipelines in an AWS-based ecosystem
  • Develop and optimize high-performance data workflows using Databricks and Spark
  • Ingest, transform, and integrate structured and unstructured data sources
  • Implement monitoring, alerting, and data quality frameworks to ensure reliability
  • Optimize pipeline performance, cost, and scalability
  • Collaborate with U.S.-based stakeholders to deliver robust, production-ready data solutions


Job ID: 145080999