Zorba AI

Big Data Engineer (AWS, Python, Airflow)

Job Description

Role: AWS / Python / Airflow / Big Data Engineer

Experience: 3–5 Years

Job Overview

We are seeking two skilled AWS/Python/Airflow/Big Data Engineers with 3–5 years of hands-on experience to join our data engineering team. The ideal candidates will be responsible for designing, developing, and maintaining scalable data pipelines and workflows in a cloud-based AWS environment. This role requires strong expertise in Python, Apache Airflow, AWS services, and Big Data technologies.

Key Responsibilities

  • Design, develop, and maintain ETL and data pipelines using Apache Airflow.
  • Build and manage workflow orchestration (DAGs) for reliable data processing (a minimal sketch follows this list).
  • Implement scalable and cost-efficient solutions on AWS Cloud, leveraging S3, EC2, Lambda, EMR, Glue, and Redshift.
  • Write efficient, reusable, and maintainable Python code for data processing and automation.
  • Work with Big Data frameworks such as Apache Spark and Hadoop for large-scale data processing.
  • Optimize data workflows for performance, scalability, and cost efficiency.
  • Collaborate with cross-functional teams (data analysts, product, DevOps) to understand data requirements and deliver solutions.
  • Ensure data quality, reliability, security, and monitoring across all pipelines and workflows.
  • Troubleshoot and resolve data pipeline failures and performance bottlenecks.
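
To give a flavor of the DAG-building work above, here is a minimal sketch assuming Airflow 2.x; the dag_id, task names, and callables are hypothetical placeholders, not part of the actual role:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder: in practice this step might pull raw files from S3.
    print("extracting raw data")


def transform_orders():
    # Placeholder: cleansing/aggregation, e.g. with Pandas or Spark on EMR.
    print("transforming data")


def load_orders():
    # Placeholder: loading curated data into Redshift or back to S3.
    print("loading data")


with DAG(
    dag_id="orders_etl",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    # Linear dependency chain: extract runs before transform, which runs before load.
    extract >> transform >> load
```
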
Required Qualifications

Technical Skills

  • AWS Cloud Services: Hands-on experience with core AWS services used in data engineering.
  • Python Programming: Strong coding skills, including experience with libraries such as Pandas and PySpark and with writing automation scripts (a minimal sketch follows this list).
  • Apache Airflow: Proven experience building and managing Airflow DAGs and workflows.
  • Big Data Technologies: Familiarity with Spark, Hadoop, or similar distributed data processing frameworks.
  • Data Modeling & ETL: Solid understanding of ETL processes, data transformations, and pipeline design.
  • Databases & SQL: Working knowledge of SQL and relational databases.
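
As an illustration of the Python/PySpark skills listed above, a minimal sketch of a typical batch transformation, assuming the job runs on EMR or Glue with S3 access; bucket names and columns are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-batch").getOrCreate()

# Read raw CSV files landed in S3 (hypothetical bucket and prefix; on EMR the
# s3:// scheme resolves via EMRFS, elsewhere s3a:// may be needed).
orders = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

# A typical ETL transformation: daily revenue per customer.
daily_revenue = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Write partitioned Parquet to a curated zone, queryable from Redshift Spectrum or Glue.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)
```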

Soft Skills

  • Strong problem-solving and debugging abilities.
  • Excellent communication and collaboration skills.
  • Ability to work both independently and in a fast-paced team environment.

Preferred Qualifications

  • Experience with CI/CD pipelines and DevOps practices.
  • Familiarity with containerization technologies such as Docker and Kubernetes.
  • Exposure to cloud monitoring, logging, and performance optimization tools.

Skills: Python, pipelines, Big Data, cloud, AWS, Airflow

Job ID: 136621525
