

Job Description

Job Title: Data Engineer

Location: Hyderabad

Experience: 4 - 10 Years

We are looking for a highly skilled Data Engineer (Data Platforms) with strong hands-on experience in PySpark, Spark, SQL, and Python to design, build, and optimize data pipelines for enterprise-scale platforms. The ideal candidate has a deep understanding of complex data transformations, data modeling, and DevOps practices, and can independently handle both technical implementation and communication with business stakeholders.

Key Responsibilities

  • Develop, optimize, and maintain data pipelines and ETL processes using PySpark, Spark, and Python.
  • Translate complex business transformation logic into efficient, scalable PySpark/Spark scripts that load data into Enterprise Data Domain tables and Data Marts (see the sketch after this list).
  • Design and implement data ingestion frameworks from multiple structured and unstructured data sources.
  • Work as an individual contributor, managing the full data lifecycle from requirement analysis and development to deployment and support.
  • Collaborate with business users, data architects, and analysts to understand requirements and deliver data-driven solutions.
  • Ensure data quality, consistency, and governance across all data layers.
  • Implement CI/CD and DevOps best practices for data workflows, version control, and automation.
  • Optimize job performance and resource utilization in distributed data environments.
  • Troubleshoot and resolve issues in data pipelines and workflows proactively.
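
For illustration only, a minimal sketch of the kind of transformation-and-load work described above, assuming a Spark environment with catalog tables; the table and column names (orders_raw, order_facts, customer_id, etc.) are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Read a raw source table (name is illustrative).
    orders = spark.read.table("orders_raw")

    # Stand-in for business transformation logic: keep completed
    # orders and derive daily revenue and order counts per customer.
    order_facts = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("customer_id", "order_date")
        .agg(
            F.sum("amount").alias("daily_revenue"),
            F.count("order_id").alias("order_count"),
        )
    )

    # Load the result into a (hypothetical) Enterprise Data Domain table.
    order_facts.write.mode("overwrite").saveAsTable("order_facts")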

Technical Skills

Core Skills:

  • PySpark / Spark (strong hands-on required)
  • SQL: advanced query writing, performance tuning, and optimization
  • Python: data processing, scripting, and automation
  • Big Data Ecosystem: Hadoop, Hive, or similar platforms
  • Cloud / On-Prem Experience: Any (Azure / AWS / GCP / On-premise acceptable)

DevOps & Deployment

  • Good understanding of DevOps concepts (CI/CD, version control, automation)
  • Experience with tools such as Git, Jenkins, Azure DevOps, or Airflow
  • Familiarity with containerization (Docker/Kubernetes) preferred

Additional Preferred Skills

  • Knowledge of data modeling and data warehousing concepts
  • Exposure to Delta Lake / Lakehouse architectures
  • Familiarity with data orchestration tools like Airflow / Data Factory / NiFi (a minimal Airflow sketch follows)
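
As one possible way to orchestrate a job like the sketch above, a minimal Airflow DAG (assuming Airflow 2.4+; the DAG id, schedule, and script path are illustrative assumptions):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical DAG that submits the PySpark job once a day.
    with DAG(
        dag_id="orders_etl_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # the `schedule` argument requires Airflow 2.4+
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_orders_etl",
            bash_command="spark-submit /opt/jobs/orders_etl.py",  # path is illustrative
        )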

Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or related discipline.
  • Certifications in Big Data, Cloud, or DevOps are an added advantage.

Soft Skills

  • Strong analytical and problem-solving abilities.
  • Excellent communication and stakeholder management skills.
  • Self-driven and capable of working independently with minimal supervision.
  • Strong ownership mindset and attention to detail.

More Info


Job ID: 131411273
