
Teizo Soft Private Limited

Lead Data Engineer (Databricks)

7-9 Years
25 - 30 LPA
  • Posted a month ago

Job Description

Job Title: Lead Data Engineer (Databricks)

Experience: 7+ years

Location: Hyderabad (Hybrid); local candidates preferred

Employment Type: Full-time

Mandatory skills: Python, SQL, Databricks, AWS

Budget: 30 LPA

Please note: interviews will be scheduled from Tuesday onwards.

About the Role

We are looking for a highly skilled Lead Data Engineer who is passionate about building robust, scalable, and high-performance data systems. The ideal candidate will have deep expertise in SQL, Python, AWS, and Databricks, with a proven track record of designing and implementing modern data pipelines and analytical frameworks.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes for data ingestion, transformation, and storage.
  • Work with cross-functional teams to define and deliver data solutions supporting business and analytics needs.
  • Optimize and fine-tune SQL queries, data models, and pipeline performance.
  • Build and manage data workflows in Databricks and integrate with AWS data services (S3, Redshift, Glue, Lambda, etc.).
  • Ensure data accuracy, consistency, and reliability through data quality checks and monitoring frameworks.
  • Collaborate with Data Scientists, Analysts, and Product teams to enable self-service analytics and advanced data-driven insights.
  • Follow best practices for data governance, security, and compliance.
  • Continuously evaluate emerging data technologies and propose innovative solutions for process improvement.

Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 5+ years of hands-on experience in Data Engineering or related roles.
  • Strong proficiency in SQL for complex query development and data manipulation.
  • Expertise in Python for building data processing and automation scripts.
  • Experience with the AWS ecosystem, especially S3, Glue, Redshift, Lambda, and EMR.
  • Hands-on experience with Databricks for data processing, transformation, and analytics.
  • Experience working with structured and unstructured datasets in large-scale environments.
  • Solid understanding of ETL frameworks, data modeling, and data warehousing concepts.
  • Excellent problem-solving, debugging, and communication skills.

Good to Have

  • Experience with Airflow, Snowflake, or Kafka.
  • Knowledge of CI/CD pipelines and Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
  • Exposure to data governance, metadata management, and data cataloguing tools.

Interested candidates, please send your profile to [Confidential Information]

More Info

Open to candidates from: India

Job ID: 140697451