Data Engineer (PySpark, SQL, AWS, Databricks)

3-6 Years
  • Posted 20 hours ago
  • Over 200 applicants

Job Description

Key Responsibilities :

  • Design, develop, and manage robust data pipelines using PySpark and SQL.
  • Work with AWS services to implement data solutions.
  • Utilize Databricks for data processing and analytics.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Optimize and maintain existing data architectures.
  • Troubleshoot and resolve data-related issues.

Requirements :

  • Proven experience as a Data Engineer or similar role.
  • Strong proficiency in PySpark, SQL, AWS, and Databricks.
  • Experience in building and optimizing big data pipelines and architectures.
  • Solid understanding of data warehousing concepts and ETL processes.
  • Familiarity with data governance and data security best practices.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.

Technical Skills :

  • PySpark, SQL, AWS, Databricks

More Info

Open to candidates from: Indian

Job ID: 122722663
