
Mahindra Satyam

Data Engineer

5-7 Years

Job Description

Skills Required: AWS, Snowflake, SQL, Kafka, Python, PySpark (strong in Python)

Location: Pune only

Notice period: Immediate joiners only (candidates currently serving notice who can join immediately will also be considered)

Role Summary:

We are looking for a highly skilled and proactive Data Engineer with deep expertise in AWS, Snowflake, SQL, Kafka, Python, and PySpark. This role is ideal for someone who thrives on building scalable data pipelines, optimizing cloud infrastructure, and enabling real-time data processing across enterprise systems.

Key Responsibilities:

  • Architect, develop, and maintain high-performance data pipelines using AWS services such as Glue, Lambda, EMR, ECS, and Step Functions.
  • Design and implement streaming data solutions using Apache Kafka, ensuring reliability, scalability, and fault tolerance.
  • Build and optimize ETL/ELT workflows for large-scale structured and semi-structured datasets.
  • Write clean, efficient, and reusable Python and PySpark code for data transformation and analytics.
  • Collaborate with cross-functional teams including Architecture, Business Analysts, and Application Support to deliver end-to-end data solutions.
  • Monitor and enhance cloud infrastructure performance using logs, metrics, and alerts.
  • Troubleshoot and resolve pipeline failures, data quality issues, and system errors.
  • Work closely with the Application Support team to implement automated monitoring and alerting solutions.
  • Leverage Snowflake for data warehousing and analytics, ensuring optimal query performance and cost efficiency.
  • Analyze, write, and optimize complex SQL queries for data extraction, reporting, and dashboarding.
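To illustrate the "clean, efficient, and reusable" transformation code the responsibilities call for, here is a minimal, library-free Python sketch (the record fields and function name are hypothetical examples, not part of this role's actual codebase); the same pattern scales up naturally to a PySpark UDF or DataFrame transformation:

```python
from datetime import datetime, timezone

def transform_event(raw: dict) -> dict:
    """Normalize one raw event record: trim string fields, drop nulls,
    and convert an epoch-seconds 'ts' field to ISO-8601 UTC."""
    cleaned = {
        k: v.strip() if isinstance(v, str) else v
        for k, v in raw.items()
        if v is not None  # drop null fields up front
    }
    if "ts" in cleaned:
        cleaned["ts"] = datetime.fromtimestamp(
            cleaned["ts"], tz=timezone.utc
        ).isoformat()
    return cleaned

# Example records, as they might arrive from a Kafka topic
records = [{"user": "  alice ", "ts": 1700000000, "score": None}]
print([transform_event(r) for r in records])
```

Keeping each transformation a small, pure function like this makes it easy to unit-test independently and to reuse across batch (Glue/EMR) and streaming (Kafka consumer) pipelines.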

Required Qualifications:

  • 5+ years of hands-on experience in data engineering and analytics.
  • Proven expertise in AWS cloud services, especially Glue, Lambda, EMR, ECS, and Step Functions.
  • Strong proficiency in Apache Kafka for real-time data streaming and event-driven architectures.
  • Advanced skills in Python and PySpark for big data processing.
  • Deep understanding of Snowflake architecture, performance tuning, and data modeling.
  • Expert-level knowledge of SQL – including query optimization and performance analysis.
  • Experience with CI/CD pipelines, version control, and infrastructure as code.
  • Excellent problem-solving skills and ability to work independently in a fast-paced environment.
  • Strong communication skills and ability to engage with stakeholders as a technology consultant.

Job ID: 147487941
