
360DigiTMG

Senior Data Engineer & Cloud Data Platform Trainer


Job Description


Company overview

360DigiTMG is a pioneering EdTech organization known for upskilling professionals in Data Science, AI, and emerging technologies through industry-aligned certification programs. With Hyderabad as its main India office, the company focuses on practical, market-relevant training that prepares learners for real-world roles in a dynamic business environment.

Key Responsibilities

  • Deliver multi-cloud data engineering training (fundamentals to advanced) with a primary focus on Azure and exposure to AWS and GCP.
  • Teach batch, streaming, and CDC pipelines, as well as modern Lakehouse architectures.
  • Build and demonstrate real-world use cases in Azure, AWS, GCP, Databricks, and Snowflake.
  • Conduct hands-on labs, code reviews, architecture walkthroughs, and best-practice sessions.
  • Guide students on data modeling, orchestration, monitoring, and CI/CD for data pipelines.
  • Mentor students for interviews, technical assessments, and industry project execution.
  • Design and deliver modules on Apache Kafka covering fundamentals, real-time streaming use cases, producers/consumers, topics/partitions, consumer groups, and integration with cloud data platforms and Spark (see the streaming sketch after this list).
  • Implement Kafka-based streaming pipelines end-to-end, including reliability, scalability, and monitoring best practices.
  • Introduce and apply big data processing frameworks such as Apache Spark (with a strong focus on PySpark) for ETL/ELT, batch, and streaming workloads.
  • Teach PySpark for data transformations, optimization, and integration with Delta Lake, Kafka, and cloud storage.
  • Use Apache Airflow (and other orchestrators) to design, schedule, and monitor data workflows and demonstrate DAG best practices (see the DAG sketch after this list).
  • Incorporate dbt into the modern data stack to teach SQL-based transformations, modular data modeling, testing, and documentation in cloud DWH/Lakehouse environments.
  • Continuously update course content, labs, and projects to reflect evolving best practices in cloud, big data, and Lakehouse ecosystems.
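
For illustration, here is a minimal sketch of the kind of Kafka-to-Delta streaming lab this role would teach, using PySpark Structured Streaming. The broker address, topic name, schema, and paths are hypothetical placeholders, and it assumes a Delta-enabled Spark environment such as Databricks.

```python
# Hypothetical lab sketch: read a Kafka topic with PySpark Structured Streaming
# and land the parsed events in a Delta table. All names and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-orders-demo").getOrCreate()

# Assumed JSON schema of the demo "orders" topic
order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("amount", StringType()),
    StructField("event_time", TimestampType()),
])

# Subscribe to the topic as a streaming DataFrame
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .option("startingOffsets", "earliest")
    .load()
)

# Kafka delivers key/value as bytes; cast and parse the JSON payload
orders = (
    raw.selectExpr("CAST(value AS STRING) AS json")
       .select(from_json(col("json"), order_schema).alias("o"))
       .select("o.*")
)

# Append to a Delta table; the checkpoint gives the sink restart guarantees
query = (
    orders.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
    .outputMode("append")
    .start("/tmp/delta/orders")                               # placeholder path
)
```

The same pattern extends to discussions of consumer groups, partitions, and delivery semantics by varying the Kafka options and sink guarantees.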
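
Similarly, a minimal Airflow sketch of the DAG patterns mentioned above; the DAG id, schedule, and task bodies are placeholders rather than a real pipeline.

```python
# Hypothetical teaching sketch: a three-task Airflow DAG with a linear
# extract -> transform -> load dependency. All names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from the landing zone")


def transform():
    print("run PySpark / dbt transformations")


def load():
    print("publish curated tables to the warehouse")


with DAG(
    dag_id="daily_sales_pipeline",   # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency: extract, then transform, then load
    t_extract >> t_transform >> t_load
```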

Core Technical Expertise

  • Cloud: Azure (primary), with working exposure to AWS and GCP.
  • Azure: Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Delta Lake, Unity Catalog, Event Hubs, ADLS Gen2, Key Vault, Private Link.
  • Streaming & Messaging: Apache Kafka (core architecture, producers/consumers, consumer groups, partitions, delivery semantics), Kafka integrations with Spark/Databricks and cloud services.
  • Big Data & Compute: Apache Spark with strong PySpark, exposure to big data ecosystems and performance tuning.
  • Lakehouse & DWH: Databricks Lakehouse, Snowflake (Streams, Tasks, governance, time travel), exposure to BigQuery / Redshift.
  • Data Modeling: Star schema, Snowflake schema, SCD Type 2, Data Vault 2.0 (see the SCD2 merge sketch after this list).
  • Orchestration: Azure Data Factory, AWS Glue Workflows, Apache Airflow, Databricks Workflows.
  • Transformation Layer: dbt for modular SQL transformations, testing, and documentation on top of DWH/Lakehouse.
  • CI/CD: Azure DevOps / GitHub Actions, secret management, and deployment workflows for data pipelines and analytics code.
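
As a pointer to the SCD Type 2 topic above, here is a minimal sketch of a Type 2 load on a Delta dimension table, assuming the incoming feed contains only changed or new customer records (for example, CDC output); table paths and column names are hypothetical.

```python
# Hypothetical SCD Type 2 sketch on a Delta dimension table:
# 1) expire the currently active row for each customer in the feed,
# 2) append the new current versions with open-ended validity.
# Assumes a Delta-enabled Spark environment (e.g. Databricks); paths are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp, lit

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

dim = DeltaTable.forPath(spark, "/tmp/delta/dim_customer")      # placeholder path
updates = spark.read.parquet("/tmp/staging/customer_updates")   # placeholder path

# Step 1: close out the currently active row for each customer in the feed
(
    dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(set={
        "is_current": lit(False),
        "end_date": current_timestamp(),
    })
    .execute()
)

# Step 2: append the new current versions of the changed or new customers
new_rows = (
    updates.withColumn("is_current", lit(True))
           .withColumn("start_date", current_timestamp())
           .withColumn("end_date", lit(None).cast("timestamp"))
)
new_rows.write.format("delta").mode("append").save("/tmp/delta/dim_customer")
```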

Preferred Experience

  • Hands-on experience with Lakehouse and DWH platforms in production environments.
  • Experience mentoring, teaching, or delivering corporate training in data engineering, cloud, and big data technologies.
  • Experience building and maintaining Kafka-based streaming pipelines and Spark/PySpark workloads.
  • Experience using Airflow and/or dbt in real-world projects.
  • Relevant cloud and/or Kafka/Spark certifications are an added advantage.

Outcome

Students gain real-world, job-ready skills in:

  • Cloud data pipelines (batch, ELT, CDC) across Azure and other clouds.
  • Lakehouse architectures with Databricks, Delta Lake, and Snowflake.
  • Real-time streaming with Kafka and Spark/PySpark.
  • Big data frameworks and orchestration using Airflow and other tools.
  • Analytics engineering and transformation best practices using dbt.
  • Governance, CI/CD, monitoring, and end-to-end project execution, along with strong interview preparation tailored to data engineering and cloud roles.

Job ID: 133841329