
Job Description

About WCT:

WaferWire Technology Solutions (WCT) specializes in delivering comprehensive Cloud, Data, and AI solutions through Microsoft's technology stack. Our services include Strategic Consulting, Data/AI Estate Modernization, and Cloud Adoption Strategy. We excel in Solution Design encompassing Application, Data, and AI Modernization, as well as Infrastructure Planning and Migrations. Our Operational Readiness services ensure seamless DevOps, MLOps, AIOps, and SecOps implementation. We focus on Implementation and Deployment of modern applications, continuous Performance Optimization, and future-ready innovations in AI, ML, and security enhancements. Delivering from Redmond, WA (USA); Guadalajara, Mexico; and Hyderabad, India, our scalable solutions cater precisely to diverse business requirements across multiple time zones (US time zone alignment).

Job Title: Data Engineer

Job Location: Hyderabad, India

About the Role:

A) Python + Data Engineering

  • Strong Python for ETL/ELT: requests, pandas, pyarrow, retry/backoff, logging, config management
  • API ingestion patterns: OAuth/keys, pagination, delta tokens, throttling, idempotency
  • File ingestion: CSV/JSON/Parquet handling; schema inference vs explicit schema; large file chunking/streaming
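The API ingestion patterns above (pagination, retry/backoff, idempotent resumption) can be sketched roughly as below; the page shape (`items`/`next_cursor`) and parameter names are illustrative assumptions, not taken from this posting.

```python
import time
from typing import Callable, Iterator, Optional

def paginate(fetch_page: Callable[[Optional[str]], dict],
             max_retries: int = 3,
             backoff_base: float = 0.5) -> Iterator[dict]:
    """Yield records from a cursor-paginated API, retrying each page
    with exponential backoff. `fetch_page(cursor)` is assumed to return
    a dict shaped like {"items": [...], "next_cursor": str | None}."""
    cursor = None
    while True:
        for attempt in range(max_retries):
            try:
                page = fetch_page(cursor)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # give up after the final retry
                time.sleep(backoff_base * 2 ** attempt)  # exponential backoff
        yield from page["items"]
        cursor = page["next_cursor"]
        if cursor is None:  # no more pages: ingestion complete
            return
```

In practice `fetch_page` would wrap a `requests` call with auth headers and throttling; persisting the last successful cursor is what makes re-runs idempotent.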

B) Microsoft Fabric Warehouse / Data Loading

  • Understanding of Fabric ingestion approaches such as pipelines/copy jobs into Warehouse; ability to implement full and incremental load patterns
  • SQL/T-SQL for warehouse DDL/DML, loading strategies, and validation queries
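A watermark-driven incremental load (versus a full reload) can be sketched as below; the `src`/`stg` schemas, table, and column names are illustrative assumptions, and a production pipeline would parameterize the query rather than interpolate strings.

```python
def incremental_load_sql(table: str, watermark_col: str,
                         last_watermark: str) -> str:
    """Build a T-SQL statement that stages only rows changed since the
    last successful load; a full load would simply omit the WHERE clause.
    Illustrative only: use parameterized queries in real pipelines."""
    return (
        f"INSERT INTO stg.{table} "
        f"SELECT * FROM src.{table} "
        f"WHERE {watermark_col} > '{last_watermark}';"
    )
```

The orchestration layer (e.g., a Fabric pipeline) records the maximum watermark seen after each successful run and feeds it into the next.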

C) Data Warehouse Architecture (Scalable)

  • Dimensional modeling (facts/dimensions), star schema, SCD handling
  • Data lifecycle layers: raw/staging/curated; metadata-driven pipelines
  • Scalability concepts: parallelism, partitioning strategy, incremental loads, late-arriving data, CDC design
  • Understanding of modern DW engine traits as found in Fabric Warehouse (e.g., compute/storage separation, optimization for open formats)
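The SCD handling named above can be illustrated with a minimal Type-2 merge over dimension rows; the field names (`valid_from`, `valid_to`, `is_current`) are common conventions assumed here, not specified by this posting.

```python
from datetime import date

def scd2_merge(dim_rows, incoming, key, tracked, today=None):
    """Type-2 slowly changing dimension merge: when a tracked attribute
    changes, close out the current row and append a new current row.
    Assumes at most one incoming record per business key."""
    today = today or date.today().isoformat()
    out = list(dim_rows)
    current = {r[key]: r for r in out if r["is_current"]}
    for rec in incoming:
        cur = current.get(rec[key])
        if cur is None:
            # brand-new key: insert as the first current version
            out.append({**rec, "valid_from": today,
                        "valid_to": None, "is_current": True})
        elif any(cur[c] != rec[c] for c in tracked):
            # tracked attribute changed: expire old row, add new version
            cur["valid_to"] = today
            cur["is_current"] = False
            out.append({**rec, "valid_from": today,
                        "valid_to": None, "is_current": True})
    return out
```

In a warehouse this same logic is typically expressed as a T-SQL `MERGE` (or paired `UPDATE`/`INSERT`) against the dimension table.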

D) Observability + Reliability

  • Monitoring: job metrics, failure handling, alerting
  • Data quality checks: row counts, null checks, referential integrity, schema drift detection
  • CI/CD basics and version control best practices (often expected in Fabric/Azure data engineer roles).
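The data quality checks listed above can be sketched as a small post-load validation pass; the check names and return shape are illustrative assumptions.

```python
def quality_checks(rows, expected_columns, not_null=()):
    """Lightweight post-load validations: row count, schema drift
    (missing/unexpected columns), and null checks on key columns.
    Returns a list of human-readable issues; empty means all checks passed."""
    issues = []
    if not rows:
        issues.append("zero rows loaded")
        return issues
    cols = set(rows[0])
    if cols != set(expected_columns):
        # symmetric difference shows both missing and unexpected columns
        issues.append(f"schema drift: {cols ^ set(expected_columns)}")
    for col in not_null:
        nulls = sum(1 for r in rows if r.get(col) is None)
        if nulls:
            issues.append(f"{nulls} null(s) in {col}")
    return issues
```

Any non-empty result would typically fail the pipeline run and fire an alert, feeding the monitoring and alerting requirements above.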

E) Nice-to-have Skills

  • Familiarity with broader Fabric ecosystem (Lakehouse, semantic models, governance patterns)
  • DP-600 (Fabric) / Azure data certifications (commonly preferred in similar JDs)
  • Spark/PySpark experience for large-scale transformations (useful where ingestion requires heavy processing)

F) Experience & Qualifications (Typical)

  • 3–5+ years building data pipelines and data warehouse solutions (common baseline in Azure/Fabric data engineering JDs).
  • Strong SQL + Python, plus hands-on delivery on a modern analytics platform.

Job ID: 143286541
