
CLOUDSUFI

Technical Architect

  • Posted 6 hours ago

Job Description

Must-have

  • 5+ years of data engineering; at least 2 years working on connector or integration framework development
  • Deep Python expertise including PySpark, pyarrow, and an understanding of Spark's execution model (driver vs executor, serialization constraints, partition fan-out)
  • Hands-on experience with at least one SaaS ingestion platform — Fivetran, Airbyte, Google DTS, AWS Glue connectors, or equivalent — at the connector-build level, not just configuration
  • Strong understanding of OAuth 2.0 flows (auth code, PKCE, client credentials, JWT), rate limiting strategies (token bucket, leaky bucket, per-endpoint quotas), and incremental sync patterns (cursor, watermark, CDC)
  • Experience designing shared connector frameworks — reusable auth managers, rate governors, state stores — not just per-connector scripts
  • Ability to author and own technical design documents (TDDs) and product requirement documents (PRDs) that can be handed to a junior engineer with minimal back-and-forth
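As a quick illustration of the token-bucket strategy the rate-limiting bullet names, here is a minimal sketch of the kind of rate governor a shared connector framework might expose. Class and parameter names are illustrative, not tied to any particular platform's API:

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: tokens refill at a fixed rate up to a
    burst capacity; each API call spends one token or waits for a refill.
    (Illustrative sketch; names and parameters are hypothetical.)"""

    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # refill rate, tokens per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_acquire(self, cost: float = 1.0) -> bool:
        """Non-blocking: True if the call may proceed now."""
        self._refill()
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

    def wait_time(self, cost: float = 1.0) -> float:
        """Seconds a caller should sleep before retrying."""
        self._refill()
        missing = max(0.0, cost - self.tokens)
        return missing / self.rate
```

A per-endpoint quota is then just a dictionary of buckets keyed by endpoint, which is why frameworks centralize this in one governor instead of re-implementing it per connector.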
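The cursor-based incremental sync pattern mentioned above can be sketched in a few lines: resume from a persisted cursor, page through the source, and checkpoint the cursor after each page so a restart never re-reads committed data. `fetch_page` here is a hypothetical wrapper around a source API that returns `(records, next_cursor)`:

```python
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple

# A hypothetical source-API wrapper: given a cursor (None for the first
# page), return the page's records and the next cursor (None at the end).
FetchPage = Callable[[Optional[str]], Tuple[List[Dict[str, Any]], Optional[str]]]


def incremental_sync(fetch_page: FetchPage,
                     state: Dict[str, Any]) -> Iterable[Dict[str, Any]]:
    """Cursor-based incremental sync (illustrative sketch).

    `state` stands in for a durable state store; in a real framework the
    checkpoint write would be transactional with the downstream commit."""
    cursor = state.get("cursor")       # resume from the last checkpoint
    while True:
        records, next_cursor = fetch_page(cursor)
        yield from records
        if next_cursor is None:        # source exhausted; keep last cursor
            break
        cursor = next_cursor
        state["cursor"] = cursor       # checkpoint before fetching more
```

A watermark-based sync follows the same shape with a timestamp instead of an opaque cursor; CDC replaces `fetch_page` with a change-log reader.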

Nice-to-have

  • Prior exposure to Databricks Asset Bundles / Declarative Automation Bundles or Lakeflow pipelines
  • Experience with the Databricks Python Data Source API (DBR 15.4 LTS+); equivalent hands-on Spark DataSource V2 (Java/Scala) experience is also acceptable
  • GCP DTS or Cloud Data Fusion connector experience (directly transferable)
  • Knowledge of the target source systems, particularly Social Ads APIs (Meta, LinkedIn, X) or enterprise SaaS platforms (Salesforce, Oracle)

Job ID: 147478173