About the Role:
At PalTech, we don't just move data; we engineer future-ready intelligence. As a global IT consulting firm, we help our clients transition from legacy silos to high-performance, cloud-native ecosystems.
You will be the lead architect for large-scale data transformations, primarily focusing on building Lakehouse architectures that unify batch and streaming data. You will work closely with our engineering teams to deliver speed-to-market solutions for our global clients.
Key Responsibilities:
- Design and implement end-to-end Lakehouse and Mesh architectures using Snowflake, Databricks, or Google BigQuery.
- Lead the migration of complex legacy systems into scalable cloud-native environments, focusing on reducing analytics friction (a core PalTech service goal).
- Build resilient, automated ETL/ELT pipelines that handle massive data volumes while maintaining sub-second latency for real-time reporting.
- Implement automated quality gates (using tools like Dataplex) and ensure end-to-end data lineage that is audit-ready for regulated industries like Healthcare and BFSI.
- Ensure data foundations are optimized for Generative AI and predictive modeling, enabling features like natural language querying for business users.
Technical Stack & Requirements:
- Expert knowledge of AWS, Azure, or GCP data ecosystems.
- Proven mastery of Snowflake or Databricks (PalTech is a proud Snowflake and AWS Consulting Partner).
- Deep experience with tools such as dbt, Airflow, Kafka, and Spark.
- Hands-on experience with Master Data Management (MDM) and data privacy compliance (GDPR/HIPAA).
- Strong understanding of enterprise data systems.
Why PalTech:
- Great Place to Work: Join an organization recognized for its people-centric culture, collaboration, and integrity.
- Impact-Driven: Work on mission-critical projects for notable clients (FDA, HHS, and global retail leaders).
- Growth Mindset: We encourage taking risks to improve services and offer a platform for continuous learning.