
We are looking for a skilled Senior Data Engineer to design, build, and optimize scalable, cloud-based data platforms. The ideal candidate brings strong experience with modern data warehouses such as Snowflake, Databricks, or Amazon Redshift, along with expertise in data migration, ETL/ELT pipelines, and distributed data processing.
This role also requires hands-on experience with data migration and replication tools (e.g., AWS DMS) to enable seamless movement of data across systems and support near real-time data integration use cases.
Key Responsibilities
Design, build, and maintain scalable data pipelines and ETL/ELT workflows
Develop and optimize data models for analytics and business intelligence
Implement and manage data platforms using Snowflake / Databricks / Redshift
Lead data migration and replication initiatives using tools like AWS DMS, including full-load and CDC pipelines
Work with large-scale structured and unstructured datasets
Ensure data quality, integrity, and governance across systems
Monitor, troubleshoot, and optimize data pipeline performance and cost efficiency
Collaborate with Data Science, Product, and Business teams to deliver data solutions
Contribute to architecture decisions and best practices
Mentor junior engineers and provide technical leadership (for Lead role)
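The pipeline responsibilities above can be pictured, in highly simplified form, as an extract-transform-load flow. This is an illustrative sketch only; all function names and sample records are hypothetical, and a production pipeline would run each step as an orchestrated task (e.g., in Airflow) against real sources and a real warehouse:

```python
# Minimal ETL sketch. Names and data are illustrative, not from any
# specific stack; the point is the extract -> transform -> load shape.

def extract():
    # Stand-in for reading raw records from a source system
    # (API, operational database, files on object storage).
    return [
        {"order_id": 1, "amount": "19.99", "region": "eu"},
        {"order_id": 2, "amount": "5.00", "region": "us"},
    ]

def transform(rows):
    # Cast types and normalize values before loading.
    return [
        {
            "order_id": r["order_id"],
            "amount": float(r["amount"]),
            "region": r["region"].upper(),
        }
        for r in rows
    ]

def load(rows, target):
    # Stand-in for a warehouse write (e.g., a COPY/MERGE into
    # Snowflake, Databricks, or Redshift).
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
```

In a real deployment each function becomes a separately schedulable, retryable task, which is where orchestrators such as Airflow or dbt enter the picture.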
Requirements
5+ years of experience in Data Engineering / Data Platform roles
Hands-on experience with Snowflake, Databricks, or Amazon Redshift
Strong expertise in SQL and data modeling
Experience building and orchestrating pipelines using Airflow, dbt, or similar tools
Proficiency in Python / PySpark / Scala
Hands-on experience with AWS DMS (or similar data migration tools) for:
Database migration (on-prem to cloud or cross-cloud)
Change Data Capture (CDC) and real-time/near real-time data replication
Performance tuning, monitoring, and troubleshooting of replication tasks
Experience with cloud platforms (AWS / Azure / GCP)
Strong understanding of data warehousing concepts and distributed systems
Experience with real-time data streaming (Kafka, Spark Streaming, etc.)
Familiarity with AWS SCT (Schema Conversion Tool) or similar migration utilities
Exposure to BI tools (Tableau, Power BI, Looker)
Knowledge of DevOps practices and CI/CD for data pipelines
Experience managing large-scale production data systems
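The Change Data Capture (CDC) replication requirement above can be illustrated with a toy apply loop: replay a stream of change events against a target table keyed by primary key. Tools like AWS DMS do this at scale (plus ordering, conflict handling, and large objects); this sketch, with hypothetical event and row shapes, only shows the core upsert/delete idea:

```python
# Toy CDC apply logic: each event carries an operation and a row image.
# Inserts and updates are treated as upserts on the primary key;
# deletes are idempotent. Event/row shapes here are illustrative.

def apply_cdc(target, events):
    for ev in events:
        op, row = ev["op"], ev["row"]
        if op in ("insert", "update"):
            target[row["id"]] = row        # upsert on primary key
        elif op == "delete":
            target.pop(row["id"], None)    # no-op if already gone
    return target

# Target table after a full load, then a stream of CDC events.
table = {1: {"id": 1, "status": "new"}}
events = [
    {"op": "update", "row": {"id": 1, "status": "shipped"}},
    {"op": "insert", "row": {"id": 2, "status": "new"}},
    {"op": "delete", "row": {"id": 1}},
]
apply_cdc(table, events)
```

This mirrors the full-load-then-CDC pattern mentioned above: an initial bulk copy establishes the target state, after which only incremental changes are replayed.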
Impact: Play a critical role in maintaining system uptime and delivering seamless user experiences.
Culture: Thrive in a fast-paced, collaborative environment focused on operational excellence.
Growth: Opportunity to expand into SRE, DevOps, or platform engineering roles.
Benefits: Competitive compensation, flexible work options, and continuous learning opportunities.
Job ID: 145568979