
We are looking for a motivated Data Consultant with hands-on experience in modern data platforms and cloud ecosystems. In this role, you will contribute to building scalable data pipelines, enabling efficient data movement, and supporting analytics initiatives across platforms such as Databricks, Snowflake, and Redshift. The ideal candidate brings a strong foundation in data engineering along with exposure to data migration and replication tools like AWS DMS.
Key Responsibilities
Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark / Spark SQL), Snowflake, or Amazon Redshift (see the PySpark sketch after this list)
Design and implement efficient data ingestion frameworks from multiple sources including APIs, databases, and cloud storage
Work with structured and semi-structured data formats such as JSON, Parquet, CSV, and Delta
Utilize AWS Database Migration Service (DMS) for database migration, replication, and Change Data Capture (CDC) pipelines (see the DMS sketch after this list)
Configure and manage DMS replication instances, endpoints, and tasks for reliable data movement
Support schema conversion and migration activities using tools like AWS SCT
Assist in designing data models aligned with data warehouse and lakehouse architectures
Optimize data pipelines and queries for performance, scalability, and cost efficiency
Implement data quality checks, validation, and monitoring mechanisms
Collaborate with analytics, BI, and data science teams to deliver high-quality datasets
Participate in code reviews and follow engineering best practices
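To make the pipeline work concrete, here is a minimal PySpark sketch of the kind of ETL job this role involves, assuming a hypothetical JSON order feed in cloud storage and a Delta target table; every path, column, and table name below is illustrative, not part of this posting. It also folds in a simple data quality gate of the sort described above.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest semi-structured JSON landed in cloud storage (placeholder path)
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Basic cleansing: deduplicate, type the timestamp, drop null keys
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Simple data quality check: fail fast if any timestamps did not parse
bad_rows = clean.filter(F.col("order_ts").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed timestamp validation")

# Write a Delta table partitioned by date for downstream analytics
(clean.withColumn("order_date", F.to_date("order_ts"))
      .write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders"))
```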
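On the DMS side, a minimal boto3 sketch of creating and starting a full-load-plus-CDC replication task; the region, ARNs, task identifier, and schema selection are all placeholders, and in practice the replication instance and endpoints would already be configured.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Table mapping: include every table in a placeholder source schema
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-public",
        "object-locator": {"schema-name": "public", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load followed by ongoing change data capture (placeholder ARNs)
task = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-full-load-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)

# Start the task; in practice you would wait for it to reach "ready" first
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```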
Required Qualifications
Bachelor's degree in Computer Science, IT, or a related field
2+ years of experience in Data Engineering or related roles
Hands-on experience with at least one of the following: Databricks, Snowflake, or Amazon Redshift
Strong proficiency in SQL and Python (PySpark preferred)
Experience working with cloud platforms (AWS / Azure / GCP)
Hands-on exposure to AWS DMS for data migration and CDC-based pipelines
Understanding of data warehousing concepts, ETL/ELT processes, and data modeling
Familiarity with lakehouse architecture and Delta Lake concepts
Basic understanding of performance tuning and pipeline optimization
Experience with version control (Git) and CI/CD fundamentals
Preferred Qualifications
Experience with orchestration tools such as Airflow, Azure Data Factory, or AWS Glue (see the DAG sketch after this list)
Exposure to real-time/streaming data (Kafka, Spark Streaming)
Experience with schema conversion tools like AWS SCT
Knowledge of data governance, security, and access control
Relevant certifications (Databricks / AWS / other cloud platforms)
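As a sketch of the orchestration experience listed above, a minimal Airflow 2.x DAG that schedules a daily pipeline run; the DAG id, schedule, and callable are hypothetical stand-ins for a real Databricks or Snowflake job trigger.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_orders_etl():
    # Placeholder: in practice this would trigger the Databricks/Snowflake job
    print("running orders ETL")

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="orders_etl", python_callable=run_orders_etl)
```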
Impact: Play a critical role in maintaining system uptime and delivering seamless user experiences.
Culture: Thrive in a fast-paced, collaborative environment focused on operational excellence.
Growth: Opportunity to expand into SRE, DevOps, or platform engineering roles.
Benefits: Competitive compensation, flexible work options, and continuous learning opportunities.
Job ID: 145778577