Data Consultant

  • Posted 5 hours ago

Job Description

We are looking for a motivated Data Consultant with hands-on experience in modern data platforms and cloud ecosystems. In this role, you will contribute to building scalable data pipelines, enabling efficient data movement, and supporting analytics initiatives across platforms such as Databricks, Snowflake, and Redshift. The ideal candidate brings a strong foundation in data engineering along with exposure to data migration and replication tools like AWS DMS.

Key Responsibilities

  • Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark / Spark SQL), Snowflake, or Amazon Redshift

  • Design and implement efficient data ingestion frameworks from multiple sources including APIs, databases, and cloud storage

  • Work with structured and semi-structured data formats such as JSON, Parquet, CSV, and Delta

  • Utilize AWS Database Migration Service (DMS) for database migration, replication, and Change Data Capture (CDC) pipelines

  • Configure and manage DMS replication instances, endpoints, and tasks for reliable data movement

  • Support schema conversion and migration activities using tools like AWS SCT

  • Assist in designing data models aligned with data warehouse and lakehouse architectures

  • Optimize data pipelines and queries for performance, scalability, and cost efficiency

  • Implement data quality checks, validation, and monitoring mechanisms

  • Collaborate with analytics, BI, and data science teams to deliver high-quality datasets

  • Participate in code reviews and follow engineering best practices
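To make the data quality responsibility above concrete, here is a minimal plain-Python sketch of a batch validation check. The record schema, field names, and error-rate threshold are hypothetical; in practice this logic would run inside PySpark or as warehouse-side checks.

```python
import json

# Hypothetical schema for an ingested record; a real pipeline would
# derive this from the warehouse table definition.
REQUIRED_FIELDS = {"id": int, "event_time": str, "amount": float}

def validate_record(record: dict) -> list:
    """Return a list of data quality violations for one record."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record or record[field] is None:
            errors.append(f"missing:{field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"type:{field}")
    return errors

def run_quality_check(lines: list, max_error_rate: float = 0.1) -> dict:
    """Validate a batch of JSON lines against a (hypothetical)
    maximum error-rate threshold."""
    bad = sum(1 for line in lines if validate_record(json.loads(line)))
    rate = bad / len(lines) if lines else 0.0
    return {"records": len(lines), "bad": bad, "passed": rate <= max_error_rate}
```

A monitoring hook would typically emit the returned summary (record count, failure count, pass/fail) to an alerting system rather than just returning it.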



Required Skills & Qualifications

  • Bachelor's degree in Computer Science, IT, or a related field

  • 2+ years of experience in Data Engineering or related roles

  • Hands-on experience with at least one of: Databricks, Snowflake, or Amazon Redshift

  • Strong proficiency in SQL and Python (PySpark preferred)

  • Experience working with cloud platforms (AWS / Azure / GCP)

  • Hands-on exposure to AWS DMS for data migration and CDC-based pipelines

  • Understanding of data warehousing concepts, ETL/ELT processes, and data modeling

  • Familiarity with lakehouse architecture and Delta Lake concepts

  • Basic understanding of performance tuning and pipeline optimization

  • Experience with version control (Git) and CI/CD fundamentals
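As a conceptual illustration of the CDC-based pipelines mentioned above, the sketch below applies a stream of insert/update/delete change records to an in-memory target keyed by primary key. The record layout (`op`, `key`, `row`) is an assumption for illustration only; AWS DMS's actual change-record format depends on the target endpoint configuration.

```python
# Minimal sketch of applying CDC change records to a target "table".
# The change-record layout here is hypothetical, not the DMS wire format.

def apply_cdc(target: dict, changes: list) -> dict:
    """Apply insert/update/delete change records, keyed by primary key."""
    for change in changes:
        op, key = change["op"], change["key"]
        if op in ("insert", "update"):
            target[key] = change["row"]   # upsert semantics
        elif op == "delete":
            target.pop(key, None)         # idempotent delete
        else:
            raise ValueError(f"unknown op: {op}")
    return target
```

Treating inserts and updates identically (upserts) and making deletes idempotent keeps replay of an overlapping change window safe, which matters when a replication task restarts mid-stream.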

Good to Have

  • Experience with orchestration tools such as Airflow, Azure Data Factory, or AWS Glue

  • Exposure to real-time/streaming data (Kafka, Spark Streaming)

  • Experience with schema conversion tools like AWS SCT

  • Knowledge of data governance, security, and access control

  • Relevant certifications (Databricks / AWS / other cloud platforms)

Signs You May Be a Great Fit

Impact: Play a critical role in maintaining system uptime and delivering seamless user experiences.

Culture: Thrive in a fast-paced, collaborative environment focused on operational excellence.

Growth: Opportunity to expand into SRE, DevOps, or platform engineering roles.

Benefits: Competitive compensation, flexible work options, and continuous learning opportunities.


Job ID: 145778577