
Overview:
We are looking for a Data Engineer with strong experience designing, building, and maintaining scalable data pipelines and data platforms. The ideal candidate will work closely with data scientists, analysts, and engineering teams to ensure reliable, high-quality data for analytics and AI/ML use cases.
Key Responsibilities:
Design, develop, and optimize scalable data pipelines (batch and real-time)
Build and maintain data warehouses, data lakes, and ETL/ELT workflows
Ensure data quality, integrity, and reliability across systems
Work with large-scale structured and unstructured datasets
Optimize data processing for performance and cost efficiency
Collaborate with Data Scientists and Analysts to support analytics and ML workloads
Implement data governance, security, and compliance best practices
Monitor, troubleshoot, and improve existing data infrastructure
Mentor junior data engineers and contribute to architectural decisions
Required Skills & Qualifications:
3+ years of experience in Data Engineering
Strong programming skills in Python and proficiency in SQL
Experience with ETL/ELT tools (Airflow, dbt, Informatica, etc.)
Hands-on experience with data warehouses (BigQuery, Snowflake, Redshift, or similar)
Experience with cloud platforms (AWS, GCP, or Azure)
Strong understanding of database concepts, data modeling, and schema design
Experience handling large-scale data pipelines
Familiarity with CI/CD and version control (Git)
Good to Have:
Experience with streaming technologies (Kafka, Spark Streaming, Flink)
Knowledge of Big Data frameworks (Spark, Hadoop)
Exposure to MLOps / analytics engineering
Experience with NoSQL databases (MongoDB, Cassandra)
Cloud certifications (AWS/GCP/Azure Data or Engineering track)
Soft Skills:
Strong problem-solving and analytical skills
Excellent communication and collaboration abilities
Ability to work independently and take ownership
Mentorship and leadership mindset
Job ID: 144556059