
We are seeking a Data Engineer with 4+ years of experience to join our Cloud Data & Engineering team. The ideal candidate has strong expertise in building scalable, cloud-native data platforms and is passionate about solving complex data challenges. This role involves working across platform development and data migration initiatives to deliver efficient, high-impact solutions.
In this role, you will design, build, and optimize modern cloud data architectures, including data pipelines, real-time and batch processing systems, and large-scale migration efforts. You will collaborate with cross-functional teams to deliver secure, high-performing, and cost-effective data solutions using modern tools and best practices.
Roles & Responsibilities
Design and build scalable, cloud-native data platforms and pipelines aligned with modern data lake/lakehouse architectures.
Develop and optimize large-scale data processing solutions using frameworks such as Spark, PySpark, or equivalent technologies.
Implement real-time and event-driven data ingestion pipelines to support dynamic business use cases.
Lead and execute large-scale data migration and modernization initiatives, ensuring minimal disruption and optimal performance.
Perform advanced query tuning, schema design, and performance optimization for relational databases.
Develop automation frameworks and scripts to streamline data workflows, migration processes, and operational tasks.
Implement Infrastructure-as-Code (IaC) and integrate CI/CD pipelines to enable efficient and reliable deployments.
Ensure adherence to cloud security best practices, and implement high availability/disaster recovery (HA/DR) strategies and cost optimization techniques.
Work closely with engineering, product, and customer teams to deliver scalable, high-quality data solutions.
Requirements
Bachelor's degree in Computer Science, Engineering, or a related technical field
4+ years of experience in Data Engineering or Database Engineering
2+ years of experience delivering cloud-based data solutions
Strong expertise in building cloud-native data platforms and pipelines
Deep understanding of modern data lake/lakehouse architectures
Hands-on experience with distributed processing frameworks (Spark, PySpark, or equivalent)
Experience with streaming or event-driven data ingestion patterns
Strong proficiency in Python with solid engineering principles (OOP, testing, clean coding)
Expertise in relational databases, schema design, and query optimization
Experience executing large-scale data migration and modernization projects
Knowledge of high availability (HA), disaster recovery (DR), and cloud security best practices
Experience with Infrastructure-as-Code (IaC) and CI/CD pipeline integration
Strong understanding of cost optimization and performance tuning in cloud environments
Excellent problem-solving, communication, and collaboration skills
Experience working in a fast-paced, client-facing, or consulting environment is preferred
Impact: Play a key role in building modern, scalable data platforms that enable data-driven decision-making for customers.
Ownership: Take end-to-end responsibility for data solutions, from design to deployment and optimization.
Growth: Work with cutting-edge cloud and data technologies while continuously enhancing your technical expertise.
Culture: Thrive in a collaborative, fast-paced environment that values innovation, accountability, and continuous learning.
Job ID: 146342237