Overview of the requirement:
Random Trees is looking for a skilled Data Engineering Specialist to design and implement data solutions. The ideal candidate will have experience with Snowflake, SQL, DBT, Python/PySpark, and data modelling tools, along with a strong foundation in cloud platforms (Azure/AWS/GCP). You will be responsible for developing scalable, efficient data architectures that enable personalized customer experiences and advanced analytics.
Job Title: Lead Data Engineer
Experience: 10+ Years
Location: Chennai/Hyderabad (3 days per week in office)
Employment Type: Full-Time
Shift Timings: 2:00 PM to 11:00 PM
Roles and Responsibilities:
- Implement and maintain data warehousing solutions in Snowflake to handle large-scale data processing and analytics needs.
- Optimize workflows using DBT or other data modelling tools to streamline data transformation and modelling processes.
- Strong expertise in SQL with hands-on experience in querying, transforming, and analysing large datasets.
- Expertise with cloud data platforms for large-scale data processing.
- Solid understanding of data profiling, validation, and cleansing techniques.
- Support both real-time and batch data integration, ensuring data is accessible for actionable insights and decision-making.
- Strong understanding of data modelling, ETL/ELT processes, and modern data architecture frameworks.
- Hands-on experience with Python for data engineering tasks and scripting.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data warehouses on Snowflake.
- Optimize database performance and ensure data quality.
- Troubleshoot and resolve technical issues related to data processing and analysis.
- Participate in code reviews and contribute to improving overall code quality.
Job Requirements:
- Strong understanding of data modelling and ETL concepts.
- Experience with Snowflake and any data modelling tool is highly desirable.
- Expertise with cloud data platforms (Azure preferred) and Big Data technologies for large-scale data processing.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
- Familiarity with agile development methodologies.