Key Skills: Snowflake, Big Data, Data Engineering, PySpark, Spark
Roles and Responsibilities:
- Design, develop, and maintain end-to-end big data solutions within Snowflake environments.
- Build scalable and efficient data pipelines using Spark and PySpark.
- Optimize Snowflake queries and data workflows for performance and cost efficiency.
- Monitor, troubleshoot, and enhance data pipeline performance in production environments.
- Ensure high data quality, consistency, and reliability across data systems.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Implement best practices for data engineering, including data modeling, partitioning, and optimization.
- Support production management activities, including incident resolution and performance tuning.
- Work with large datasets and distributed computing frameworks to handle complex data processing needs.
- Continuously improve system scalability, reliability, and maintainability.
Skills Required:
- Strong experience in Big Data technologies and data engineering practices is required.
- Hands-on expertise in Snowflake is essential.
- Experience with Spark and PySpark for large-scale data processing is required.
- Strong understanding of data pipeline development and optimization is expected.
- Experience in performance tuning, monitoring, and troubleshooting data systems is important.
- Knowledge of data modeling and data warehousing concepts is beneficial.
- Ability to work with complex, large-scale datasets in distributed environments is expected.
- Strong problem-solving and collaboration skills are important.
Education: Bachelor's degree in Computer Science, Engineering, or a related field is required.