We are seeking an experienced Data Engineer with a strong background in cloud technologies and modern data storage systems. The ideal candidate will design, build, and optimize scalable data pipelines, ETL/ELT workflows, and data models that support high-performance analytics and reporting.
Key Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines for scalable and efficient data processing.
- Build and maintain data models, ensuring optimal performance and data quality for reporting and analytics.
- Implement and manage data integration workflows using tools such as Airflow, dbt, Kafka, or Spark (a minimal pipeline sketch follows this list).
- Collaborate with analytics and business teams to understand data needs and deliver solutions.
- Ensure robust data governance, including data quality, integrity, and security.
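For illustration, here is a minimal sketch of the kind of scheduled ETL pipeline this role involves, written with Apache Airflow's TaskFlow API (Airflow 2.4+). The DAG name, task logic, and data are hypothetical placeholders, not a prescription for how our pipelines are built.

```python
# A minimal daily extract -> transform -> load pipeline sketch using
# Airflow's TaskFlow API. All names and data here are illustrative.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_etl():
    @task
    def extract() -> list[dict]:
        # In practice this would pull from an API, queue, or source database.
        return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 0.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Example transformation: drop zero-value orders.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to a warehouse such as BigQuery.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


daily_sales_etl()
```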
Required Skills and Experience:
- Strong expertise in SQL, including (see the join sketch after this list):
  - Writing complex multi-table joins
  - Developing stored procedures
  - Running queries over connections secured with certificate-based authentication
- Hands-on experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB.
- Proficiency in data warehousing solutions:
  - Preferred: Google BigQuery (a query sketch follows this list)
  - Also valued: Amazon Redshift, Snowflake
- Programming skills in Python (including PySpark) or Scala.
- Experience working with Google Cloud Platform (GCP).
- Strong understanding of data modeling principles and best practices.
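To make the SQL expectations above concrete, here is a self-contained sketch of a join with aggregation, run through Python's standard-library sqlite3 module. The tables and data are hypothetical and exist only for this example.

```python
# A join-with-aggregation sketch: total order value per region, keeping
# customers with no orders via LEFT JOIN. Schema and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
    INSERT INTO orders VALUES (10, 1, 99.5), (11, 1, 12.0), (12, 2, 7.25);
""")

rows = conn.execute("""
    SELECT c.region, COALESCE(SUM(o.amount), 0) AS total
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY total DESC
""").fetchall()

for region, total in rows:
    print(region, total)
```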
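And a minimal sketch of querying BigQuery from Python with the google-cloud-bigquery client library. The project, dataset, and table names are hypothetical, and the client assumes default GCP credentials are already configured in the environment.

```python
# Running an aggregation query against BigQuery from Python.
# `my-project.sales.orders` is a hypothetical table.
from google.cloud import bigquery

client = bigquery.Client()  # picks up default GCP credentials

query = """
    SELECT region, SUM(amount) AS total
    FROM `my-project.sales.orders`
    GROUP BY region
    ORDER BY total DESC
"""

for row in client.query(query).result():
    print(row.region, row.total)
```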
Preferred/Good-to-Have:
- Experience with data visualization tools:
  - Google Looker Studio, LookML, Power BI, or Tableau
- Exposure to Master Data Management (MDM) systems.
- Interest or experience in Web3 data and blockchain analytics.