Job Overview:
- Design, develop, and maintain scalable data pipelines and systems using dbt and Big Data technologies.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Implement data models and transformations using dbt.
- Develop and maintain ETL pipelines that ingest and process large volumes of data from diverse sources.
- Optimize and troubleshoot data workflows to ensure high performance and reliability.
- Ensure data quality and integrity through rigorous testing and validation.
- Monitor and manage data infrastructure, ensuring security and compliance with best practices.
- Provide technical support and guidance to team members on data engineering best practices.
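The data-quality and validation responsibilities above can be sketched in Python, which the role also calls for. This is a minimal illustrative example, not part of the posting; the function name and validation rules (non-empty `id`, non-negative `amount`) are hypothetical:

```python
def validate_rows(rows):
    """Split raw ingested rows into valid and rejected records.

    Hypothetical quality rules: every row must carry a non-empty
    'id' and a numeric, non-negative 'amount'.
    """
    valid, rejected = [], []
    for row in rows:
        amount = row.get("amount")
        if row.get("id") and isinstance(amount, (int, float)) and amount >= 0:
            valid.append(row)
        else:
            rejected.append(row)  # quarantined for later inspection
    return valid, rejected
```

In a real pipeline a check like this would typically live in a dbt test or an ETL validation stage, with rejected records routed to a quarantine table rather than silently dropped.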
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in dbt for data modeling and transformations.
- Hands-on experience with Big Data technologies (e.g., Hadoop, Spark, Kafka).
- Proficient in Python for data processing and automation.
- Experience with SQL and database management.
- Familiarity with data warehousing concepts and best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of data governance and security practices.
- Certification in relevant technologies (e.g., dbt, Big Data platforms).