Role Summary:
We are seeking a highly motivated Data Systems Engineer to design, build, and maintain scalable data infrastructure that supports efficient storage, processing, and retrieval of large datasets. The ideal candidate will play a key role in developing reliable data pipelines, ensuring data quality, and providing analytics and data science teams with clean, accessible data.
Key Responsibilities:
- Design, develop, and maintain robust ETL/ELT data pipelines.
- Build scalable data processing solutions for structured and unstructured data.
- Integrate data from multiple sources including APIs, databases, and third-party systems.
- Ensure data quality, integrity, and consistency, and uphold data governance standards.
- Monitor and optimize data workflows for performance and reliability.
- Manage and maintain big data platforms and distributed processing systems.
- Design and maintain data warehouse solutions to support reporting and analytics.
- Collaborate with analytics, BI, and data science teams to understand data requirements.
- Implement security and access controls for sensitive data.
- Document data architecture, data models, and operational procedures.
Required Skills & Qualifications:
- Hands-on experience with ETL tools such as Apache NiFi, Talend, or similar platforms.
- Strong knowledge of big data technologies such as Hadoop and Spark.
- Proficiency in SQL and experience working with relational and NoSQL databases.
- Experience with data warehousing solutions such as Snowflake or Amazon Redshift.
- Familiarity with cloud data platforms (AWS, Azure, or GCP).
- Understanding of data modeling concepts and data architecture best practices.
- Experience with workflow orchestration tools such as Apache Airflow.
- Strong problem-solving and analytical skills.
- Good communication and collaboration skills.
Preferred Qualifications:
- Experience with real-time data streaming technologies (Kafka, Kinesis).
- Knowledge of DevOps practices and CI/CD for data pipelines.
- Exposure to containerization tools (Docker, Kubernetes).
- Experience working in Agile/Scrum environments.
Experience:
- 12 years of experience in data engineering, data systems, or related roles.