Job Description:
We are looking for a Data Engineer with 15 years of hands-on experience to design, build, and optimize scalable data pipelines and analytics solutions. You will play a key role in evolving our data infrastructure, ensuring our data is accessible, reliable, and optimized for high-performance analytics across the organization.
Designation: Data Engineer
Location: Remote
Key Responsibilities:
- Design, develop, and maintain robust data pipelines and scalable data models.
- Build and optimize ELT/ETL workflows to ingest data from various internal and external sources.
- Develop efficient SQL-based transformations and create analytics-ready data sets.
- Work with diverse datasets, including structured, semi-structured, and unstructured data.
- Optimize platform performance by monitoring storage usage, compute costs, and query efficiency.
- Collaborate with analytics, product, and business teams to understand data requirements.
- Ensure data quality, reliability, and governance throughout the data lifecycle.
- Participate in code reviews and maintain high standards for technical documentation.
Required Qualifications:
- 15 years of experience in Data Engineering or a similar backend role.
- Hands-on experience with modern cloud data warehouses (e.g., Snowflake, BigQuery, Redshift, or Databricks).
- Advanced SQL skills, including the ability to write and optimize complex queries.
- Experience with data orchestration and pipeline tools (e.g., Airflow, dbt, Fivetran, Matillion, etc.).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Strong understanding of data warehousing concepts, dimensional modeling, and schema design.
- Bachelor's degree in Engineering (Computer Science, IT, or a related technical field).
Nice to Have:
- Proficiency with dbt (data build tool).
- Relevant Cloud or Data Engineering certifications.
- Python or other scripting experience for data automation.
- Exposure to BI tools (Power BI, Tableau, Looker).