Job description
- At least 5 years of experience as a Data Engineer
- Hands-on, in-depth experience with Star/Snowflake schema design, data modeling, data pipelines, and MLOps
- Strong background in streaming technologies such as Kafka, Amazon MSK, Amazon Kinesis, Amazon Data Firehose, and Snowpipe
- Expertise in CloudOps, including setting up infrastructure with tools such as Terraform and Pulumi
- Experience with data warehouse technologies (e.g., Snowflake with CDC, Amazon Redshift)
- Experience building data pipelines on AWS (Lambda, AWS Glue, Step Functions, etc.)
- Strong experience in building ETL pipelines
- Proficient in SQL
- Proficiency in at least one major programming language (Python or Java)
- Experience with data analysis tools such as Looker or Tableau
- Experience with Pandas, NumPy, scikit-learn, and Jupyter notebooks preferred
- Familiarity with Git, GitHub, and JIRA
- Sound understanding of delta loading
- Ability to locate and resolve data quality issues
- Demonstrated end-to-end data platform support experience
Preferred domain: Private Equity / Venture Capital
- Required skills: Star/Snowflake schema design, data modeling, Kafka, AWS, Amazon Kinesis, CDC, Amazon Redshift, AWS Glue, SQL, Git, GitHub, JIRA, Pandas, NumPy, scikit-learn, Private Equity