Job Description
Key Responsibilities
- Design, build, and optimize scalable ETL pipelines for large-scale data processing.
- Work on data aggregation, transformation, and cleaning for downstream analytics and AI/ML workloads.
- Implement data reporting and dashboarding solutions to support business insights.
- Collaborate with cross-functional teams, including AI/ML engineers, solution architects, and business stakeholders, to deliver end-to-end data solutions.
- Ensure high data quality, reliability, and performance across platforms.
Required Skills
- Ab Initio: hands-on experience in data processing and pipeline development.
- VortexAI: exposure to AI/ML-driven data solutions or automation frameworks.
- SuperPlex Dashboarding: ability to build dashboards for reporting and insights.
- ETL Development: strong knowledge of ETL pipelines and workflow orchestration.
- Data Aggregation & Cleaning: proven expertise in preparing and optimizing datasets.
- Data Reporting & Dashboarding: experience with visualization tools and reporting frameworks.
- Strong SQL and database skills; RDBMS required, NoSQL preferred.
- Programming experience in Python, Scala, or Java for data processing.
- Familiarity with cloud platforms (AWS/Azure/GCP) for data engineering is preferred.
Good to Have
- Experience working with big data frameworks (Spark, Hadoop).
- Exposure to CI/CD practices and data pipeline automation.
- Knowledge of data governance, security, and compliance standards.
Qualifications
BE/BTech
Experience: 4–8 years