We are seeking Data Consultants (3-4 positions) for a significant project with one of our clients. This role requires hands-on expertise in scalable web data engineering solutions. Your scope will be as follows:
Key Responsibilities
- Define and design scalable scraping architecture, data ingestion pipelines, storage strategies, and monitoring frameworks.
- Establish engineering standards for resilience, observability, error handling, and performance optimization.
- Review and validate production-grade code, scraping logic, system design, and CI/CD practices.
- Implement monitoring and alerting frameworks to ensure data reliability and SLA adherence.
- Ensure structured delivery planning, sprint governance, and milestone alignment.
Required Experience & Skills
- 8+ years of experience in Data Engineering, Web Data Engineering, or Data Consulting roles.
- Deep hands-on expertise in Python-based scraping frameworks (e.g., Scrapy, Selenium, Playwright), APIs, headless browsers, and anti-bot handling techniques.
- Strong understanding of distributed systems, scalable pipeline design, cloud platforms (AWS/GCP/Azure), and containerized deployments.
- Experience building production-grade data pipelines (ETL/ELT), data lakes, and orchestration tools (e.g., Airflow or similar).
- Proven ability to review and govern system architecture and code quality.
- Experience leading multi-stakeholder technical programs in enterprise environments.
- Familiarity with data compliance, privacy regulations, and legal considerations in external data acquisition.
- Experience managing or collaborating with distributed engineering teams.
- Strong executive communication skills with the ability to translate complex technical concepts into business value.
Note: This is a hybrid role based in Bengaluru.