This role is for one of our clients
Company Name: EazyML
Industry: Software Development
Seniority level: Mid-Senior level
Min Experience: 5 years
Location: Remote (India)
Job Type: Full-time
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
EazyML (www.EazyML.com), recognized by Gartner, specializes in Responsible AI. Our solutions facilitate proactive compliance and sustainable automation, and the company is associated with breakthrough startups like Amelia.ai.
This is a full-time remote role for a Senior ETL Data Architect / Lead Data Engineer (Databricks and Microsoft Fabric experience). THIS IS NOT AN ENTRY-LEVEL JOB. DO NOT APPLY IF YOU DON'T HAVE AT LEAST 5 YEARS OF DATA ENGINEERING EXPERIENCE. NO EXCEPTIONS.
You can work from home anywhere in India (the job location is in India) and will be responsible for researching and developing the EazyML platform, as well as helping solve customer problems.
This is a fully remote position; you can work from anywhere in India.
We're hiring a Senior Data Architect to design and lead scalable, cloud-native data platforms using Databricks, Apache Airflow, Spark, and Delta Lake. The ideal candidate has experience in data architecture, ETL/ELT strategy, and orchestration; builds high-performance pipelines in Python and SQL; and partners with ML teams on LLM/BERT-based data workflows. This role emphasizes architecture, governance, and best practices, alongside hands-on data engineering.
In this role, you will architect and build high-performance data engineering solutions, develop robust ETL/ELT pipelines, and lead initiatives across data architecture, feature engineering, and AI-driven data workflows. You will work extensively with Databricks, Spark, Delta Lake, Airflow, and Python, while applying modern GenAI tools to improve developer productivity and data quality.
Key Skills:
- Databricks
- Apache Airflow
- Spark
- Delta Lake
- Data Architecture
- ETL/ELT
- Python
- SQL
- Azure
- LLMs/NLP
Experience: 5+ years | CS/IT degree preferred