
Location: Ahmedabad
Job Type: Full-Time
Experience: 3 to 5 years
Department: Data Engineering / Analytics
About Company: Zenithive delivers smart, data-driven solutions that empower businesses across industries. Our mission is to combine deep, domain-specific expertise with cutting-edge technology to drive meaningful impact. With a trusted team, consistent quality, and a growing global presence, we remain committed to delivering excellence while staying true to our core values: innovation, integrity, and client success. Be part of a team that's not just building solutions, but shaping the future with intelligence.
Role & Responsibilities: We are looking for a highly skilled and experienced Senior Databricks Data Engineer to join our data engineering team. In this role, you will be responsible for designing, building, and optimizing scalable data pipelines and solutions using the Databricks Lakehouse Platform. You will work closely with data scientists, analysts, and business stakeholders to ensure reliable and efficient data delivery across the organization.
• Design and develop scalable ETL/ELT pipelines using Apache Spark on Databricks.
• Build and maintain robust data lakes and data warehouses (Delta Lake, Lakehouse architecture).
• Optimize data flows for performance, scalability, and cost-efficiency on the Databricks platform.
• Collaborate with data analysts, scientists, and other engineers to integrate data from diverse sources.
• Implement data quality checks, monitoring, and alerting solutions.
• Develop CI/CD pipelines for data jobs and workflows.
• Support the migration of legacy data systems to the Databricks ecosystem.
• Contribute to data governance and security best practices.
Requirements
Must-Have:
• 4+ years of experience in data engineering.
• Strong proficiency in Apache Spark, PySpark, SQL, Python, and Delta Lake.
• Experience with cloud platforms (Azure, AWS, or GCP), preferably Azure Databricks, Azure Data Factory, and Azure Synapse.
• Deep understanding of data modeling, data warehousing, and ETL processes.
• Hands-on experience with job orchestration tools like Databricks Workflows, Airflow, or similar.
• Solid understanding of CI/CD, version control (Git), and DevOps practices for data projects.
Nice-to-Have:
• Experience with streaming data (Structured Streaming, Kafka).
• Familiarity with tools like dbt, MLflow, or Power BI/Tableau.
• Experience in a regulated or enterprise environment (finance, healthcare, etc.).
Education
• Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
Why Zenithive
• Be part of building something from the ground up in a high-growth, high-impact domain.
• Work alongside passionate experts in AI, data, and industry consulting.
• Competitive base + uncapped commission structure tied directly to performance.
• Remote-first flexibility with real ownership and career growth potential.
Perks & Benefits
• Flexible working hours to support work-life balance.
• 5-day work week.
• A healthy, inclusive, and collaborative work environment.
• Fun Fridays and festive celebrations to foster team spirit.
• Opportunities for continuous learning and career growth.
• Annual company trip for team building and relaxation.
• Comprehensive medical insurance benefits.
• Performance-based bonuses and annual salary revisions.
• Hybrid working model, allowing a mix of in-office and remote work as per company policy.
Job ID: 146715771