We are seeking a Mid-level Data Engineer with a strong background in big data technologies to join our dynamic team. The ideal candidate will be instrumental in designing, developing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. This role suits a professional who is passionate about building scalable, reliable data solutions and eager to contribute to our data-driven initiatives.
Key Responsibilities
- Design and develop robust data processing pipelines and analytics solutions primarily using Databricks, PySpark, and SQL.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate closely with architects and other teams to facilitate the migration of existing solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to consistently meet service level agreements (SLAs) and business requirements.
- Implement best practices for data governance, security, and compliance within the Databricks environment.
- Mentor junior engineers, providing technical guidance and fostering their growth.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
- Contribute to a fast-paced environment, delivering scalable and reliable data solutions.
- Demonstrate strong communication and collaboration skills, working effectively within cross-functional teams.
Required Skills & Experience
- Total experience: 6+ years.
- Relevant experience in Data Engineering: 5+ years.
- Mandatory skills: Databricks (5+ years), PySpark (5+ years), Spark (5+ years), SQL (5+ years).
- Strong understanding of distributed computing principles and extensive experience with big data technologies such as Apache Spark.
- Proven track record of delivering scalable and reliable data solutions.
- Cloud platforms: 2+ years of experience with AWS or another cloud platform (e.g., Azure, GCP) and its associated data services.
- Problem-solving: excellent analytical skills and meticulous attention to detail.
Good to Have Skills
- Experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Logistics & Compensation
- Vendor billing rate: INR 8,000/day.
- Notice Period: 15 days.
- Work Model: Hybrid.
- Background Check: Post onboarding.
To Apply:
If you are a passionate Data Engineer looking for an exciting opportunity to work with cutting-edge technologies, please apply with your resume.