Position Title: Senior Data Engineer
Experience Required: 8 to 12 Years
Location: Remote
Job Summary:
We are looking for a highly skilled Senior Data Engineer with strong expertise in Databricks, PySpark, and big data technologies. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and analytics solutions that support business decision-making.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks and PySpark.
- Work with large-scale structured and unstructured data from various sources.
- Optimize data workflows for performance and reliability.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Implement data quality checks, monitoring, and alerting mechanisms.
- Ensure data security and compliance with governance policies.
- Participate in code reviews and contribute to best practices in data engineering.
Required Skills & Qualifications:
- 8 to 12 years of experience in data engineering or related roles.
- Strong hands-on experience with Databricks and Apache Spark (PySpark).
- Proficiency in Python and SQL.
- Experience with cloud platforms such as Azure, AWS, or GCP (preferably Azure).
- Solid understanding of ETL/ELT processes, data warehousing, and data modeling.
- Familiarity with Delta Lake and the Parquet file format.
- Experience with CI/CD pipelines and version control (e.g., Git).
- Excellent problem-solving and communication skills.
Preferred Qualifications:
- Knowledge of data governance, security, and compliance standards.
- Exposure to real-time data processing using tools like Kafka or Spark Streaming.
For More Details:
Contact: Magimai
Mobile: 8220698292
Email ID: [Confidential Information]