Key Responsibilities:
- Design, develop, and deploy big data solutions using technologies such as Hadoop, Spark, Hive, HBase, and Kafka.
- Develop and maintain data pipelines for data ingestion, transformation, and loading (ETL/ELT), as sketched after this list.
- Perform data analysis and extract meaningful insights from large datasets.
- Build and optimize data models for data warehouses and data lakes.
- Develop and implement data quality checks and data governance policies.
- Collaborate with data scientists, data analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
- Stay abreast of the latest advancements in big data technologies and best practices.
- Troubleshoot and resolve big data system issues.
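Illustrative example (for context only): the sketch below shows a minimal PySpark ETL pipeline with a simple data quality gate, reflecting the kind of pipeline work described above. It is a hedged sketch, not a prescribed implementation; the paths, column names (user_id, event_ts), and bucket names are hypothetical placeholders.

    # Minimal illustrative PySpark ETL sketch. All paths, columns, and
    # bucket names below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Ingest: read raw JSON events from a hypothetical landing zone.
    raw = spark.read.json("s3://example-bucket/landing/events/")

    # Transform: normalize the timestamp and derive a partition column.
    events = (
        raw.withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Data quality gate: fail the batch if required keys are missing.
    bad_rows = events.filter(
        F.col("user_id").isNull() | F.col("event_ts").isNull()
    ).count()
    if bad_rows > 0:
        raise ValueError(f"Quality check failed: {bad_rows} rows with null keys")

    # Load: append partitioned Parquet to a hypothetical warehouse path.
    (events.write
           .mode("append")
           .partitionBy("event_date")
           .parquet("s3://example-bucket/warehouse/events/"))

    spark.stop()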
Required Skills and Experience:
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- [Number] years of professional experience in big data development.
- Strong proficiency in Java, Python, or Scala.
- Experience with big data technologies such as Hadoop, Spark, Hive, HBase, and Kafka.
- Experience with SQL and NoSQL databases.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.