Responsibilities
- Provide technical support and troubleshooting for Big Data applications and systems built on the Hadoop ecosystem
- Monitor system performance, analyze logs, and identify potential issues before they impact services
- Collaborate with engineering teams to deploy and configure Hadoop clusters and related components
- Assist in the maintenance and upgrade of Hadoop environments to ensure optimal performance and security
- Develop and maintain documentation for processes, procedures, and system configurations
- Implement data backup and recovery procedures to ensure data integrity and availability
- Participate in on-call rotations to provide after-hours support as needed
- Stay up to date with Hadoop technologies and support methodologies
- Assist in the training and onboarding of new team members and users on Hadoop best practices
Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 3+ years of experience in Big Data support or system administration, specifically with the Hadoop ecosystem
- Strong understanding of Hadoop components (HDFS, MapReduce, Hive, Pig, etc.)
- Experience with system monitoring and diagnostics tools
- Proficiency in Linux/Unix command-line tools and scripting languages (Bash, Python)
- Basic understanding of database technologies and data warehousing concepts
- Strong problem-solving skills and ability to work under pressure
- Excellent communication and interpersonal skills
- Ability to work independently as well as collaboratively in a team environment
- Willingness to learn new technologies and enhance skills
Skills: Hadoop, Spark/Scala, HDFS, SQL, Unix Scripting, Data Backup, System Monitoring
Benefits
- Competitive salary and benefits package
- Opportunity to work on cutting-edge technologies and solve complex challenges
- Dynamic and collaborative work environment with opportunities for growth and career advancement
- Regular training and professional development opportunities