
Key Responsibilities:
• Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy (an illustrative sketch follows this list).
• Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
• Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
• Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes (see the tuning sketch after this list).
• Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline (see the validation sketch after this list).
• Automation and Orchestration: Automate data workflows with Apache Oozie, Apache Airflow, or similar orchestration frameworks within the Cloudera ecosystem (see the DAG sketch after this list).
• Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
• Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives.
• Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
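
By way of illustration, here is a minimal PySpark sketch of the kind of ingest-transform-load pipeline described in the bullets above. The JDBC URL, credentials, table names, and output table are hypothetical placeholders, not part of this posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")   # hypothetical job name
    .enableHiveSupport()     # allows writing managed Hive tables on CDP
    .getOrCreate()
)

# Ingest: pull a source table from a relational database over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-db:5432/sales")  # placeholder
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Transform: cleanse and reshape into an analysis-friendly format.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"),
         F.count("*").alias("order_count"))
)

# Load: persist as a partitioned Hive table in the data lake.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.daily_revenue")  # hypothetical target table
)
```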
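For the performance-optimization responsibility, a hedged sketch of three common PySpark tuning moves: broadcasting a small dimension table to avoid a shuffle join, setting the shuffle-partition count, and caching a DataFrame reused by several downstream writes. The paths and the partition count of 200 are assumptions, not recommendations:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

# Assumed starting point; tune to cluster size and data volume.
spark.conf.set("spark.sql.shuffle.partitions", "200")

facts = spark.read.parquet("/data/lake/facts")        # hypothetical path
dims = spark.read.parquet("/data/lake/dim_product")   # hypothetical path

# Broadcast the small dimension table so the join avoids a full shuffle.
enriched = facts.join(broadcast(dims), "product_id")

# Cache a DataFrame that several downstream aggregations reuse.
enriched.cache()

by_region = enriched.groupBy("region").agg(F.sum("amount").alias("total"))
by_product = enriched.groupBy("product_id").agg(F.count("*").alias("n"))

by_region.write.mode("overwrite").parquet("/data/marts/by_region")
by_product.write.mode("overwrite").parquet("/data/marts/by_product")

enriched.unpersist()
```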
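For the data-quality responsibility, a minimal validation sketch that counts rule violations and fails fast so the scheduler can alert. It reuses the hypothetical analytics.daily_revenue table from the first sketch:

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("dq_checks").enableHiveSupport().getOrCreate()
)

df = spark.table("analytics.daily_revenue")  # hypothetical table

total = df.count()

# Rule 1: key columns must never be null.
null_keys = df.filter(
    F.col("order_date").isNull() | F.col("region").isNull()
).count()

# Rule 2: revenue must be non-negative.
negative = df.filter(F.col("revenue") < 0).count()

# Rule 3: (order_date, region) must be unique.
dupes = total - df.select("order_date", "region").distinct().count()

checks = {"null_keys": null_keys, "negative_revenue": negative, "duplicates": dupes}
failed = {name: n for name, n in checks.items() if n > 0}

# Fail loudly so the orchestrator marks the run as failed and alerts.
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```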
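For the orchestration responsibility, a minimal Airflow 2.x DAG sketch that submits the PySpark job daily. It assumes the apache-airflow-providers-apache-spark package is installed and a spark_default connection pointing at the cluster; the DAG id, schedule, and script path are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="orders_etl_daily",        # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    run_etl = SparkSubmitOperator(
        task_id="run_orders_etl",
        application="/opt/jobs/orders_etl.py",  # hypothetical script path
        conn_id="spark_default",
        name="orders_etl",
    )
```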
Education and Experience:
• Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
• 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
Technical Skills:
• PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques (see the sketch after this list).
• Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
• Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
• Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
• Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
• Scripting and Automation: Strong shell scripting skills in a Linux environment.
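
To make the PySpark and SQL skill bullets concrete, a small sketch contrasting the RDD and DataFrame APIs and querying a Hive table through Spark SQL. The table name reuses the hypothetical analytics.daily_revenue from the sketches above:

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("api_demo").enableHiveSupport().getOrCreate()
)

# RDD API: low-level, functional transformations on raw records.
rdd = spark.sparkContext.parallelize([("a", 1), ("b", 2), ("a", 3)])
totals_rdd = rdd.reduceByKey(lambda x, y: x + y).collect()

# DataFrame API: declarative and optimized by Catalyst; usually preferred.
df = spark.createDataFrame([("a", 1), ("b", 2), ("a", 3)], ["key", "value"])
totals_df = df.groupBy("key").agg(F.sum("value").alias("total"))
totals_df.show()

# Spark SQL against a Hive table in the metastore (placeholder table name).
recent = spark.sql(
    "SELECT region, SUM(revenue) AS revenue "
    "FROM analytics.daily_revenue "
    "WHERE order_date >= date_sub(current_date(), 7) "
    "GROUP BY region"
)
recent.show()
```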
Soft Skills:
• Strong analytical and problem-solving skills.
• Excellent verbal and written communication abilities.
• Ability to work independently and collaboratively in a team environment.
• Attention to detail and commitment to data quality.
About ValueLabs:
ValueLabs is a global technology company providing consulting, technology, and outsourcing services. Established in 1997, we focus on digital solutions that drive innovation and long-term partnerships. With over 7,000 employees in 30+ locations, we specialize in Product Development, Data Technology, DevOps, Cloud Computing, and Digital Transformation. Our emphasis on quality and client-centricity ensures exceptional customer experiences and business outcomes. We deliver scalable, cutting-edge solutions, helping businesses thrive in the digital age through collaboration and technology-driven excellence.
Job ID: 105049101