As a Senior Associate L1 in Data Engineering, you'll be instrumental in translating client requirements into technical designs and implementing critical components for data engineering solutions. This role demands a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions effectively. You'll independently lead design discussions to ensure the overall health and robustness of the solution.
Your Impact: What You'll Achieve
- Contribute to Data Ingestion, Integration, and Transformation.
- Work with Data Storage and Computation Frameworks, focusing on performance optimizations.
- Support Analytics & Visualizations.
- Develop solutions related to Infrastructure & Cloud Computing.
- Engage with Data Management Platforms.
- Build functionality for data ingestion from multiple heterogeneous sources in both batch and real-time.
- Develop functionality for data analytics, search, and aggregation.
Qualifications: Your Skills & Experience
- Minimum 2 years of experience in Big Data technologies.
- Hands-on experience with the Hadoop ecosystem and related frameworks, including HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components necessary for building end-to-end data pipelines.
- Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one programming language: Java, Scala, or Python, with Java preferred.
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery, etc.
- Well-versed in, and with working knowledge of, data platform-related services on Azure.
- Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.
Set Yourself Apart With
- Hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, PostgreSQL).
- Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
- Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and Microservices architectures.
- Experience in performance tuning and optimization of data pipelines.
- Cloud data specialty certifications or other related Big Data technology certifications.