
Eucloid Data Solutions

Senior Data Engineer

  • Posted 15 hours ago
  • Be among the first 10 applicants

Job Description

Position: Senior Data Engineer

Work location: Chennai

Experience needed: 3+ years

Must-have experience: AWS / GCP; lead experience; Python; PySpark; SQL


Eucloid is looking for a Senior Data Engineer to join our Data Platform team, which supports various business applications.

The ideal candidate will support the development of data infrastructure for our clients, with responsibilities ranging from upstream and downstream technology selection to the design and build of individual components. The candidate will also work on projects such as integrating data from various sources and managing big data pipelines that are easily accessible and deliver optimized performance across the ecosystem.

The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Key Skills:

B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or another engineering discipline

3+ years of experience in data engineering, building scalable and reliable data pipelines in production environments.

Strong experience with cloud data platforms, preferably Google Cloud Platform, with familiarity in services like BigQuery, Cloud Storage, and Dataflow. Experience with Amazon Web Services is a plus.

Hands-on expertise with distributed data processing frameworks such as Apache Spark for large-scale batch and/or streaming data processing.

Strong experience with workflow orchestration tools such as Apache Airflow for designing, scheduling, and monitoring complex data pipelines.

Expert-level SQL skills with the ability to write optimized queries, perform complex transformations, and tune queries for large-scale analytical workloads.

Proficiency in Python for data engineering tasks including pipeline development, automation, data transformations, and integration with APIs or cloud services.

Deep understanding of data warehousing concepts and hands-on experience with platforms such as Google BigQuery or Amazon Redshift.

Strong knowledge of data modeling techniques, including dimensional modeling, star/snowflake schemas, and designing data models optimized for analytics and reporting.

Experience designing scalable ETL/ELT architectures, ensuring high data quality, reliability, and performance for large-scale data platforms.

Responsibilities

Design, implement, and improve processes and automation for data infrastructure.

Tune data pipelines for reliability and performance.

Build tools and scripts to develop, monitor, and troubleshoot ETLs.

Perform scalability, latency, and availability tests on a regular basis.

Perform code reviews and QA data imported by various processes.

Investigate, analyze, correct, and document reported data defects.

Create and maintain technical specification documentation.

Interested? Share your resume at [Confidential Information]


Job ID: 145646559
