
NStarX

Data Engineer


Job Description

We are looking for a skilled Data Engineer who can design, build, and maintain end-to-end data pipelines on AWS. The ideal candidate will work closely with clients and internal teams to collect event-based data from continuously running APIs, preprocess and transform the information, and load it into analytical data stores such as Redshift, DynamoDB, or OpenSearch. The role involves building automated pipelines, ensuring data quality, and supporting analytical dashboards and reporting.

The core responsibilities for the job include the following:

Data Collection and Ingestion

  • Work with continuously running AWS APIs to capture client/user events or keyword-based logs.
  • Develop AWS CLI scripts and automation logic to reliably collect and store logs (a minimal sketch follows below).
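
A minimal sketch of this kind of keyword-based log collection, using boto3 rather than raw AWS CLI calls; the log group, S3 bucket, and filter keyword are hypothetical placeholders, not resources named in this posting.

```python
import json
import time

import boto3

logs = boto3.client("logs")
s3 = boto3.client("s3")

def collect_events(log_group: str, keyword: str, bucket: str) -> int:
    """Pull recent keyword-matched events from CloudWatch Logs into S3."""
    now_ms = int(time.time() * 1000)
    resp = logs.filter_log_events(
        logGroupName=log_group,
        startTime=now_ms - 5 * 60 * 1000,  # look back five minutes
        filterPattern=keyword,
    )
    events = resp.get("events", [])
    if events:  # land the raw batch in S3 for downstream preprocessing
        s3.put_object(
            Bucket=bucket,
            Key=f"raw/events-{now_ms}.json",
            Body=json.dumps(events).encode("utf-8"),
        )
    return len(events)

# Hypothetical resource names, for illustration only.
collect_events("/hypothetical/app-logs", "checkout", "hypothetical-raw-bucket")
```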

Data Preprocessing

  • Clean, sanitize, and preprocess raw data (remove nulls, duplicates, and unwanted fields), as sketched below.
  • Standardize and validate incoming data for downstream processing.
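
A rough illustration of these preprocessing steps in plain Python, assuming a hypothetical JSON event schema (the posting does not specify one):

```python
REQUIRED = {"event_id", "timestamp", "user_id"}  # hypothetical schema
UNWANTED = {"debug_info"}                        # hypothetical field to drop

def preprocess(events: list[dict]) -> list[dict]:
    """Drop incomplete records, de-duplicate, and strip unwanted fields."""
    seen = set()
    clean = []
    for ev in events:
        # Validate: every required field must be present and non-null.
        if not REQUIRED.issubset(ev) or any(ev[k] is None for k in REQUIRED):
            continue
        # De-duplicate on the event identifier.
        if ev["event_id"] in seen:
            continue
        seen.add(ev["event_id"])
        clean.append({k: v for k, v in ev.items() if k not in UNWANTED})
    return clean
```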

Data Transformation

  • Convert raw event data into structured formats suitable for analytics.
  • Design transformation logic to derive meaningful metrics and KPIs (see the sketch below).
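
One small example of such transformation logic, on the same hypothetical schema: rolling raw events up into a KPI-ready count per day and event type.

```python
from collections import Counter
from datetime import datetime, timezone

def daily_event_counts(events: list[dict]) -> list[dict]:
    """Aggregate raw events into one row per (day, event_type)."""
    counts = Counter()
    for ev in events:
        # Assumes epoch-millisecond timestamps; adjust if the source differs.
        day = datetime.fromtimestamp(ev["timestamp"] / 1000, tz=timezone.utc).date()
        counts[(day.isoformat(), ev.get("event_type", "unknown"))] += 1
    return [
        {"day": day, "event_type": etype, "events": n}
        for (day, etype), n in sorted(counts.items())
    ]
```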

Data Computation and Processing

  • Perform calculations, aggregations, and enrichments on preprocessed datasets (an enrichment example is sketched below).
  • Build scalable and optimized data processing workflows.
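
Enrichment is often a lookup join. A toy version against an in-memory reference table is below; in practice the lookup might query DynamoDB or Redshift, which this posting does not specify.

```python
# Hypothetical reference data; a real pipeline would query a data store.
USER_PLANS = {"u-1": "pro", "u-2": "free"}

def enrich(events: list[dict]) -> list[dict]:
    """Attach each user's plan tier to their events."""
    for ev in events:
        ev["plan"] = USER_PLANS.get(ev["user_id"], "unknown")
    return events
```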

Data Storage

  • Load processed and structured data into relational and non-relational AWS databases (see the DynamoDB sketch below).
  • Preferred databases include AWS Redshift, DynamoDB, OpenSearch/Elasticsearch, and other AWS-managed data stores.
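
A minimal DynamoDB load using boto3's batch writer, with a hypothetical table name; a Redshift load would more typically COPY from S3, and OpenSearch would use its bulk indexing API.

```python
import boto3

# Hypothetical table; assumes "event_id" is the partition key.
table = boto3.resource("dynamodb").Table("hypothetical-events")

def load(records: list[dict]) -> None:
    """Write processed records with automatic batching of puts."""
    with table.batch_writer() as batch:
        for rec in records:
            batch.put_item(Item=rec)
```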

Pipeline Automation

  • Build and maintain automated end-to-end pipelines for continuous data collection and processing (a Lambda-based sketch follows below).
  • Implement monitoring, error handling, and performance optimization.
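
A scheduled Lambda handler is one common shape for tying the stages together. The sketch below uses stub stage functions so it stands alone, and its error handling simply logs and re-raises so Lambda's retry or dead-letter configuration can take over.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

# Stubs standing in for the stage sketches shown earlier.
def collect() -> list[dict]:
    return [{"event_id": "e-1", "timestamp": 0, "user_id": "u-1"}]

def preprocess(events: list[dict]) -> list[dict]:
    return events

def transform(events: list[dict]) -> list[dict]:
    return events

def load(rows: list[dict]) -> None:
    log.info("loaded %d rows", len(rows))

def handler(event, context):
    """Entry point, e.g. for an EventBridge schedule."""
    try:
        rows = transform(preprocess(collect()))
        load(rows)
        return {"status": "ok", "rows": len(rows)}
    except Exception:
        log.exception("pipeline run failed")  # surfaces in CloudWatch Logs
        raise  # let Lambda retries / DLQ handle the failure
```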

Analytics and Dashboard Support

  • Collaborate with analytics teams to maintain dashboards, cards, and filters.
  • Exposure to BI tools like DOMO is a strong plus.

Client Interaction

  • Gather data requirements directly from clients.
  • Report status, findings, and outcomes effectively to stakeholders.

Requirements

  • Strong understanding of the AWS ecosystem, including AWS CLI, Lambda, S3, CloudWatch, API Gateway, Redshift, DynamoDB, and OpenSearch/Elasticsearch (hands-on experience with any of these is relevant).
  • Experience with data ingestion from APIs, log collection, and event tracking.
  • Hands-on experience in data cleaning, preprocessing, and transformation; writing ETL/ELT logic; and structuring unstructured or semi-structured data such as JSON, logs, and events (a flattening sketch follows this list).
  • Basic to intermediate Python programming for data workflows.
  • Proficiency with AWS CLI scripting and automation.
  • Knowledge of data modeling, warehouse concepts, and analytics workflows.
  • Experience with pipeline orchestration (Airflow, AWS Step Functions, Lambda-based workflows, or similar; any one is fine).
  • Experience building dashboards or supporting BI tools.
  • Knowledge of DOMO is a strong advantage.
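
As a small illustration of the structuring point above, nested JSON events are often flattened into dotted column names before loading into a warehouse; the record shape here is invented.

```python
def flatten(record: dict, prefix: str = "") -> dict:
    """Flatten nested JSON into dotted keys, ready for a columnar store."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

print(flatten({"user": {"id": "u-1", "geo": {"country": "IN"}}, "event": "click"}))
# {'user.id': 'u-1', 'user.geo.country': 'IN', 'event': 'click'}
```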

Soft Skills / Additional Requirements

  • Strong analytical and problem-solving skills.
  • Ability to interact directly with clients to gather requirements.
  • Clear communication and documentation abilities.
  • Ownership mindset with attention to detail.
  • Ability to work in a fast-paced environment with minimal supervision.

This job was posted by Younus Shaik from NStarX.

Job ID: 143881359
