
Muoro - Data Engineer - Python/PySpark

  • Posted 17 days ago

Job Description

Company Overview

Muoro is a rapidly growing data analytics and solutions provider, empowering businesses across various sectors, including finance, healthcare, and e-commerce, to make data-driven decisions. We specialize in building scalable data pipelines and advanced analytics platforms that unlock valuable insights from complex datasets. Our collaborative and innovative culture fosters continuous learning and professional growth.

Role Overview

We are seeking a highly motivated and experienced Data Engineer to join our dynamic team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and data warehousing solutions.

You will collaborate closely with data scientists, analysts, and other engineers to ensure the availability, reliability, and performance of our data infrastructure, enabling data-driven decision-making across the organization. Your work will directly impact our ability to deliver cutting-edge analytics solutions to our clients and drive business growth.

Key Responsibilities

  • Design and develop scalable ETL pipelines using PySpark, Python, and other relevant technologies to ingest, transform, and load data from various sources into our data warehouse.
  • Build and maintain data warehousing solutions on platforms like AWS, Azure, or Databricks to ensure data quality, consistency, and accessibility for analytical purposes.
  • Implement data quality checks and monitoring systems to proactively identify and resolve data issues, ensuring the integrity of our data assets.
  • Optimize data pipelines and queries for performance and efficiency to meet the growing demands of our data-intensive applications.
  • Collaborate with data scientists and analysts to understand their data requirements and provide them with the necessary data infrastructure and support.
  • Contribute to the development of data engineering best practices and standards to ensure consistency and maintainability across our data infrastructure.
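The first and third responsibilities above — transforming ingested records and running data-quality checks before loading — can be sketched as follows. In practice this would run on PySpark DataFrames; a plain-Python version keeps the example self-contained, and all record and field names are illustrative, not Muoro's actual schema.

```python
# Sketch of the transform + data-quality-check pattern from the
# responsibilities above. Field names ("id", "customer", "amount")
# are hypothetical.

def transform(records):
    """Normalize raw records: strip whitespace, cast amounts to float."""
    cleaned = []
    for rec in records:
        cleaned.append({
            "id": rec["id"],
            "customer": rec["customer"].strip().lower(),
            "amount": float(rec["amount"]),
        })
    return cleaned

def quality_check(records):
    """Collect issues (duplicate keys, negative amounts) before loading."""
    issues = []
    seen_ids = set()
    for rec in records:
        if rec["id"] in seen_ids:
            issues.append(f"duplicate id {rec['id']}")
        seen_ids.add(rec["id"])
        if rec["amount"] < 0:
            issues.append(f"negative amount on id {rec['id']}")
    return issues

raw = [
    {"id": 1, "customer": " Acme ", "amount": "19.99"},
    {"id": 2, "customer": "Globex", "amount": "5.00"},
]
clean = transform(raw)
assert quality_check(clean) == []  # clean batch passes the gate
```

The same two-stage shape (pure transform, then an explicit validation gate that blocks the load) maps directly onto PySpark column expressions and filter counts at scale.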

Required Skillset

  • Demonstrated ability to design, develop, and maintain ETL pipelines using PySpark and Python.
  • Proven expertise in SQL and data warehousing concepts, with experience in building and optimizing data warehouses on platforms like AWS, Azure, or Databricks.
  • Experience with cloud-based data engineering tools and services, such as AWS Glue, Azure Data Factory, or Databricks Delta Lake.
  • Strong understanding of data modeling principles and techniques, with the ability to design efficient and scalable data schemas.
  • Excellent problem-solving and analytical skills, with the ability to troubleshoot data issues and identify root causes.
  • Effective communication and collaboration skills, with the ability to work effectively in a team environment.
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Willingness to work in a hybrid arrangement based in Delhi NCR, Gurgaon/Gurugram, or Bangalore.
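The "SQL and data warehousing concepts" bullet above centers on star-schema design: fact tables joined to dimension tables and aggregated by dimension attributes. A minimal sketch using the stdlib `sqlite3` module (table and column names are hypothetical):

```python
import sqlite3

# Minimal star schema: one fact table joined to one dimension table.
# All names are illustrative, not any particular production schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (10, 1, 12.5), (11, 1, 7.5), (12, 2, 30.0);
""")

# Aggregate fact rows by a dimension attribute -- the typical warehouse query shape.
rows = cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
```

On Databricks or a cloud warehouse the same query shape applies; only the engine and table formats (e.g. Delta Lake) change.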

(ref:hirist.tech)

Job ID: 144007973