
Circana

Senior Data Engineer - Python, PySpark, SQL

  • Posted 12 days ago

Job Description

Who are we

Circana is a leading provider of market research, data analytics, and consumer insights, serving over 7,000 global brands across 26 industries. Formed through the merger of IRI and NPD, Circana leverages decades of expertise and its proprietary Liquid Data platform to help businesses understand consumer behavior, measure market performance, and drive growth. Headquartered in Chicago, the company combines advanced technology, AI, and a vast data ecosystem to deliver actionable intelligence to clients in the retail, manufacturing, and consumer packaged goods sectors.

What will you be doing

We are seeking a seasoned Senior Data Engineer to lead the development of scalable data solutions in a cloud-native environment. This role involves designing robust data pipelines, optimizing data warehouse performance, and working with in-house tools to support advanced analytics and business intelligence initiatives.

Job Responsibilities

  • Design, develop, and optimize ETL/ELT pipelines using PySpark, Python, and SQL.
  • Architect and implement data warehouse solutions on Cloud platforms (AWS, Azure, GCP).
  • Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
  • Learn and work extensively with in-house data tools and frameworks, contributing to their enhancement and integration.
  • Ensure data quality, governance, and security across all data assets.
  • Mentor junior engineers and promote best practices in data engineering.
  • Drive performance tuning, cost optimization, and automation of data workflows.
  • Participate in Agile ceremonies, code reviews, and technical design discussions.

Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 5-10 years of experience in data engineering, with a strong focus on cloud-based data solutions.
  • Proficiency in Python, PySpark, and SQL for data processing and transformation.
  • Hands-on experience with Cloud platforms (AWS, Azure, or GCP) and their data services.
  • Solid understanding of data warehousing concepts, dimensional modeling, and data architecture.
  • Experience with version control (Git), CI/CD pipelines, and Agile methodologies.
  • Ability to quickly learn and adapt to internal tools and technologies.
  • Experience with orchestration tools such as Apache Airflow.
  • Knowledge of data visualization tools (e.g., Power BI, Tableau).
  • Strong communication and leadership skills.

Experience: 5-10 Years

Tech Stack: Data Engineering, Python, PySpark, SQL

Location: Whitefield, Bangalore (Hybrid Model)


Job ID: 139735721