
Software Engineer

  • Posted 8 hours ago

Job Description

About Business Unit:

When you're one of us, you get to run with the best. For decades, we've been helping marketers from the world's top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon's best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents on proprietary technology, including real-time modelling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified™. Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. For more information, visit epsilon.com/apac or our LinkedIn page.

The Data Engineering team at Epsilon plays a pivotal role in building the next-generation data platforms that empower Epsilon's ecosystem. We design, develop, and optimize scalable, high-performance data solutions that process terabytes of data every day, driving intelligent insights for clients across industries. This team of problem-solvers and innovators leverages modern big data, cloud, and automation technologies to deliver reliable and efficient data products.

The candidate will be a member of the Data Engineering team and will be responsible for developing, unit testing, and implementing applications for the Data Engineering group, predominantly in the Hadoop ecosystem and Databricks.

Why we are looking for you:

  • You have good knowledge of Databricks implementations and are ready to leverage this expertise to migrate and modernize Hadoop ecosystem workloads on Databricks.
  • You are hands-on with big data technologies like Spark, PySpark, Hive, and Hadoop, and enjoy working with massive datasets.
  • You have experience working with AWS.
  • You have strong experience in building and optimizing large-scale data engineering pipelines.
  • You are determined, thrive in solving complex problems, and can mentor and guide junior engineers.
  • You can analyse, troubleshoot, and resolve customer issues with a strong customer focus.
  • You take pride in writing efficient, maintainable code and automating processes to improve efficiency.
  • You enjoy new challenges and are solution oriented.

What you will enjoy in this role:

  • Opportunity to design, build, and optimize large-scale data solutions that power Epsilon's core products.
  • Exposure to diverse data engineering challenges, from ingestion and transformation to performance optimization and data governance.
  • Hands-on experience with modern data platforms, cloud ecosystems, automation frameworks and GenAI.
  • A collaborative and agile work environment that values innovation, learning, and continuous improvement.
  • Being part of a global Data team that directly impacts data-driven decision-making for top-tier clients.


Responsibilities:

  • Design, develop, and maintain data pipelines and ETL frameworks using Spark, PySpark, Hive, and SQL.
  • Design and develop custom data solutions in Databricks.
  • Develop efficient, reusable, and reliable code for data processing and transformation.
  • Optimize and tune Spark jobs for performance and scalability.
  • Work with Technical Leads, Architects and data platform teams to understand and implement robust data solutions.
  • Contribute to data modelling, quality, and governance initiatives to ensure trusted data delivery.
  • Perform detailed analysis, troubleshooting, and RCA for production issues and optimize system reliability.
  • Participate in code reviews, enforce best coding and design practices.
  • Collaborate with multi-functional teams to deliver high-quality software solutions.
  • Improve and optimize deployment processes and help deliver reliable solutions.
  • Interact with technical leads and architects to discover solutions that help solve challenges faced by Data Engineering teams.
  • Contribute to building an environment focused on continuous improvement of the development and delivery process, with the goal of delivering outstanding software.

Qualifications:

  • BE / B.Tech. / MCA - No correspondence course.
  • 3 - 5 years of experience in Data Engineering.
  • Must have hands-on experience in building and optimizing data solutions on the Hadoop ecosystem leveraging PySpark.
  • Must have good knowledge of, and experience with, Databricks.
  • Must have experience working with AWS.
  • 1 - 3 years of experience in Perl, Shell Scripting and SQL.
  • Experience with performance tuning for large data sets.
  • Experience with JIRA for user-story/bug tracking.
  • Experience with Git/Bitbucket.
  • Experience of using GenAI in data processing will be a plus.

Job ID: 145461873
