Senior Software Engineer

  • Posted 9 hours ago

Job Description

About Business Unit:

The Product team forms the crux of our powerful platforms and helps connect millions of customers worldwide with the brands that matter most to them. This team of innovative problem solvers develops and builds products that position Epsilon as a differentiator, encouraging an open and balanced marketplace built on respect for individuals, where every brand interaction holds value. Our full-cycle product engineering and data teams chart the future and set new benchmarks for our products, by using industry standard methodologies and sophisticated capabilities in data, machine learning, and artificial intelligence. Driven by a passion for delivering smart end-to-end solutions, this team plays a key role in Epsilon's success story.

The Data Engineering team at Epsilon plays a pivotal role in building the next-generation data platforms that empower Epsilon's ecosystem. We design, develop, and optimize scalable, high-performance data solutions that process terabytes of data every day, driving intelligent insights for clients across industries. This team of problem-solvers and innovators uses modern big data, cloud, and automation technologies to deliver reliable and efficient data products.

The candidate will be a member of the Data Engineering team and will be responsible for developing, unit testing, and implementing applications for the Data Engineering group, predominantly in the Hadoop ecosystem and on Databricks.

Why we are looking for you:

  • You have good knowledge of Databricks implementations and are ready to leverage this expertise to migrate and modernize Hadoop ecosystem workloads on Databricks.
  • You are hands-on with big data technologies like Spark, PySpark, Hive, and Hadoop, and enjoy working with massive datasets.
  • You have experience working with AWS.
  • You have strong experience in building and optimizing large-scale data engineering pipelines.
  • You are self-driven, thrive in solving complex problems, and can mentor and guide junior engineers.
  • You have experience with data design, data modeling, and performance tuning in distributed systems.
  • You take pride in writing efficient, maintainable code and automating processes to improve efficiency.
  • You enjoy new challenges and are solution oriented.

What you will enjoy in this role:

  • Opportunity to design, build, and optimize large-scale data solutions that power Epsilon's core products.
  • Exposure to diverse data engineering challenges, from ingestion and transformation to performance optimization and data governance.
  • Hands-on experience with modern data platforms, cloud ecosystems, and automation frameworks.
  • A collaborative and agile work environment that values innovation, learning, and continuous improvement.
  • Being part of a global Data team that directly impacts data-driven decision-making for top-tier clients.


Responsibilities:

  • Design, develop, and maintain data pipelines and ETL frameworks using Spark, PySpark, Hive, and SQL.
  • Design and develop scalable data pipelines and processing frameworks on Databricks using PySpark, SQL, and Delta Lake to support large-scale data integration and analytics.
  • Develop efficient, reusable, and reliable code for data processing and transformation.
  • Optimize and tune Spark jobs for performance and scalability.
  • Work with Technical Leads, Architects and data platform teams to understand and implement robust data solutions.
  • Contribute to data modeling, quality, and governance initiatives to ensure trusted data delivery.
  • Perform detailed analysis, troubleshooting, and RCA for production issues and optimize system reliability.
  • Participate in code reviews and enforce coding and design best practices.
  • Collaborate with cross-functional teams to deliver high-quality software solutions.
  • Address deployment challenges and help deliver reliable solutions.
  • Interact with technical leads and architects to discover solutions that help solve challenges faced by Data Engineering teams.
  • Help build an environment focused on continuous improvement of the development and delivery process, with the goal of delivering outstanding software.

Qualifications:

  • BE / B.Tech / MCA – No correspondence course.
  • 5-8 years of experience.
  • Must have hands-on experience in building and optimizing data solutions on the Hadoop ecosystem leveraging PySpark.
  • Must have good knowledge of and experience with Databricks.
  • Must have experience working with AWS.
  • Experience with performance tuning for large data sets.
  • Experience with JIRA for user-story/bug tracking.
  • Experience with GIT/Bitbucket.

Job ID: 147611321
