

Staff Software Engineer

  • Posted a day ago

Job Description

A bit about Epsilon:

We are the global leader in creating meaningful connections between people and brands. We work with 15 of the top 20 global brands and 8 of the top 10 Fortune 500 companies. How did we get this far? It is because of our team of thinkers and doers who, together, create the perfect blend of data, technology and creativity. They are fearless go-getters and creative innovators who have the passion, determination and support to make their ideas come to life every day.

To know more about us, please visit https://india.epsilon.com and follow us on Facebook, Twitter, LinkedIn, and Instagram.

A bit about who we are looking for:

At Epsilon, we run on our people's ideas. It's how we solve problems and exceed expectations. Our team is now growing, and we are on the lookout for talented individuals who always raise the bar by constantly challenging themselves and are experts in building customized solutions in the digital marketing space.

So, are you someone who wants to work with cutting-edge technology and enable marketers to create data-driven, omnichannel consumer experiences through data platforms? Then you could be exactly who we are looking for.

Apply today and be part of a creative, innovative, and talented team that's not afraid to push boundaries or take risks.

What you'll do

We seek Software Engineers with experience building and scaling services in on-premises and cloud environments.

As a Lead/Principal Engineer in the Epsilon Attribution/Forecasting Product Development team, you will design, implement, and optimize data processing solutions using Scala, Spark, and Hadoop. You will collaborate with cross-functional teams to deploy big data solutions on our on-premises and cloud infrastructure, and build, schedule, and maintain workflows. You will also perform data integration and transformation, troubleshoot issues, document processes, communicate technical concepts clearly, and continuously enhance our attribution and forecasting engines.

Strong written and verbal communication skills in English are required to facilitate work across multiple countries and time zones, along with a good understanding of Agile methodologies (Scrum).

What you'll need

  • Strong experience (12+ years) with the Scala programming language and extensive experience with Apache Spark for big data processing, covering development and support of both on-prem and cloud operations, primarily on AWS and, as needed, on GCP.
  • Hands-on experience with Python for developing infrastructure modules.
  • In-depth understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce.
  • Solid grasp of database systems and SQL, with the ability to write efficient queries (RDBMS/warehouse) over terabytes of data.
  • Experience in building, scheduling and maintaining DAG workflows.
  • Familiarity with data warehousing concepts and technologies.
  • End-to-end ownership with definition, development, and documentation of software's objectives, business requirements, deliverables, and specifications in collaboration with stakeholders.
  • Experience with Git (or an equivalent source control system).
  • Understanding of unit and integration test frameworks.
  • Ability to collaborate with stakeholders and teams to understand requirements and develop a working solution.
  • Ability to work within tight deadlines and effectively prioritize and execute tasks in a high-pressure environment.
  • Ability to mentor junior staff.

Experience in the following is advantageous:

  • Hands-on experience with Databricks for unified data analytics, including Databricks Notebooks, Delta Lake, and catalogs.
  • Proficiency in using the ELK stack (Elasticsearch, Logstash, Kibana) for real-time search, log analysis, and visualization.
  • Strong background in analytics, including the ability to derive actionable insights from large datasets and support data-driven decision-making.
  • Experience with data visualization tools like Tableau, Power BI, or Grafana.
  • Familiarity with Docker for containerization and Kubernetes for orchestration.

Job ID: 138273625
