Overview
About the Business Unit:
The Product team is at the core of our platforms and helps connect millions of customers worldwide with the brands that matter most to them. This team of innovative problem solvers develops and builds products that set Epsilon apart, encouraging an open and balanced marketplace built on respect for individuals, where every brand interaction holds value. Our full-cycle product engineering and data teams chart the future and set new benchmarks for our products by using industry-standard methodologies and sophisticated capabilities in data, machine learning, and artificial intelligence. Driven by a passion for delivering smart end-to-end solutions, this team plays a key role in Epsilon's success story.
Why are we looking for you
At Epsilon, Senior Staff Engineers are force multipliers. This role goes beyond owning systems: you will shape the long-term technical vision of the Cleanroom data platform, influence multiple teams, and partner with product and leadership to deliver scalable, reliable, and cost-efficient solutions at enterprise scale. If you are passionate about solving complex data problems, building platforms used by many teams, and setting engineering direction, this role is for you.
What will you enjoy in this role
As a Senior Staff Software Engineer, you will operate at a platform and organization-wide level, driving architecture, technical strategy, and execution for mission-critical data systems across AWS/GCP and Databricks. You will shape engineering standards, mentor senior engineers, and ensure the Cleanroom platform scales reliably to support sustained business growth.
Responsibilities
- Architect and deliver large-scale data platforms using AWS, Databricks, and distributed processing frameworks, driving key technical decisions and implementing highly scalable, resilient architectures.
- Work hands-on across the full technology stack, including Python, Apache Spark, AWS services, Databricks, event-driven systems, and SQL/NoSQL databases, to solve complex engineering challenges and maintain consistent platform excellence.
- Lead platform-wide technical initiatives to improve performance, reliability, security, governance, and cost efficiency.
- Collaborate effectively with global stakeholders, including engineering, product management, and architecture teams, to align technical solutions with business objectives and deliver impactful results.
- Take complete ownership of the software development lifecycle, from gathering and defining requirements to development, deployment, and thorough documentation of solutions.
- Demonstrate strong leadership qualities, with the ability to mentor, guide, and inspire junior engineers, fostering a culture of innovation, excellence, and teamwork within the organization.
Qualifications
- B.E/B.Tech/M.Tech/MCA in Computer Science, Information Technology, or a related field.
- Bring 12 to 16+ years of experience in software engineering and architectural design, with a significant focus on large-scale big data and data engineering projects.
- Demonstrate deep, practical expertise in Python, PySpark, and Apache Spark, showcasing hands-on capability in distributed data processing and high-performance analytics.
- Hold substantial experience working with Apache Spark and Databricks, effectively managing and processing massive datasets to support business intelligence and data-driven strategies.
- Showcase in-depth knowledge and practical experience with AWS services, including S3, Glue, Redshift, EMR, Athena, and EventBridge, for building scalable, reliable data solutions.
- Display proficiency in modern streaming and messaging technologies such as Amazon Kinesis, Amazon SQS, Kafka, and RabbitMQ, enabling real-time and near-real-time data processing capabilities.
- Possess a strong background in both NoSQL databases like MongoDB or DynamoDB and SQL-based systems such as SQL Server, Amazon Aurora, and Amazon RDS.
- Have comprehensive expertise in Big Data technologies, including Data Warehousing, Data Lakes, and Delta Lake architecture, to architect robust and scalable data ecosystems.
- Demonstrate hands-on experience with Infrastructure as Code (IaC) tools such as Ansible or Terraform, enabling automated and consistent infrastructure deployments.
- Exhibit a proven track record in Continuous Integration/Continuous Deployment (CI/CD) and DevOps practices using Jenkins, GitLab, and Bitbucket, ensuring streamlined and automated software delivery.
It is advantageous to have experience with the following:
- AWS and Databricks Certifications
- Proficiency in Azure or Google Cloud Platform (GCP)
- Experience with Generative AI, LLMs, RAG, and Agentic AI, including designing, developing, and deploying AI-driven solutions across various platforms.