About Us:
MatchMove is a profitable Singapore-based fintech company and one of Asia's leading Banking-as-a-Service (BaaS) providers, enabling businesses to embed financial services directly into their digital ecosystems. Operating its proprietary, secure, and regulated Banking Wallet OS platform across Asia and beyond, MatchMove empowers enterprises to issue accounts, cards, payments, loans, and other financial products seamlessly within their own platforms.
The company is experiencing double-digit year-on-year growth and processes billions of dollars in transactions each year, underscoring its scale, resilience, and trust among partners and users. Recognized with multiple industry awards, including Frost & Sullivan's 2025 Singapore Enabling Technology Leadership Recognition for Excellence in Embedded Finance Innovation, MatchMove has been celebrated for driving innovation across a wide range of embedded finance use cases.
By partnering with leading local banks and ecosystem players, MatchMove bridges the gap between traditional banking and modern digital commerce. Its mission is to deliver innovative, secure, and inclusive financial technology solutions that drive digital transformation for businesses while empowering millions of end users across the region.
With a strong commitment to innovation, regulatory excellence, and sustainable growth, MatchMove continues to pioneer new approaches to embedded finance, redefining how businesses and consumers access and interact with financial services in Asia and beyond.
Are You The One?
As a Senior Software Engineer on our Data Platform team, you will build the backbone of our data infrastructure that powers critical business insights and automates regulatory reporting for our payments platform. You'll work on distributed, scalable, and highly reliable data pipelines, enabling efficient data processing at scale. You will play an integral role in shaping our data strategy, ensuring data governance and compliance while supporting key stakeholders across the organization.
You will get to:
- Design, build, and maintain high-performance data pipelines that integrate large-scale transactional data from our payments platform, ensuring data quality, reliability, and compliance with regulatory requirements.
- Develop and manage distributed data processing pipelines for both high-volume data streams and batch processing workflows in a cloud-native AWS environment.
- Implement observability and monitoring tools to ensure the reliability and scalability of the data platform, enabling stakeholders to make confident, data-driven decisions.
- Collaborate with cross-functional teams to gather requirements and deliver business-critical data solutions, including automation of payment transaction lifecycle management, regulatory reporting, and compliance.
- Design and implement data models across various storage paradigms to support payment transactions at scale while ensuring efficient data ingestion, transformation, and storage.
- Maintain data integrity by implementing robust validation, testing, and error-handling mechanisms within data workflows.
- Ensure that the data platform adheres to the highest standards for security, privacy, and governance.
- Provide mentorship and guidance to junior engineers, driving innovation, best practices, and continuous improvement across the team.
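To give a flavour of the data-integrity work described above, here is a minimal, self-contained sketch of record-level validation for payment transactions. It is illustrative only: the field names (txn_id, amount, currency) and the currency set are hypothetical, and in practice this kind of check would typically be expressed with a schema library such as Pandera rather than hand-rolled.

```python
# Hypothetical sketch: validating one transaction record before it enters a pipeline.
# Field names and the currency whitelist are illustrative assumptions, not MatchMove's schema.

VALID_CURRENCIES = {"SGD", "USD", "THB"}  # assumed subset for illustration

def validate_txn(record: dict) -> list[str]:
    """Return a list of validation errors for one transaction record (empty = valid)."""
    errors = []
    if not record.get("txn_id"):
        errors.append("missing txn_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or isinstance(amount, bool) or amount <= 0:
        errors.append("amount must be a positive number")
    if record.get("currency") not in VALID_CURRENCIES:
        errors.append("unknown currency")
    return errors

good = {"txn_id": "T1001", "amount": 25.50, "currency": "SGD"}
bad = {"txn_id": "", "amount": -5, "currency": "XYZ"}
```

Invalid records would normally be routed to a quarantine table with their error list, so that downstream regulatory reporting only ever sees clean data.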
Requirements:
- 4-6 years of experience in backend development and/or data platform engineering.
- Proficiency in Python, with hands-on experience using data-focused libraries such as NumPy, Pandas, SQLAlchemy, and Pandera to build high-quality data pipelines.
- Strong expertise in AWS services (S3, Redshift, Lambda, Glue, Kinesis, etc.) for cloud-based data infrastructure and processing.
- Experience with multiple data storage models, including relational, columnar, and time-series databases.
- Proven ability to design and implement scalable, reliable, and high-performance data workflows, ensuring data integrity, performance, and availability.
- Experience with workflow orchestrators such as Apache Airflow or Argo Workflows for scheduling and automating data pipelines.
- Familiarity with Python-based data stack tools such as dbt, Dask, Ray, and Modin for data transformation and distributed processing.
- Hands-on experience with data ingestion, cataloging, and change-data-capture (CDC) tools.
- Understanding of DataOps and DevSecOps practices to ensure secure and efficient data pipeline development and deployment.
- Strong collaboration, communication, and problem-solving skills, with the ability to work effectively across multiple teams and geographies.
- Experience in payments or fintech platforms is a strong plus, particularly in processing high volumes of transactional data.
MatchMove Culture:
- We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
- We are AI-first in our approach. We embrace AI as a strategic tool that enhances decision-making, creativity, and productivity. Every team member is equipped and encouraged to integrate AI into their workflow, experiment with new tools, and contribute to our collective AI literacy.
- We focus on employee development, supporting continuous learning and growth through training programs, on-the-job learning, and mentorship.
- We encourage speaking up, sharing ideas, and taking ownership. Our team spans Asia, embracing diversity and fostering a rich exchange of perspectives and experiences.
- Together, we harness the power of fintech and e-commerce to impact people's lives meaningfully.
- Grow with us and shape the future of fintech. Join us and be part of something bigger!
Personal Data Protection Act:
By submitting your application for this job, you are authorizing MatchMove to:
- collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and
- retain your personal data for one year for consideration of future job opportunities (where applicable).