Artificial intelligence in financial services has shifted dramatically over the past few years, and it continues to evolve as new use cases emerge to help organisations extract value and improve efficiency. According to a Gartner survey, 58 percent of finance teams were using AI in 2024, up from 37 percent in 2023. What began as experimental pilots has evolved into strategic enterprise deployments, with financial institutions across the Asia Pacific region racing (and collaborating) to harness AI's transformative potential while managing its inherent risks.
Project MindForge – Expanding scope and translating principles into practice
Singapore's Project MindForge, the collaborative industry initiative launched in June 2023, has broadened its scope beyond generative AI to encompass the full spectrum of AI technologies, including traditional AI, generative AI, and the emerging field of agentic AI. This expansion reflects the reality that financial institutions are deploying multiple forms of AI simultaneously and need comprehensive frameworks that address the entire AI ecosystem.
The AI Risk Management Executive Handbook released in November 2025, the first of three parts of the MindForge AI Risk Management Handbook, marked a pivotal moment in Project MindForge’s journey from framework development to practical implementation.
The MindForge AI Risk Management Handbook consists of the following:
- AI Risk Management Executive Handbook [released November 2025]: Resource intended for executives, providing considerations and implementation practices for governing AI.
- AI Risk Management Operationalisation Handbook [to be released]: Detailed guidance on the operationalisation of each of the implementation practices.
- AI Risk Management Handbook Implementation Examples [to be released]: Detailed case studies on individual financial institutions’ experiences implementing AI governance and risk management.
The MindForge AI Risk Management Handbook works in tandem with the Monetary Authority of Singapore (MAS) Guidelines on AI Risk Management to provide a comprehensive approach to AI governance: the guidelines establish regulatory expectations across the AI lifecycle, while the Handbook offers practical guidance on meeting those expectations.
How can financial institutions in Singapore build for the future?
Moving one step beyond frameworks and handbooks, Singapore has launched practical initiatives to support financial institutions at different stages of their AI journey. Three new programmes announced in late 2025 demonstrate this commitment to moving from theory to practice.
- MAS Financial AI Builder Programme - BuildFin.ai: facilitates partnerships between technology providers, research institutes, and financial institutions to co-develop AI solutions that address real market needs whilst meeting regulatory requirements.
- MAS Pathfinder Programme - Pathfin.ai: provides a platform of industry-validated AI solutions and implementation best practices, enabling financial institutions to share their experience implementing AI solutions while gaining insights from the collective experiences of their peers.
- Strategic UK-Singapore partnership on AI-in-Finance: enabling AI-in-Finance solution providers in Singapore and AI innovators in the UK to scale and operate across both markets more effectively, fostering cross-border innovation in responsible AI.
What are the implications of Project MindForge for financial crime compliance?
For financial crime compliance teams, these developments carry particular significance. AI has the potential to transform compliance operations by reducing false positives, accelerating investigation processes, and augmenting traditional rule-based systems to detect sophisticated threats.
However, realising this potential requires more than simply overlaying existing compliance systems with AI capabilities. The Project MindForge framework emphasises the importance of explainability, human oversight, and continuous monitoring – all critical elements for compliance applications where regulatory scrutiny is intense and consequences for non-compliance include reputational damage and fines.
The expansion to cover agentic AI is highly relevant for compliance. As AI systems gain greater autonomy to make decisions and take actions, compliance teams must ensure that these systems operate within regulatory boundaries, maintain appropriate human oversight, and produce auditable outcomes that can withstand regulatory review.
For financial crime compliance teams to make informed decisions based on AI-generated recommendations, they need to understand both the AI process and its outputs. An explainable AI approach, in which every output is paired with human-readable reasoning and supporting evidence, can deliver real value in financial crime compliance when applied responsibly.
Operationalising AI through a compliance-first approach
The Asia Pacific region continues to demonstrate leadership in responsible AI adoption for financial services and this is expected to continue in 2026. Project MindForge represents a model for how regulators and industry can collaborate to enable innovation while managing risk effectively.
Different countries and organisations may have varying degrees of risk appetite when it comes to AI, but it is undeniable that AI can adapt in real time to emerging threat patterns by deriving actionable insights from unstructured data. A compliance-first approach ensures that every AI capability is developed in line with regulatory expectations and tested for fairness, transparency, and auditability, rather than deployed for AI's sake.
AI governance is a fundamental requirement for scaling AI adoption. As each institution charts its own course based on its AI maturity, risk appetite, and strategic objectives, those that invest now in building strong AI governance foundations will be well-positioned to adapt as AI capabilities expand and regulatory expectations evolve.
Read more on how to harness AI responsibly in Asia Pacific in the Napier AI / AML Index.