Artificial intelligence (AI) has quickly moved from theory into practice in financial services. But for many firms, the real question isn’t whether AI can transform compliance; it’s how to deploy it responsibly. That’s exactly where the United Kingdom’s Financial Conduct Authority (FCA) steps in with AI Live Testing, a new initiative designed to help firms innovate with confidence.
What is FCA AI Live Testing?
Set to begin in September 2025, FCA AI Live Testing offers a structured yet flexible environment where firms can trial AI-driven services in live markets with regulatory support. Unlike traditional sandbox programmes, this isn’t about compliance box-ticking or model approval. Instead, it’s a partnership between the regulator and firms to:
- Assess AI systems in real-world contexts, not just lab simulations.
- Spot risks early and adapt controls in-flight.
- Explore assurance methods rooted in operational reality.
- Share learnings that shape industry-wide best practices.
It complements the FCA’s Supercharged Sandbox, which is designed for earlier-stage experimentation. Together, they form a continuum that mirrors the spirit of innovation itself: test, learn, iterate, improve.
For firms already developing advanced AI solutions, such as those for transaction monitoring, sanctions screening, or behavioural analytics, AI Live Testing creates a path to deploy innovation faster, with greater confidence and clarity on regulatory expectations.
Implementing AI: where to start
While FCA Live Testing offers a proving ground, financial institutions still need a structured internal approach to AI adoption.
1. Assess readiness and data maturity
AI thrives on high-quality historical data. A thorough maturity assessment reviews your organisation’s people, data, and processes. This helps stakeholders pinpoint strengths and gaps, enabling them to prioritise actions needed to achieve AI readiness.
Firms should evaluate whether their transaction, customer, and behavioural datasets are complete, accurate, and consistent; poor data will only amplify false positives rather than reduce them. Once the relevant data has been identified, the next step is to validate that it is reliable and usable: check quality and consistency, correct formatting issues, and fill gaps, especially where sources vary in format.
It is also important to assess auditability, making sure the data is fit for its intended purpose and up to date. Napier AI’s partnership with the FCA on synthetic data provides an example of how organisations can generate reliable, privacy-compliant datasets to overcome these challenges and ensure AI outputs are trustworthy.
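To make this concrete, here is a minimal sketch (in Python, using pandas) of the kind of automated checks a readiness assessment might run over a transaction extract. The column names and fields are illustrative assumptions, not a prescribed schema.

```python
import pandas as pd

# Hypothetical fields an AML model layer might expect from a transaction extract
REQUIRED_COLUMNS = ["transaction_id", "customer_id", "amount", "currency", "timestamp"]

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarise completeness, consistency, and freshness of a transaction dataset."""
    report = {
        # Completeness: share of missing values per required field that is present
        "missing": {c: float(df[c].isna().mean()) for c in REQUIRED_COLUMNS if c in df.columns},
        # Fields expected by the models but absent from the extract entirely
        "absent_columns": [c for c in REQUIRED_COLUMNS if c not in df.columns],
    }
    # Consistency: duplicate transaction identifiers often signal ingestion issues
    if "transaction_id" in df.columns:
        report["duplicate_ids"] = int(df["transaction_id"].duplicated().sum())
    # Freshness: how recent the latest record is (the up-to-date / auditability check)
    if "timestamp" in df.columns:
        latest = pd.to_datetime(df["timestamp"], errors="coerce").max()
        report["latest_record"] = str(latest)
    return report

# Usage: df = pd.read_csv("transactions.csv"); print(data_quality_report(df))
```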
2. Conduct a financial crime risk assessment
Before rushing into tools, firms need clarity on where AI can make the most impact. For some, this might be reducing noise in transaction monitoring alerts; for others, it might be real-time sanctions screening or analysing customer behaviour patterns.
A risk assessment will also highlight the types of data needed to address those risks and suggest control measures to help mitigate them. Performing one at this stage further guides your vendor selection process: with a clear picture of your organisation’s financial crime threat landscape, you’ll be better equipped to understand your risk challenges and identify which of them an AI solution can address.
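One lightweight way to structure this is a scored risk register that links each threat to the data it needs and a candidate AI control. The sketch below is illustrative only; the risks, scores, and controls are assumptions to show the shape of the exercise.

```python
from dataclasses import dataclass

@dataclass
class FinCrimeRisk:
    name: str
    likelihood: int        # 1 (rare) to 5 (almost certain)
    impact: int            # 1 (minor) to 5 (severe)
    data_needed: str       # what the assessment says must exist to address the risk
    candidate_control: str # the AI capability that could mitigate it

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    FinCrimeRisk("Alert noise in transaction monitoring", 5, 3,
                 "historical alerts with analyst outcomes", "AI alert prioritisation"),
    FinCrimeRisk("Missed sanctions matches in payments", 2, 5,
                 "payment messages, sanctions lists", "real-time AI screening"),
    FinCrimeRisk("Unusual customer behaviour undetected", 3, 4,
                 "customer activity baselines", "behavioural analytics"),
]

# Rank risks so the highest-scoring ones drive use-case and vendor selection
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}  ->  {r.candidate_control}")
```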
3. Integrate AI into existing workflows
AI doesn’t require ripping out legacy systems. Modern AML solutions powered by AI are designed as modular layers, sitting on top of current infrastructure to improve detection and efficiency. For instance:
- Using AI models to prioritise high-risk alerts in transaction monitoring (sketched after this list).
- Applying natural language processing to unstructured sanctions or PEP data.
- Enhancing behavioural analytics without redesigning core banking systems.
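The first idea, a prioritisation layer that sits on top of an existing alert queue, might look something like the sketch below. The scoring function is a stand-in for a trained model, and the field names and weights are illustrative assumptions; the point is that the legacy engine and its rules stay untouched.

```python
from typing import Dict, List

def risk_score(alert: Dict) -> float:
    """Stand-in for a trained model: in production this would be, e.g., a fitted
    classifier's predicted probability that the alert is truly suspicious."""
    # Hypothetical features; weights are illustrative only
    return min(1.0, 0.4 * (alert["amount"] > 10_000)
                  + 0.3 * (alert["rule_hits"] / 5)
                  + 0.3 * alert["customer_risk"])

def prioritise_alerts(alerts: List[Dict]) -> List[Dict]:
    """Modular layer: re-rank alerts from the existing monitoring engine
    without changing the rules that generated them."""
    for alert in alerts:
        alert["ai_priority"] = risk_score(alert)
    # Analysts keep their existing queue, just in a smarter order
    return sorted(alerts, key=lambda a: a["ai_priority"], reverse=True)

queue = prioritise_alerts([
    {"id": "A1", "amount": 25_000, "rule_hits": 3, "customer_risk": 0.9},
    {"id": "A2", "amount": 1_200, "rule_hits": 1, "customer_risk": 0.2},
])
print([a["id"] for a in queue])  # highest-risk alert first
```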
4. Establish governance and documentation
In the absence of AI-specific FCA rules, firms should lean on established governance practices. Key elements include:
- Model risk management frameworks (independent validation, clear audit trails, explainability).
- AI policies and procedures covering fairness, bias mitigation, and human oversight.
- Risk assessments documenting the AI system’s intended purpose, key risks, mitigations, and acceptable residual risk (a documentation sketch follows this list).
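In practice, much of this documentation can live in a structured model inventory. The record below is a minimal sketch of such an entry; the fields and values are illustrative assumptions rather than a regulatory template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """One auditable entry in an AI model inventory (illustrative fields only)."""
    name: str
    intended_purpose: str
    key_risks: List[str]
    mitigations: List[str]
    residual_risk: str      # e.g. "low", as agreed by the model risk committee
    validated_by: str       # independent validation, not the developing team
    human_oversight: str    # who can challenge or override model outputs
    change_log: List[str] = field(default_factory=list)

record = ModelRecord(
    name="tm-alert-priority-v1",
    intended_purpose="Rank transaction monitoring alerts for analyst review",
    key_risks=["bias against customer segments", "drift as typologies change"],
    mitigations=["quarterly fairness testing", "monthly drift monitoring"],
    residual_risk="low",
    validated_by="Model Risk team (independent of development)",
    human_oversight="Analysts review every alert; the model only orders the queue",
)
```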
5. Train people as much as the system
AI isn’t just a technology shift – it changes workflows and roles. Compliance teams need training on how to interpret and challenge AI outputs, ensuring human judgment remains central.
Measuring return on investment (ROI) and managing costs
Measuring ROI requires a balanced view (a worked example follows the list):
- Efficiency gains: fewer false positives, faster investigations, and reduced manual workload.
- Risk mitigation: fewer missed suspicious activities, reduced regulatory fines, and improved reputation.
- Scalability: ability to handle increasing transaction volumes without linearly increasing headcount.
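The efficiency side of the ledger lends itself to simple arithmetic. The figures below are made-up assumptions purely to show the shape of the calculation; substitute your own alert volumes, rates, and costs.

```python
# Illustrative ROI arithmetic; every number here is an assumption.
alerts_per_month = 10_000
false_positive_rate_before = 0.95   # share of alerts cleared as false positives, pre-AI
false_positive_rate_after = 0.80    # assumed rate after AI prioritisation
minutes_per_alert = 15
cost_per_analyst_hour = 45.0        # GBP

def monthly_review_cost(fp_rate: float) -> float:
    reviewed = alerts_per_month * fp_rate          # time spent clearing false positives
    return reviewed * minutes_per_alert / 60 * cost_per_analyst_hour

saving = monthly_review_cost(false_positive_rate_before) - monthly_review_cost(false_positive_rate_after)
print(f"Monthly analyst-time saving: £{saving:,.0f}")  # £16,875 with these inputs
```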
To manage implementation costs, firms can:
- Start with pilot use cases in high-impact areas (e.g. sanctions screening).
- Use parallel running, testing AI against existing systems before full switch-over to lower the risk of operational disruption (see the sketch after this list).
- Partner with vendors who offer phased integration.
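A parallel run boils down to having both systems score the same alerts for a period and measuring where they agree and diverge. The sketch below shows one minimal way to summarise such a run; the inputs are illustrative.

```python
# Minimal parallel-run comparison: both systems flag the same population of
# alerts for a period, and we measure agreement before any switch-over.
def parallel_run_summary(legacy_flags: list[bool], ai_flags: list[bool]) -> dict:
    pairs = list(zip(legacy_flags, ai_flags))
    return {
        "agreement_rate": sum(l == a for l, a in pairs) / len(pairs),
        # Alerts only the AI raised: candidates for previously missed activity
        "ai_only": sum((not l) and a for l, a in pairs),
        # Alerts only the legacy system raised: check the AI is not missing risk
        "legacy_only": sum(l and (not a) for l, a in pairs),
    }

print(parallel_run_summary(
    legacy_flags=[True, True, False, False, True],
    ai_flags=[True, False, False, True, True],
))
```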
Financial crime threats are evolving at pace, and regulators are keeping close watch. FCA AI Live Testing offers firms a unique opportunity to de-risk innovation while influencing the development of future supervisory approaches.
For compliance leaders, this is the moment to shift from hesitation to action. By combining the FCA’s structured testing environment with a disciplined internal approach, or by partnering with vendors participating in these regulatory programmes, firms can deploy AI in ways that are safe, explainable, and transformative.
Learn from our experts: Sign up to attend our regular breakfast briefings in London to talk all things AI and anti-money laundering with industry peers and thought leaders.
