Summary of AI roundtables - February 2026
Introduction
As part of the Bank’s approach to innovation in AI, DLT and quantum computing, we seek to engage with innovators and industry practitioners in various ways to better understand the latest technological developments and their implications for the financial sector. These engagements include biennial AI surveys of the UK financial sector, the AI Consortium (a successor to the AI Public-Private Forum), the Cross Market Operational Resilience Group AI Taskforce, and the Bank’s Market Intelligence function.
To complement these initiatives, and in line with the Bank’s secondary growth objective, in late 2025, the Bank of England hosted three roundtables with participants from regulated firms to better understand the constraints firms may be facing in adopting AI, and how the Bank and PRA can support responsible AI adoption. Each roundtable was held with representatives from a different PRA-regulated sector: (1) challenger banks and UK-focussed larger banks; (2) global systemically important banks; and (3) insurers. Observers from the FCA and HMT were also present.
Below is a summary of the key points arising from the roundtable discussions, which were held under the Chatham House Rule.
Summary of key points
Across all three roundtables, participants from regulated firms expressed support for the PRA’s regulatory framework as it relates to AI. Participants noted that the PRA’s principles-based, outcomes-based policy and supervisory statements gave firms sufficient space to innovate within clear regulatory guardrails. Supervisory Statement 1/23 on Model Risk Management in particular was noted by several participants as pragmatic in enabling responsible AI adoption. Most participants did not yet see a need for detailed AI-specific regulatory guidance or rules, and most could not see a case for a Bank or PRA AI sandbox at this time; the FCA’s Supercharged Sandbox and AI Live Testing initiatives were seen as providing sufficient offerings for testing purposes.
Second-line risk functions continue to approach the use of AI with caution, which may delay AI deployment pipelines. There were mixed views on whether this level of caution was optimal or inevitable. Drivers could include both (a) bottlenecks in AI skills and expertise, given the dynamic and highly complex nature of the technology and the range of uses to which it was being put; and (b) a desire to ensure that compliance with supervisory expectations could be comprehensively demonstrated. As an example, several participants noted that firms’ traditional model risk management approach to validation would not be sustainable in its current form as generative AI and agentic systems proliferated. The traditional emphasis on understanding the inner workings of a model – i.e. how inputs mapped to outputs – was not tenable or fully effective for increasingly complex AI models. The concept of having a ‘human-in-the-loop’ was also challenged by the rise of agentic AI. Several participants suggested that risk management needed to evolve to put greater emphasis on testing, monitoring and setting guardrails around the outcomes of broader AI systems. Some participants suggested there would be value in sharing supervisory observations on good and bad practice, or in convening industry experts to define, agree and share best practice.footnote [1]
Firms operating in multiple jurisdictions need to navigate different regulatory approaches to AI. Participants noted key differences between the UK’s regulatory approach, the US’s approach (e.g. Supervisory Letter SR 11-7 on Guidance on Model Risk Management) and the EU AI Act. Fragmentation increased compliance costs, slowed AI adoption, and prevented firms from scaling AI use cases across borders. Several participants therefore encouraged the Bank to use its membership of various international fora to encourage global coordination and convergence.
Procurement and contract negotiations with third-party AI providers were slowed by inconsistent familiarity with regulated firms’ compliance requirements. Some participants thought the market would eventually solve this problem, i.e. minimum standards would emerge over time, but that there was an opportunity cost in the meantime. Several participants therefore noted that the Bank could explore convening financial and technology firms to agree minimum standards for third-party AI providers to the regulated financial sector. Some participants noted that as AI models become embedded in agentic systems throughout their firm’s core business processes, substituting between AI providers may become more challenging.
Data protection laws – along with emerging data sovereignty regimes in other jurisdictions – were a challenge to deploying and scaling AI use cases. Several participants noted that the legal requirement to complete Data Protection Impact Assessments in certain situations slowed their AI deployment pipeline.footnote [2] Participants noted that new data location requirements could prevent scaling AI solutions across borders.
Data quality can also be a barrier to the use of AI, particularly in some areas of insurance. Some insurers have relatively little data on their individual customers, owing to the infrequency of customer engagement (e.g. annual policy renewal, or when a claim is submitted), in contrast to banks’ visibility of their customers’ transactions. Prospects for, for example, hyper-personalised insurance products using AI were therefore limited in some areas in the near term.