Models have long been recognised by supervisors as a critical source of financial risk. Quantitative tools influence credit approval, pricing, valuation, stress testing, and compliance decisions, often with direct implications for capital adequacy, consumer outcomes, and financial stability. Supervisory concern over these tools led to the development of formal model risk management (MRM) frameworks, intended to ensure that models are conceptually sound, appropriately validated, and governed with clear accountability.[bis]
The growing adoption of artificial intelligence—particularly machine learning and generative models—has intensified these concerns. AI systems differ from traditional models in scale, opacity, adaptability, and dependence on large, continuously evolving datasets. These characteristics raise questions about whether existing supervisory frameworks can adequately capture AI‑related risks, or whether a distinct supervisory paradigm is required.[bis]
Traditional Supervisory Foundations of Model Risk
The modern supervisory understanding of model risk is strongly shaped by SR 11‑7, issued by U.S. banking agencies and led by the Federal Reserve. SR 11‑7 defines a model broadly, encompassing not only quantitative algorithms but also the input data, assumptions, data transformations, and judgement embedded in decision frameworks. It establishes three core expectations: sound model development and implementation; independent and effective validation; and robust governance, policies, and oversight.[federalreserve]
Although SR 11‑7 predates widespread use of machine learning, supervisors have consistently interpreted its scope expansively. AI systems—despite their technical novelty—are treated as models when they influence material decisions and therefore fall within the remit of model risk management expectations. This interpretation has been reinforced by the Office of the Comptroller of the Currency’s Comptroller’s Handbook on Model Risk Management, which discusses AI and machine‑learning use cases and clarifies that such activities typically fall within existing model risk management expectations, while requiring enhanced controls to address explainability, data quality, and third‑party dependencies.[occ.treas]
This continuity underscores an important supervisory principle: innovation does not reduce accountability. AI models are not exempt from validation or governance simply because their internal logic is complex or non‑linear.[ibm]
Expanding Model Risk to Address AI Complexity
Recognising the limitations of legacy, capital‑centric MRM frameworks, several supervisors have modernised their guidance to explicitly accommodate advanced analytics. The Office of the Superintendent of Financial Institutions (OSFI) in Canada has updated Guideline E‑23 on Model Risk Management, which reframes model risk as an enterprise‑wide concern extending beyond regulatory capital models. OSFI’s revised approach clarifies that the guideline applies to all models supporting material decision‑making, explicitly referencing AI and machine‑learning models and highlighting associated financial, operational, legal, and reputational risks.[blakes]
Similarly, the European Central Bank’s Guide to Internal Models—most recently revised with dedicated discussion of machine‑learning techniques—emphasises that all internal models, irrespective of methodology, are subject to governance, validation, and performance monitoring. While not an AI rulebook, this stance closes potential regulatory gaps by signalling that innovative methodologies cannot be used to circumvent existing supervisory expectations.[bankingsupervision.europa]
These developments indicate a shift from model risk as a niche technical issue to model risk as a strategic governance concern, particularly where AI systems influence high‑impact decisions at scale.[torys]
Emergence of AI‑Specific Supervisory Guidance
Some supervisors have moved beyond implicit inclusion of AI within MRM to issuing explicit AI‑focused risk guidance. The Monetary Authority of Singapore has published an Information Paper on Artificial Intelligence Model Risk Management, documenting observed good practices across the AI lifecycle following thematic reviews of financial institutions. The paper highlights risks such as data drift, model degradation, explainability failures, and over‑reliance on vendors, while reinforcing established supervisory expectations around governance, independent challenge, documentation, and post‑deployment monitoring.[allenandgledhill]
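The data drift and model degradation risks highlighted in the MAS paper are commonly monitored in practice with distributional stability metrics. As an illustrative sketch only (not drawn from the MAS paper, and using hypothetical synthetic data), the population stability index (PSI) below compares a model input or score distribution at deployment against a recent sample, flagging shifts that may warrant revalidation:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: a widely used drift metric in model risk management.
    Compares a baseline (expected) sample against a recent (actual)
    sample of the same model input or score."""
    # Bin edges taken from the baseline distribution's percentiles
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip to avoid log(0) when a bin is empty
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical data: a mean shift simulates post-deployment drift.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
drifted = rng.normal(0.5, 1.0, 10_000)
psi_stable = population_stability_index(baseline, baseline[:5_000])
psi_drifted = population_stability_index(baseline, drifted)
```

A rule of thumb often applied by practitioners (an industry convention, not a supervisory requirement) treats PSI below 0.1 as stable, 0.1–0.25 as a moderate shift, and above 0.25 as significant drift requiring investigation.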
Likewise, the Swiss Financial Market Supervisory Authority (FINMA) has issued Guidance 08/2024 – Governance and risk management when using artificial intelligence, which sets out supervisory expectations for institutions deploying AI. FINMA’s supervisory observations stress that boards and senior management must maintain sufficient understanding of AI systems to exercise effective oversight, reinforcing the principle that decision‑making responsibility cannot be delegated to algorithms or external vendors.[mll-news]
These AI‑specific initiatives do not replace model risk frameworks; rather, they extend them, introducing new lenses through which traditional MRM principles are applied to AI architectures, data pipelines, and deployment practices.[bis]
Global Supervisory and Standard‑Setting Perspectives
At the global level, the Bank for International Settlements and its Financial Stability Institute (FSI) have explored how AI challenges conventional supervisory assumptions. FSI Insights on regulating AI in the financial sector identify explainability constraints, model uncertainty, and correlated model behaviour as key vulnerabilities in AI‑driven systems. FSI’s work on AI explainability further analyses the limitations of current explainability tools and how supervisors might calibrate explainability expectations to different risk profiles and use cases.[bis]
Traditional validation techniques may struggle to capture these risks, particularly where models adapt dynamically, are frequently re‑trained, or where multiple institutions rely on similar data sources, architectures, or third‑party vendors. BIS analysis also underscores a systemic dimension: AI may amplify herding, feedback loops, and procyclicality if widely adopted models respond similarly to market or macro‑financial stress, thereby blurring the boundary between micro‑prudential model risk and macro‑prudential financial stability concerns.[bis]
From Validation to Stress: Reframing Supervisory Expectations
Across jurisdictions, a common supervisory insight is emerging. While traditional stress testing often assumes model stability under adverse conditions, AI risk management increasingly requires supervisors and firms to ask whether the model itself remains reliable and well‑behaved under stress. This reframing shifts attention from outputs alone to model behaviour, data integrity, governance effectiveness, and human oversight during extreme but plausible scenarios.[occ.treas]
Supervisory discourse increasingly positions AI risk as a stress test of existing control frameworks. Where governance, validation, and accountability are weak, AI merely exposes these weaknesses more rapidly and at greater scale, given the speed, complexity, and interconnectedness of AI‑enabled decision‑making.[bis]
To sum up, supervisory approaches to AI risk reveal a consistent and pragmatic trajectory. Rather than creating entirely new regulatory silos, supervisors are embedding AI within established model risk management frameworks—while sharpening expectations to address opacity, adaptability, third‑party concentration, and systemic amplification. AI risk is therefore best understood not as a departure from model risk, but as its logical extension in an increasingly algorithm‑driven financial system.[federalreserve]
The supervisory challenge lies not in defining AI as exceptional, but in ensuring that traditional principles—sound governance, independent challenge, transparency, and accountability—remain effective when models learn, evolve, and operate at unprecedented scale.[blakes]
References
- Federal Reserve Board (2011). SR 11‑7: Supervisory Guidance on Model Risk Management. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
- Office of the Comptroller of the Currency (2021). Model Risk Management, Comptroller’s Handbook. https://www.occ.treas.gov/publications-and-resources/publications/comptrollers-handbook/files/model-risk-management
- Office of the Superintendent of Financial Institutions (2024/2025). Guideline E‑23: Model Risk Management (revised). Practitioner summaries: https://www.mll-news.com/modernizing-financial-risk-management-osfis-draft-guideline-on-ai-model-risk-management; https://www.torys.com/our-latest-thinking/publications/2025/10/osfi-updates-and-expands-scope-of-guideline-e-23
- European Central Bank (2025). ECB Guide to Internal Models (revised, including machine‑learning section). Press release: https://www.bankingsupervision.europa.eu/press/pr/date/2025/html/ssm.pr250728~2b36305822.en.html
- Monetary Authority of Singapore (2024). Information Paper on Artificial Intelligence Model Risk Management. Overview: https://www.allenandgledhill.com/sg/publication/articles/29565/mas-publishes-information-paper-on-ai-model-risk-management
- Swiss Financial Market Supervisory Authority (2024). Guidance 08/2024 – Governance and risk management when using artificial intelligence. https://www.finma.ch/en/~/media/finma/dokumente/dokumentencenter/myfinma/4dokumentation/finma-aufsichtsmitteilungen/20241218-finma-aufsichtsmitteilung-08-2024
- Financial Stability Institute (2024). Regulating AI in the financial sector: recent developments and main supervisory issues. FSI Insights No. 63. https://www.bis.org/fsi/publ/insights63.pdf
- Financial Stability Institute (2025). Managing explanations: how regulators can address AI explainability. FSI Papers No. 24. https://www.bis.org/fsi/fsipapers24.htm
- Bank for International Settlements (2024). Artificial intelligence and the economy: implications for central banks. BIS Annual Economic Report 2024, Chapter III. https://www.bis.org/publ/arpdf/ar2024e3.htm