Although it is still early days, it is already evident that the application of artificial intelligence (AI) in financial services is potentially quite important and merits our attention.
Through our Fintech working group, we are working across the Federal Reserve System to take a deliberate approach to understanding the potential implications of AI for financial services, particularly as they relate to our responsibilities. In light of the potential importance of AI, we are seeking to learn from industry, banks, consumer advocates, researchers, and others, including through today's conference. I am pleased to take part in this timely discussion of how technology is changing the financial landscape.1
The Growing Use of Artificial Intelligence in Financial Services
My focus today is the branch of artificial intelligence known as machine learning, which is the basis of many recent advances and commercial applications.2 Modern machine learning applies and refines, or "trains," a series of algorithms on a large data set by optimizing iteratively as it learns in order to identify patterns and make predictions for new data.3 Machine learning essentially imposes much less structure on how data is interpreted compared to conventional approaches in which programmers impose ex ante rule sets to make decisions.
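To make that contrast concrete, here is a minimal sketch, using the open-source scikit-learn library and synthetic data invented purely for illustration: a hand-coded ex ante rule is compared with a model that learns its decision boundary from the data.

```python
# Contrast between a programmer-imposed rule and a trained model.
# Synthetic data, invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 2))             # two features per case
# Hidden pattern: the outcome depends on a combination of both features.
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0

# Conventional approach: an ex ante rule imposed by the programmer.
def rule_based(x):
    return x[0] > 0                        # looks at one feature only

# Machine-learning approach: parameters are "trained" by iterative
# optimization on the data rather than specified in advance.
model = LogisticRegression().fit(X, y)

print("rule accuracy: ", np.mean([rule_based(x) == t for x, t in zip(X, y)]))
print("model accuracy:", model.score(X, y))
```

The trained model typically recovers the joint pattern that the one-feature rule misses, which is the sense in which machine learning imposes less structure on how the data are interpreted.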
The three key components of AI--algorithms, processing power, and big data--are all increasingly accessible. Due to an early commitment to open-source principles, AI algorithms from some of the largest companies are available to even nascent startups.4 As for processing power, continuing innovation by public cloud providers means that with only a laptop and a credit card, it is possible to tap into some of the world's most powerful computing systems by paying only for usage time, without having to build out substantial hardware infrastructure. Vendors have made it easy to use these tools for even small businesses and non-technology firms, including in the financial sector. Public cloud companies provide access to pre-trained AI models via developer-friendly application programming interfaces or even "drag and drop" tools for creating sophisticated AI models.5 Most notably, the world is creating data to feed those models at an ever-increasing rate. Whereas in 2013 it was estimated that 90 percent of the world's data had been created in the prior two years, by 2016, IBM estimated that 90 percent of global data had been created in the prior year alone.6
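As one illustration of how accessible pre-trained models have become, the sketch below uses the open-source Hugging Face transformers library (an example chosen here for illustration, not one named in these remarks) to download and apply a pre-trained language model in a few lines:

```python
# Applying a pre-trained AI model through a high-level, developer-
# friendly API -- no model training or hardware build-out required.
# Requires: pip install transformers torch (downloads a model on first use).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The loan application process was fast and painless."))
# e.g. [{'label': 'POSITIVE', 'score': 0.9998}]
```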
The pace and ubiquity of AI innovation have surprised even experts. The best AI result on a popular image recognition challenge improved from a 26 percent error rate to 3.5 percent in just four years. That is lower than the human error rate of 5 percent.7 In one study, a combination AI-human approach brought the error rate down even further--to 0.5 percent.
So it is no surprise that many financial services firms are devoting so much money, attention, and time to developing and using AI approaches. Broadly, there is particular interest in at least five capabilities.8 First, firms view AI approaches as potentially having superior ability for pattern recognition, such as identifying relationships among variables that are not intuitive or not revealed by more traditional modeling. Second, firms see potential cost efficiencies where AI approaches may be able to arrive at outcomes more cheaply with no reduction in performance. Third, AI approaches might have greater accuracy in processing because of their greater automation compared to approaches that have more human input and higher "operator error." Fourth, firms may see better predictive power with AI compared to more traditional approaches--for instance, in improving investment performance or expanding credit access. Finally, AI approaches are better than conventional approaches at accommodating very large and less-structured data sets and processing those data more efficiently and effectively. Some machine learning approaches can be "let loose" on data sets to identify patterns or develop predictions without the need to specify a functional form ex ante.
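To illustrate that last point, here is a minimal sketch, again with scikit-learn and synthetic data invented for illustration: a clustering algorithm is "let loose" on unlabeled data and recovers structure that was never specified ex ante.

```python
# Unsupervised pattern discovery: no labels and no functional form
# are specified in advance. Synthetic data, invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=1)
# Three hidden "customer segments" the analyst did not specify.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 2)) for c in (0, 3, 6)])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("discovered segment centers:\n", kmeans.cluster_centers_)
```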
What do those capabilities mean in terms of how we bank? The Financial Stability Board highlighted four areas where AI could impact banking.9 First, customer-facing uses could combine expanded consumer data sets with new algorithms to assess credit quality or price insurance policies. And chatbots could provide help and even financial advice to consumers, saving them the waiting time to speak with a live operator. Second, there is the potential for strengthening back-office operations, such as advanced models for capital optimization, model risk management, stress testing, and market impact analysis. Third, AI approaches could be applied to trading and investment strategies, from identifying new signals on price movements to using past trading behavior to anticipate a client's next order. Finally, there are likely to be AI advancements in compliance and risk mitigation by banks. AI solutions are already being used by some firms in areas like fraud detection, capital optimization, and portfolio management.
Current Regulatory and Supervisory Approaches
The potential breadth and power of these new AI applications inevitably raise questions about potential risks to bank safety and soundness, consumer protection, or the financial system.10 The question, then, is how should we approach regulation and supervision? It is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms.
Regulation and supervision need to be thoughtfully designed so that they ensure risks are appropriately mitigated but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy. Likewise, it is important not to drive responsible innovation away from supervised institutions and toward less regulated and more opaque spaces in the financial system.11
Our existing regulatory and supervisory guardrails are a good place to start as we assess the appropriate approach for AI processes. The National Science and Technology Council, in an extensive study addressing regulatory activity generally, concludes that if an AI-related risk "falls within the bounds of an existing regulatory regime, . . . the policy discussion should start by considering whether the existing regulations already adequately address the risk, or whether they need to be adapted to the addition of AI."12 A recent report by the U.S. Department of the Treasury reaches a similar conclusion with regard to financial services.13
With respect to banking services, a few generally applicable laws, regulations, guidance, and supervisory approaches appear particularly relevant to the use of AI tools. First, the Federal Reserve's "Guidance on Model Risk Management" (SR Letter 11-7) highlights the importance to safety and soundness of embedding critical analysis throughout the development, implementation, and use of models, which include complex algorithms like AI.14 It also underscores "effective challenge" of models by a "second set of eyes"--unbiased, qualified individuals separated from the model's development, implementation, and use. It describes supervisory expectations for sound independent review of a firm's own models to confirm they are fit for purpose and functioning as intended. If the reviewers are unable to evaluate a model in full or if they identify issues, they might recommend the model be used with greater caution or with compensating controls. Similarly, when our own examiners evaluate model risk, they generally begin with an evaluation of the processes firms have for developing and reviewing models, as well as the response to any shortcomings in a model or the ability to review it. Importantly, the guidance recognizes that not all aspects of a model may be fully transparent, as with proprietary vendor models, for instance. Banks can use such models, but the guidance highlights the importance of using other tools to cabin or otherwise mitigate the risk of an unexplained or opaque model. Risks may be offset by external mitigating controls, such as "circuit-breakers" or other mechanisms. And importantly, models should always be interpreted in context.
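As a purely hypothetical sketch of the "circuit-breaker" idea, a compensating control around an opaque model might look like the following; the names, thresholds, and fallback rule here are illustrative assumptions, not supervisory requirements.

```python
# Hypothetical compensating control around an opaque model: if the
# model's output falls outside a pre-approved envelope, do not act on
# it -- fall back to a transparent rule and flag the case for review.
def conservative_fallback(application):
    # A simple, auditable baseline decision.
    return "refer_to_underwriter"

def log_for_review(application, score):
    print(f"flagged for human review: score={score!r}")

def guarded_score(opaque_model, application, lower=300, upper=850):
    score = opaque_model(application)
    if not (lower <= score <= upper):
        # Circuit breaker trips on an out-of-range output.
        log_for_review(application, score)
        return conservative_fallback(application)
    return score

# Usage: a misbehaving model's output trips the breaker.
print(guarded_score(lambda app: 9999, {"id": 1}))
```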
Second, our guidance on vendor risk management (SR 13-19/CA 13-21), along with the prudential regulators' guidance on technology service providers, highlights considerations firms should weigh when outsourcing business functions or activities--and could be expected to apply as well to AI-based tools or services that are externally sourced.15 The vast majority of the banks that we supervise will have to rely on the expertise, data, and off-the-shelf AI tools of nonbank vendors to take advantage of AI-powered processes. Whether these tools are chatbots, anti-money-laundering/know your customer compliance products, or new credit evaluation tools, it seems likely that they would be classified as services to the bank. The vendor risk-management guidance discusses best practices for supervised firms regarding due diligence, selection, and contracting processes in selecting an outside vendor. It also describes ways that firms can provide oversight and monitoring throughout the relationship with the vendor, and considerations about business continuity and contingencies for a firm to consider before the termination of any such relationship.
Third, it is important to emphasize that guidance has to be read in the context of the relative risk and importance of the specific use-case in question. We have long taken a risk-focused supervisory approach--the level of scrutiny should be commensurate with the potential risk posed by the approach, tool, model, or process used.16 That principle also applies generally to the attention that supervised firms devote to the different approaches they use: firms should apply more care and caution to a tool they use for major decisions or that could have a material impact on consumers, compliance, or safety and soundness.
For its part, AI is likely to present some challenges in the areas of opacity and explainability. There are likely to be circumstances when using an AI tool is beneficial even though it is unexplainable or opaque; in those cases, the tool should be subject to appropriate controls, as with any other tool or process, covering not just how it is built but also how it is used in practice. This is especially true for any new application that has not been fully tested in a variety of conditions. Given the large data sets involved with most AI approaches, it is vital to have controls around the various aspects of data--including data quality as well as data suitability. Just as with conventional models, problems with the input data can lead to cascading problems down the line. Accordingly, we would expect firms to apply robust analysis and prudent risk management and controls to AI tools, as they do in other areas, as well as to monitor potential changes and ongoing developments.
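As one hypothetical sketch of such controls, the check below screens input records for completeness and plausible ranges before they reach a model; the field names and bounds are illustrative assumptions.

```python
# Hypothetical input-data controls: basic quality checks so that
# problems in the data do not cascade into the model's outputs.
# Field names and bounds are illustrative assumptions.
import math

EXPECTED_FIELDS = {
    "income": (0.0, 10_000_000.0),
    "utilization": (0.0, 1.0),
    "age_of_file_months": (0.0, 1200.0),
}

def validate_record(record):
    """Return a list of data-quality problems; an empty list means clean."""
    problems = []
    for field, (lo, hi) in EXPECTED_FIELDS.items():
        value = record.get(field)
        if value is None or (isinstance(value, float) and math.isnan(value)):
            problems.append(f"{field}: missing")
        elif not (lo <= value <= hi):
            problems.append(f"{field}: {value} outside [{lo}, {hi}]")
    return problems

print(validate_record({"income": -5.0, "utilization": 0.4}))
# ['income: -5.0 outside [0.0, 10000000.0]', 'age_of_file_months: missing']
```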
For example, let's take the areas of fraud prevention and cybersecurity, where supervised institutions may need their own AI tools to identify and combat outside AI-powered threats. The wide availability of AI's building blocks means that phishers and fraudsters have access to best-in-class technologies to build AI tools that are powerful and adaptable. Supervised institutions will likely need tools that are just as powerful and adaptable as the threats they are designed to face, which likely entails some degree of opacity. So far, most phishing attacks against consumers have relied on standard-form emails, likely due to the high cost of personalization, but in the future, AI tools could be used to make internet fraud and phishing highly personalized.17 By accessing data sets with consumers' personally identifiable information and applying open-source AI tools, a phisher may be able to churn out highly targeted emails to millions of consumers at relatively low cost, containing personalized details such as the bank's logo, the consumer's account number, and past transactions.18 In cases such as this, where large data sets and AI tools may be used for malevolent purposes, it may be that AI is the best tool to fight AI.
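To give a flavor of AI fighting AI, here is a minimal sketch, using scikit-learn's isolation forest on synthetic transaction data invented for illustration, of the kind of anomaly detection a supervised institution might deploy against automated fraud:

```python
# Anomaly detection on synthetic transaction data (amount, hours since
# previous transaction). Data and parameters are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=2)
normal = rng.normal(loc=[50.0, 12.0], scale=[20.0, 4.0], size=(500, 2))
fraud = rng.normal(loc=[900.0, 0.1], scale=[100.0, 0.05], size=(5, 2))
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)                 # -1 = anomaly, 1 = normal
print("flagged rows:", np.where(flags == -1)[0])
```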
Let's turn to the related issue of the proverbial "black box"--the potential lack of explainability associated with some AI approaches. In the banking sector, questions often arise about the level of understanding a bank should have of its vendors' models, given the need to balance risk management, on the one hand, against the protection of proprietary information, on the other. To some degree, the opacity of AI products can be seen as an extension of this balancing. But AI can introduce additional complexity because many AI tools and models develop analysis, arrive at conclusions, or recommend decisions that may be hard to explain. For instance, some AI approaches are able to identify patterns that were previously unidentified and are intuitively quite hard to grasp. Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did.
The challenge of explainability can translate into a higher level of uncertainty about the suitability of an AI approach, all else equal. So how does, or even can, a firm assess the use of an approach it might not fully understand? To a large degree, this will depend on the capacity in which AI is used and the risks presented. One area where the risks may be particularly acute is the consumer space generally, and consumer lending in particular, where transparency is integral to avoiding discrimination and other unfair outcomes, as well as meeting disclosure obligations.19 Let me turn briefly to this topic.
The potential for the application of AI tools to result in new benefits to consumers is garnering a lot of attention. The opportunity to access services through innovative channels or processes can be a potent way to advance financial inclusion.20 Consider, for instance, consumer credit scoring. There are longstanding and well-documented concerns that many consumers are burdened by material errors on their credit reports, lack sufficient credit reporting information necessary for a score, or have credit reports that are unscorable.21 As noted earlier, banks and other financial service providers are using AI to develop credit-scoring models that take into account factors beyond the usual metrics. There is substantial interest in the potential for those new models to allow more consumers on the margins of the current credit system to improve their credit standing, at potentially lower cost. In addition, AI has the potential to allow creditors to more accurately model and price risk, and to bring greater speed to decisions.
AI may offer new consumer benefits, but it is not immune from fair lending and other consumer protection risks, and compliance with fair lending and other consumer protection laws is important.22 Of course, it should not be assumed that AI approaches are free of bias simply because they are automated and rely less on direct human intervention. Algorithms and models reflect the goals and perspectives of those who develop them as well as the data that trains them and, as a result, AI tools can reflect or "learn" the biases of the society in which they were created. A 2016 Treasury Department report noted that while "data-driven algorithms may expedite credit assessments and reduce costs, they also carry the risk of disparate impact in credit outcomes and the potential for fair lending violations."23
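One common screening diagnostic for such risks compares outcome rates across groups; the sketch below applies the familiar "four-fifths" adverse impact ratio to approval counts that are entirely invented for illustration.

```python
# "Four-fifths" adverse impact ratio: a common screening heuristic,
# not a legal test. All counts below are invented for illustration.
def approval_rate(approved, total):
    return approved / total

rate_group_a = approval_rate(approved=480, total=800)   # 0.60
rate_group_b = approval_rate(approved=180, total=400)   # 0.45

impact_ratio = rate_group_b / rate_group_a              # 0.75
print(f"adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("below the four-fifths benchmark -- warrants closer review")
```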
A recent example illustrates the risk of unwittingly introducing bias into an AI model. It was recently reported that a large employer developed, and later abandoned, an AI hiring tool for software developers that was trained on a data set of resumes of past successful hires. Because the pool of previously hired software developers in the training data set was overwhelmingly male, the AI developed a bias against female applicants, going so far as to exclude resumes of graduates from two women's colleges.24
Importantly, the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA) include requirements for creditors to provide notice of the factors involved in taking actions that are adverse or unfavorable for the consumer.25 These requirements help provide transparency in the underwriting process, promote fair lending by requiring creditors to explain why they reached their decisions, and provide consumers with actionable information to improve their credit standing. Compliance with these requirements implies finding a way to explain AI decisions. However, the opacity of some AI tools may make it challenging to explain credit decisions to consumers, which would make it harder for consumers to improve their credit score by changing their behavior. Fortunately, AI itself may play a role in the solution: The AI community is responding with important advances in developing "explainable" AI tools with a focus on expanding consumer access to credit.26 I am pleased that this is one of the topics on your agenda today.
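As one hedged sketch of what such "explainable AI" tooling can look like, the example below uses the open-source SHAP library to rank the features pushing a synthetic applicant toward denial; the model, features, and data are invented for illustration, and this is not a compliance procedure.

```python
# Per-applicant feature attributions as candidate adverse-action
# reasons. Model, features, and data are synthetic and illustrative.
# Requires: pip install shap scikit-learn
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(seed=3)
features = ["utilization", "delinquencies", "income"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(size=1000)) > 0  # 1 = default

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
contrib = explainer.shap_values(X[:1])[0]   # attributions for one applicant

# Features with the largest positive contribution push toward denial.
for i in np.argsort(contrib)[::-1]:
    print(f"{features[i]:>15}: {contrib[i]:+.3f}")
```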
Perhaps one of the most important early lessons is that not all potential consequences are knowable now--firms should be continually vigilant for new issues in the rapidly evolving area of AI. Throughout the history of banking, new products and processes have been an area where problems can arise. Further, firms should not assume that AI approaches are less susceptible to problems because they are purported to be able to "learn" or less prone to human error. There are plenty of examples of AI approaches not functioning as expected--a reminder that things can go wrong. It is important for firms to recognize the possible pitfalls and employ sound controls now to prevent and mitigate possible future problems.
For our part, we are still learning how AI tools can be used in the banking sector. We welcome discussion about what use cases banks and other financial services firms are exploring with AI approaches and other innovations, and how our existing laws, regulations, guidance, and policy interests may intersect with these new approaches.27 When considering financial innovation of any type, our task is to facilitate an environment in which socially beneficial, responsible innovation can progress with appropriate mitigation of risk and consistent with applicable statutes and regulations.