Federal Reserve: A Look at the Application and Regulation of AI in Financial Services (Full Speech Text Attached)

Although it is still early days, it is already evident that the application of artificial intelligence (AI) in financial services is potentially quite important and merits our attention.

Through our Fintech working group, we are working across the Federal Reserve System to take a deliberate approach to understanding the potential implications of AI for financial services, particularly as they relate to our responsibilities. In light of the potential importance of AI, we are seeking to learn from industry, banks, consumer advocates, researchers, and others, including through today's conference. I am pleased to take part in this timely discussion of how technology is changing the financial landscape.1

The Growing Use of Artificial Intelligence in Financial Services

My focus today is the branch of artificial intelligence known as machine learning, which is the basis of many recent advances and commercial applications.2 Modern machine learning applies and refines, or "trains," a series of algorithms on a large data set by optimizing iteratively as it learns in order to identify patterns and make predictions for new data.3 Machine learning essentially imposes much less structure on how data is interpreted compared to conventional approaches in which programmers impose ex ante rule sets to make decisions.
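
To make that contrast concrete, here is a minimal sketch (synthetic data, illustrative feature names, scikit-learn assumed) of a programmer-imposed ex ante rule set versus a model that is iteratively trained on the data:

```python
# Minimal sketch of the contrast above: a hand-written rule set versus a
# model "trained" on a data set. All data and features are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g., income, utilization, tenure (illustrative)
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Conventional approach: decisions follow rules fixed in advance.
def rule_based_decision(row):
    return int(row[0] > 0 and row[1] < 1.0)

# Machine learning approach: the algorithm iteratively optimizes on the
# data set and imposes far less structure on how the data is interpreted.
model = GradientBoostingClassifier().fit(X, y)
learned_decisions = model.predict(X)
```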

The three key components of AI--algorithms, processing power, and big data--are all increasingly accessible. Due to an early commitment to open-source principles, AI algorithms from some of the largest companies are available to even nascent startups.4 As for processing power, continuing innovation by public cloud providers means that with only a laptop and a credit card, it is possible to tap into some of the world's most powerful computing systems by paying only for usage time, without having to build out substantial hardware infrastructure. Vendors have made it easy to use these tools for even small businesses and non-technology firms, including in the financial sector. Public cloud companies provide access to pre-trained AI models via developer-friendly application programming interfaces or even "drag and drop" tools for creating sophisticated AI models.5 Most notably, the world is creating data to feed those models at an ever-increasing rate. Whereas in 2013 it was estimated that 90 percent of the world's data had been created in the prior two years, by 2016, IBM estimated that 90 percent of global data had been created in the prior year alone.6
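
As an illustration of that pay-as-you-go pattern, the sketch below shows a pre-trained model consumed through a cloud provider's API; the endpoint URL, key, and response schema are hypothetical inventions for illustration, not any specific vendor's product:

```python
# Hypothetical sketch of the pattern described above: a pre-trained model
# consumed through a cloud provider's developer-friendly API, billed per
# call. The URL, key, and response fields are invented for illustration.
import requests

API_URL = "https://api.example-cloud.com/v1/vision/classify"  # hypothetical endpoint
API_KEY = "your-api-key"                                      # pay-per-use credential

with open("scanned_check.png", "rb") as image:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": image},
    )
labels = response.json().get("labels", [])  # hypothetical response schema
```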

The pace and ubiquity of AI innovation have surprised even experts. The best AI result on a popular image recognition challenge improved from a 26 percent error rate to 3.5 percent in just four years. That is lower than the human error rate of 5 percent.7 In one study, a combination AI-human approach brought the error rate down even further--to 0.5 percent.

So it is no surprise that many financial services firms are devoting so much money, attention, and time to developing and using AI approaches. Broadly, there is particular interest in at least five capabilities.8 First, firms view AI approaches as potentially having superior ability for pattern recognition, such as identifying relationships among variables that are not intuitive or not revealed by more traditional modeling. Second, firms see potential cost efficiencies where AI approaches may be able to arrive at outcomes more cheaply with no reduction in performance. Third, AI approaches might have greater accuracy in processing because of their greater automation compared to approaches that have more human input and higher "operator error." Fourth, firms may see better predictive power with AI compared to more traditional approaches--for instance, in improving investment performance or expanding credit access. Finally, AI approaches are better than conventional approaches at accommodating very large and less-structured data sets and processing those data more efficiently and effectively. Some machine learning approaches can be "let loose" on data sets to identify patterns or develop predictions without the need to specify a functional form ex ante.
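
The last capability can be pictured with a small sketch: an unsupervised method "let loose" on a data set to find structure with no functional form specified ex ante. The data and feature names here are synthetic assumptions:

```python
# Sketch of a method "let loose" on transaction data: k-means finds
# groupings without a pre-specified functional form. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
transactions = np.vstack([
    rng.normal(loc=[40.0, 1.0], scale=[10.0, 0.3], size=(500, 2)),   # routine activity
    rng.normal(loc=[900.0, 5.0], scale=[100.0, 1.0], size=(25, 2)),  # a distinct cluster
])
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(transactions)
```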

What do those capabilities mean in terms of how we bank? The Financial Stability Board highlighted four areas where AI could impact banking.9 First, customer-facing uses could combine expanded consumer data sets with new algorithms to assess credit quality or price insurance policies. And chatbots could provide help and even financial advice to consumers, saving them the waiting time to speak with a live operator. Second, there is the potential for strengthening back-office operations, such as advanced models for capital optimization, model risk management, stress testing, and market impact analysis. Third, AI approaches could be applied to trading and investment strategies, from identifying new signals on price movements to using past trading behavior to anticipate a client's next order. Finally, there are likely to be AI advancements in compliance and risk mitigation by banks. AI solutions are already being used by some firms in areas like fraud detection, capital optimization, and portfolio management.
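
As one illustration of the fraud-detection use case, a minimal sketch follows in which an isolation forest flags transactions that sit far from the bulk of activity; the features, values, and review threshold are assumptions, not any firm's actual system:

```python
# Sketch of the fraud-detection use case: an isolation forest flags
# anomalous transactions for human review. All values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
routine = rng.normal(loc=[45.0, 13.0], scale=[15.0, 4.0], size=(2000, 2))  # amount, hour
suspect = rng.normal(loc=[1500.0, 3.0], scale=[200.0, 1.0], size=(10, 2))
X = np.vstack([routine, suspect])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 marks candidates for manual review
```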

Current Regulatory and Supervisory Approaches

The potential breadth and power of these new AI applications inevitably raise questions about potential risks to bank safety and soundness, consumer protection, or the financial system.10 The question, then, is how should we approach regulation and supervision? It is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms.

Regulation and supervision need to be thoughtfully designed so that they ensure risks are appropriately mitigated but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy. Likewise, it is important not to drive responsible innovation away from supervised institutions and toward less regulated and more opaque spaces in the financial system.11

Our existing regulatory and supervisory guardrails are a good place to start as we assess the appropriate approach for AI processes. The National Science and Technology Council, in an extensive study addressing regulatory activity generally, concludes that if an AI-related risk "falls within the bounds of an existing regulatory regime, . . . the policy discussion should start by considering whether the existing regulations already adequately address the risk, or whether they need to be adapted to the addition of AI."12 A recent report by the U.S. Department of the Treasury reaches a similar conclusion with regard to financial services.13

With respect to banking services, a few generally applicable laws, regulations, guidance, and supervisory approaches appear particularly relevant to the use of AI tools. First, the Federal Reserve's "Guidance on Model Risk Management" (SR Letter 11-7) highlights the importance to safety and soundness of embedding critical analysis throughout the development, implementation, and use of models, which include complex algorithms like AI.14 It also underscores "effective challenge" of models by a "second set of eyes"--unbiased, qualified individuals separated from the model's development, implementation, and use. It describes supervisory expectations for sound independent review of a firm's own models to confirm they are fit for purpose and functioning as intended. If the reviewers are unable to evaluate a model in full or if they identify issues, they might recommend the model be used with greater caution or with compensating controls. Similarly, when our own examiners evaluate model risk, they generally begin with an evaluation of the processes firms have for developing and reviewing models, as well as the response to any shortcomings in a model or the ability to review it. Importantly, the guidance recognizes that not all aspects of a model may be fully transparent, as with proprietary vendor models, for instance. Banks can use such models, but the guidance highlights the importance of using other tools to cabin or otherwise mitigate the risk of an unexplained or opaque model. Risks may be offset by mitigating external controls like "circuit-breakers" or other mechanisms. And importantly, models should always be interpreted in context.
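
One way to picture the "circuit-breaker" compensating control described in the guidance is a thin wrapper around an opaque model; the sketch below is an assumption about how such a control might be structured, not a prescribed implementation, and its bounds and thresholds are illustrative:

```python
# Illustrative sketch of a compensating control around an opaque model:
# out-of-range outputs are counted, and a circuit-breaker trips to a
# conservative fallback. Bounds and thresholds are assumptions.
class GuardedModel:
    def __init__(self, model, lower, upper, max_breaches=10):
        self.model = model                     # e.g., an opaque vendor model
        self.lower, self.upper = lower, upper  # plausible output range
        self.breaches = 0
        self.max_breaches = max_breaches
        self.tripped = False

    def score(self, x):
        if self.tripped:
            return self._fallback(x)           # circuit broken: stop trusting the model
        value = self.model.predict(x)
        if not (self.lower <= value <= self.upper):
            self.breaches += 1
            if self.breaches >= self.max_breaches:
                self.tripped = True            # escalate for independent review
            return self._fallback(x)
        return value

    def _fallback(self, x):
        return self.lower                      # simple conservative default
```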

Second, our guidance on vendor risk management (SR 13-19/CA 13-21), along with the prudential regulators' guidance on technology service providers, highlights considerations firms should weigh when outsourcing business functions or activities--and could be expected to apply as well to AI-based tools or services that are externally sourced.15 The vast majority of the banks that we supervise will have to rely on the expertise, data, and off-the-shelf AI tools of nonbank vendors to take advantage of AI-powered processes. Whether these tools are chatbots, anti-money-laundering/know your customer compliance products, or new credit evaluation tools, it seems likely that they would be classified as services to the bank. The vendor risk-management guidance discusses best practices for supervised firms regarding due diligence, selection, and contracting processes in selecting an outside vendor. It also describes ways that firms can provide oversight and monitoring throughout the relationship with the vendor, and considerations about business continuity and contingencies for a firm to consider before the termination of any such relationship.

Third, it is important to emphasize that guidance has to be read in the context of the relative risk and importance of the specific use-case in question. We have long taken a risk-focused supervisory approach--the level of scrutiny should be commensurate with the potential risk posed by the approach, tool, model, or process used.16 That principle also applies generally to the attention that supervised firms devote to the different approaches they use: firms should apply more care and caution to a tool they use for major decisions or that could have a material impact on consumers, compliance, or safety and soundness.

For its part, AI is likely to present some challenges in the areas of opacity and explainability. Recognizing there are likely to be circumstances when using an AI tool is beneficial, even though it may be unexplainable or opaque, the AI tool should be subject to appropriate controls, as with any other tool or process, including how the AI tool is used in practice and not just how it is built. This is especially true for any new application that has not been fully tested in a variety of conditions. Given the large data sets involved with most AI approaches, it is vital to have controls around the various aspects of data--including data quality as well as data suitability. Just as with conventional models, problems with the input data can lead to cascading problems down the line. Accordingly, we would expect firms to apply robust analysis and prudent risk management and controls to AI tools, as they do in other areas, as well as to monitor potential changes and ongoing developments.
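
A minimal sketch of such data controls follows: simple quality and suitability checks that run before any data reaches the model. The field names and limits are assumptions for illustration:

```python
# Sketch of controls on data quality and suitability, run upstream of the
# model. Field names and bounds are illustrative assumptions.
import pandas as pd

def validate_inputs(df: pd.DataFrame) -> list:
    issues = []
    if df["income"].isna().mean() > 0.05:
        issues.append("more than 5% of income values are missing")
    if ((df["age"] < 18) | (df["age"] > 120)).any():
        issues.append("age values outside a plausible range")
    if df.duplicated().any():
        issues.append("duplicate records present")
    return issues  # a non-empty list blocks the scoring run
```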

For example, let's take the areas of fraud prevention and cybersecurity, where supervised institutions may need their own AI tools to identify and combat outside AI-powered threats. The wide availability of AI's building blocks means that phishers and fraudsters have access to best-in-class technologies to build AI tools that are powerful and adaptable. Supervised institutions will likely need tools that are just as powerful and adaptable as the threats that they are designed to face, which likely entails some degree of opacity. While so far, most phishing attacks against consumers have relied on standard-form emails, likely due to the high cost of personalization, in the future, AI tools could be used to make internet fraud and phishing highly personalized.17 By accessing data sets with consumers' personally identifiable information and applying open-source AI tools, a phisher may be able to churn out highly targeted emails to millions of consumers at relatively low cost, containing personalized information such as their bank account number and logo, along with past transactions.18 In cases such as this, where large data sets and AI tools may be used for malevolent purposes, it may be that AI is the best tool to fight AI.

Let's turn to the related issue of the proverbial "black box"--the potential lack of explainability associated with some AI approaches. In the banking sector, it is not uncommon for there to be questions as to what level of understanding a bank should have of its vendors' models, due to the balancing of risk management, on the one hand, and protection of proprietary information, on the other. To some degree, the opacity of AI products can be seen as an extension of this balancing. But AI can introduce additional complexity because many AI tools and models develop analysis, arrive at conclusions, or recommend decisions that may be hard to explain. For instance, some AI approaches are able to identify patterns that were previously unidentified and are intuitively quite hard to grasp. Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did.
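
One common way to probe such a black box after the fact, sketched below on synthetic data, is permutation importance: measuring how much accuracy drops when each input is shuffled, which hints at which inputs drive the results without opening the model itself:

```python
# Sketch of a post hoc probe of a black box: permutation importance
# measures accuracy loss when each input is shuffled. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 4))
y = ((X[:, 0] * X[:, 1] + 0.5 * X[:, 2]) > 0).astype(int)  # non-intuitive pattern

black_box = RandomForestClassifier(random_state=0).fit(X, y)
probe = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print(probe.importances_mean)  # relative influence of each input
```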

The challenge of explainability can translate into a higher level of uncertainty about the suitability of an AI approach, all else equal. So how does, or even can, a firm assess the use of an approach it might not fully understand? To a large degree, this will depend on the capacity in which AI is used and the risks presented. One area where the risks may be particularly acute is the consumer space generally, and consumer lending in particular, where transparency is integral to avoiding discrimination and other unfair outcomes, as well as meeting disclosure obligations.19 Let me turn briefly to this topic.

The potential for the application of AI tools to result in new benefits to consumers is garnering a lot of attention. The opportunity to access services through innovative channels or processes can be a potent way to advance financial inclusion.20 Consider, for instance, consumer credit scoring. There are longstanding and well-documented concerns that many consumers are burdened by material errors on their credit reports, lack sufficient credit reporting information necessary for a score, or have credit reports that are unscorable.21 As noted earlier, banks and other financial service providers are using AI to develop credit-scoring models that take into account factors beyond the usual metrics. There is substantial interest in the potential for those new models to allow more consumers on the margins of the current credit system to improve their credit standing, at potentially lower cost. In addition, AI may enable creditors to more accurately model and price risk, and to bring greater speed to decisions.

AI may offer new consumer benefits, but it is not immune from fair lending and other consumer protection risks, and compliance with fair lending and other consumer protection laws is important.22 Of course, it should not be assumed that AI approaches are free of bias simply because they are automated and rely less on direct human intervention. Algorithms and models reflect the goals and perspectives of those who develop them as well as the data that trains them and, as a result, AI tools can reflect or "learn" the biases of the society in which they were created. A 2016 Treasury Department report noted that while "data-driven algorithms may expedite credit assessments and reduce costs, they also carry the risk of disparate impact in credit outcomes and the potential for fair lending violations."23

A recent example illustrates the risk of unwittingly introducing bias into an AI model. It was recently reported that a large employer attempted to develop an AI hiring tool for software developers that was trained with a data set of the resumes of past successful hires, which it later abandoned. Because the pool of previously hired software developers in the training data set was overwhelmingly male, the AI developed a bias against female applicants, going so far as to exclude resumes of graduates from two women's colleges.24

Importantly, the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA) include requirements for creditors to provide notice of the factors involved in taking actions that are adverse or unfavorable for the consumer.25 These requirements help provide transparency in the underwriting process, promote fair lending by requiring creditors to explain why they reached their decisions, and provide consumers with actionable information to improve their credit standing. Compliance with these requirements implies finding a way to explain AI decisions. However, the opacity of some AI tools may make it challenging to explain credit decisions to consumers, which would make it harder for consumers to improve their credit score by changing their behavior. Fortunately, AI itself may play a role in the solution: The AI community is responding with important advances in developing "explainable" AI tools with a focus on expanding consumer access to credit.26 I am pleased that this is one of the topics on your agenda today.
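
As a simple sketch of how explainability and adverse-action notices can fit together, consider a linear scorecard, where each feature's contribution is just its coefficient times its value, so the most negative contributions can be cited as principal reasons; the feature names and data below are illustrative assumptions:

```python
# Sketch of deriving adverse-action reasons from a linear scorecard:
# the most negative per-feature contributions are cited as principal
# reasons. Feature names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["payment_history", "utilization", "account_age"]
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -1.0, 0.8]) + rng.normal(size=500) > 0).astype(int)
scorecard = LogisticRegression().fit(X, y)

def adverse_action_reasons(applicant, top_n=2):
    contributions = scorecard.coef_[0] * applicant  # per-feature impact on score
    worst = np.argsort(contributions)[:top_n]       # most negative drivers first
    return [features[i] for i in worst]

print(adverse_action_reasons(np.array([-0.2, 1.8, -1.0])))  # e.g., ['utilization', ...]
```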

Looking Ahead

Perhaps one of the most important early lessons is that not all potential consequences are knowable now--firms should be continually vigilant for new issues in the rapidly evolving area of AI. Throughout the history of banking, new products and processes have been an area where problems can arise. Further, firms should not assume that AI approaches are less susceptible to problems because they are purported to be able to "learn" or less prone to human error. There are plenty of examples of AI approaches not functioning as expected--a reminder that things can go wrong. It is important for firms to recognize the possible pitfalls and employ sound controls now to prevent and mitigate possible future problems.

For our part, we are still learning how AI tools can be used in the banking sector. We welcome discussion about what use cases banks and other financial services firms are exploring with AI approaches and other innovations, and how our existing laws, regulations, guidance, and policy interests may intersect with these new approaches.27 When considering financial innovation of any type, our task is to facilitate an environment in which socially beneficial, responsible innovation can progress with appropriate mitigation of risk and consistent with applicable statutes and regulations.
