
Opening AI's black box: IBM launches a cloud tool to detect AI bias


IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-build environments — including its own Watson tech, as well as Tensorflow, SparkML, AWS SageMaker, and AzureML.

It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.
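IBM has not detailed how the runtime check is implemented, but the idea it describes can be sketched in a few lines of Python. The snippet below is a hypothetical illustration only (the RuntimeBiasMonitor class, its group labels, and its threshold are assumptions, not IBM's API): it keeps a sliding window of recent automated decisions and flags when the favorable-outcome rate for an unprivileged group falls below four-fifths of the privileged group's rate, a common disparate-impact heuristic.

```python
from collections import deque


class RuntimeBiasMonitor:
    """Hypothetical runtime check: flags when favorable outcomes for an
    unprivileged group fall well below those of the privileged group
    (the classic "four-fifths" disparate-impact rule of thumb)."""

    def __init__(self, window_size=1000, threshold=0.8):
        self.window = deque(maxlen=window_size)  # recent (group, decision) pairs
        self.threshold = threshold

    def record(self, group, favorable):
        """Log one decision: group is 'privileged' or 'unprivileged';
        favorable is True if the outcome benefited the person affected."""
        self.window.append((group, bool(favorable)))

    def disparate_impact(self):
        """Ratio of favorable-outcome rates: unprivileged / privileged."""
        def rate(group):
            outcomes = [fav for grp, fav in self.window if grp == group]
            return sum(outcomes) / len(outcomes) if outcomes else None

        priv, unpriv = rate("privileged"), rate("unprivileged")
        if priv in (None, 0) or unpriv is None:
            return None  # not enough data to compare the two groups yet
        return unpriv / priv

    def is_biased(self):
        di = self.disparate_impact()
        return di is not None and di < self.threshold


# Feed decisions as they are made and check the live metric.
# (A real monitor would also require a minimum sample size per group.)
monitor = RuntimeBiasMonitor()
monitor.record("privileged", favorable=True)
monitor.record("unprivileged", favorable=False)
if monitor.is_biased():
    print("Potentially unfair outcomes detected in the current window")
```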

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.
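The announcement does not specify how those factor weights are computed; for a simple linear or logistic model, though, the idea is easy to illustrate. In the sketch below (the function name and the loan-style feature names are invented for the example), each feature's contribution is its weight times its value, the contributions are ranked by how strongly they pulled the decision one way or the other, and the model's confidence is reported as the resulting probability.

```python
import numpy as np


def explain_linear_decision(weights, feature_values, feature_names):
    """Illustrative explanation for a logistic-regression-style model:
    each feature's contribution is weight * value, and the model's
    confidence is the sigmoid of the summed score."""
    contributions = weights * feature_values          # signed per-feature pull
    score = contributions.sum()
    confidence = 1.0 / (1.0 + np.exp(-score))         # probability of the favored outcome
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return {"confidence": confidence, "factors": ranked}


# Hypothetical loan-decision example.
report = explain_linear_decision(
    weights=np.array([1.2, -0.8, 0.3]),
    feature_values=np.array([0.9, 0.5, 1.0]),
    feature_names=["income", "debt_ratio", "tenure"],
)
print(report["confidence"])   # about 0.73 for this toy input
print(report["factors"])      # factors ranked by how strongly they pushed the decision
```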

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.
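IBM has not published the schema of those records, but a minimal audit entry along the lines it describes, tying measured accuracy, performance, and fairness to a specific model version and training set, might look something like the following (every field name and value here is illustrative):

```python
import datetime
import json


def audit_record(model_name, version, training_data_uri, metrics):
    """Illustrative audit-trail entry: ties measured accuracy, performance
    and fairness metrics to the exact model version and training data
    (lineage), so a past decision can be traced and recalled later."""
    return {
        "model": model_name,
        "version": version,
        "recorded_at": datetime.datetime.utcnow().isoformat() + "Z",
        "lineage": {"training_data": training_data_uri},
        "metrics": metrics,
    }


record = audit_record(
    model_name="credit-risk-scorer",                            # hypothetical model
    version="2.3.1",
    training_data_uri="s3://example-bucket/loans-2018-09.csv",  # hypothetical dataset
    metrics={"accuracy": 0.91, "p95_latency_ms": 42, "disparate_impact": 0.83},
)
print(json.dumps(record, indent=2))
```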

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However, it is also intending its own professional services staff to work with businesses to use the new software service. So it will be selling AI, a ‘fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.
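The open-sourced toolkit is where the real API lives; purely as a plain-Python illustration of the two pieces the announcement names, detection and mitigation, the sketch below computes a simple group-fairness gap and then derives per-sample reweighing factors (the standard reweighing idea from the fairness literature) that make group membership independent of the favorable label during training. The toy data and function names are made up for the example.

```python
import numpy as np


def statistical_parity_difference(groups, labels):
    """Detection: difference in favorable-outcome rates between groups
    (0 means parity; a large magnitude suggests bias)."""
    rate_a = labels[groups == 0].mean()
    rate_b = labels[groups == 1].mean()
    return rate_b - rate_a


def reweighing_weights(groups, labels):
    """Mitigation: per-sample training weights that make group membership
    statistically independent of the favorable label (reweighing)."""
    weights = np.empty(len(labels))
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            expected = (groups == g).mean() * (labels == y).mean()
            observed = mask.mean()
            weights[mask] = expected / observed if observed > 0 else 0.0
    return weights


# Toy data: group 1 receives favorable outcomes less often than group 0.
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(statistical_parity_difference(groups, labels))  # -0.5: a sizeable gap
print(reweighing_weights(groups, labels))             # upweights under-represented (group, outcome) cells
```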

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

