Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Journal: Information Fusion
Authors: Sajid Ali (Information Laboratory (InfoLab), Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University)
Tamer Abuhmed (Information Laboratory (InfoLab), Department of Computer Science and Engineering, College of Computing and Informatics, Sungkyunkwan University)
Shaker El-Sappagh (Information Laboratory (InfoLab), Department of Computer Science and Engineering, College of Computing and Informatics, Sungkyunkwan University)
Khan Muhammad (Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), Department of Applied Artificial Intelligence, College of Computing and Informatics, Sungkyunkwan University)
Jose M. Alonso-Moral (Centro Singular de Investigacion en Tecnoloxias Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Rua de Jenaro de la Fuente Dominguez)
Roberto Confalonieri (Department of Mathematics 'Tullio Levi-Civita', University of Padua)
Riccardo Guidotti (Department of Computer Science, University of Pisa)
Javier Del Ser (TECNALIA, Basque Research and Technology Alliance (BRTA))
Natalia Diaz-Rodriguez (Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada)
Francisco Herrera (Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada)
Journal No.: 738LB033
ISSN: 1566-2535
Publication year: 2023
Year/Volume/Issue: 2023, vol. 99
Pages: 101805-1 to 101805-52
Total pages: 52
Classification code: TP3
Keywords: Explainable artificial intelligence; Interpretable machine learning; Trustworthy AI; AI principles; Post-hoc explainability; XAI assessment; Data fusion; Deep learning
Language: English
Abstract: Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. It is usually essential to understand the reasoning behind an AI model's decision-making, which has given rise to the need for eXplainable AI (XAI) methods that improve trust in AI models. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but no review has examined the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of current research and trends in this rapidly emerging area, illustrated with a case study. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed XAI techniques for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics, open-source packages, and datasets, together with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns. This paper advocates tailoring explanation content to specific user types. The examination of XAI techniques and their evaluation is based on 410 critical articles, published between January 2016 and October 2022 in reputable journals, drawing on a wide range of research databases as sources of information. The article is aimed at XAI researchers who want to make their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
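To make axis (iii), post-hoc explainability, concrete, the following minimal sketch (not taken from the surveyed paper) applies one model-agnostic technique, permutation feature importance from scikit-learn, to a trained black-box classifier; the dataset, model choice, and hyperparameters are illustrative assumptions only.

    # Minimal sketch: post-hoc, model-agnostic explanation of a black-box model
    # via permutation feature importance (scikit-learn). Dataset, model, and
    # hyperparameters are arbitrary assumptions for illustration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Black-box" model to be explained after training (post hoc).
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle each feature on held-out data and measure the drop in accuracy;
    # larger drops indicate more influential features.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Report the five most influential features with their mean importance.
    ranked = sorted(
        zip(X.columns, result.importances_mean, result.importances_std),
        key=lambda t: t[1], reverse=True,
    )
    for name, mean, std in ranked[:5]:
        print(f"{name}: {mean:.4f} +/- {std:.4f}")

Because the explanation only queries the fitted model's predictions, the same procedure works for any classifier, which is the defining property of model-agnostic post-hoc methods discussed in the survey.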