《国际科技文献速递:智能制造》(2023年11月)


总第 23 期
本期共收录论文100篇,以下为部分内容,如需查看全部内容请进行注册,并联系010-88379895成为高级会员。

【标题】Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

【参考中译】可解释人工智能(XAI):我们已知的进展与实现可信人工智能尚待解决的问题

【类型】 期刊

【关键词】 Explainable artificial intelligence; Interpretable machine learning; Trustworthy AI; AI principles; Post-hoc explainability; XAI assessment; Data fusion; Deep learning

【参考中译】 可解释人工智能;可解释机器学习;值得信赖的人工智能;AI原则;事后可解释性;XAI评估;数据融合;深度学习

【作者】 Sajid Ali; Tamer Abuhmed; Shaker El-Sappagh; Khan Muhammad; Jose M. Alonso-Moral; Roberto Confalonieri; Riccardo Guidotti; Javier Del Ser; Natalia Diaz-Rodriguez; Francisco Herrera

【摘要】 Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. Usually, it is essential to understand the reasoning behind an AI model's decision-making. Thus, the need for eXplainable AI (XAI) methods for improving trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but there have not been any reviews that have looked at the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, together with a case study example. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed XAI techniques for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets, with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns. This paper advocates for tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by looking at 410 critical articles, published between January 2016 and October 2022, in reputed journals, using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.

【参考中译】 人工智能(AI)目前正被广泛应用于各种复杂场景,但由于其黑箱性质,许多AI模型的结果难以理解和信任。通常,理解AI模型决策背后的推理是至关重要的。因此,出现了对用于提高AI模型可信度的可解释AI(XAI)方法的需求。近年来,XAI已成为人工智能领域的热门研究主题。现有综述论文探讨了XAI的概念、一般术语和事后可解释性方法,但尚无综述关注评估方法、可用工具、XAI数据集和其他相关方面。因此,在这项综合性研究中,我们结合一个案例研究实例,为读者概述这一迅速兴起领域的研究现状与趋势。本研究首先介绍XAI的背景和常见定义,并总结近期提出的面向有监督机器学习的XAI技术。综述使用层次分类体系将XAI技术划分为四个轴:(i)数据可解释性,(ii)模型可解释性,(iii)事后可解释性,(iv)解释的评估。我们还介绍了可用的评估指标以及开源软件包和数据集,并指出未来研究方向。随后,从法律需求、用户观点和应用导向三方面概述了可解释性的意义,称为XAI关注点。本文主张针对特定用户类型定制解释内容。通过查阅2016年1月至2022年10月发表在知名期刊上的410篇重要文献,并以广泛的研究数据库作为信息来源,对XAI技术及其评估进行了考察。本文面向有意使其AI模型更值得信赖的XAI研究人员,以及来自其他学科、正在寻找有效XAI方法以便在传达数据含义的同时自信完成任务的研究人员。
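摘要中提到的事后(post-hoc)可解释性,可用置换重要性(permutation importance)这一最简单的模型无关技术加以示意。以下为编者补充的示意代码,并非论文内容,其中的模型与数据均为假设:反复打乱某一特征列并观察模型准确率的平均下降幅度,下降越多,说明黑箱模型越依赖该特征。

```python
import random

def black_box_predict(x):
    # 假想的"黑箱"模型:实际只依赖特征0,特征1是噪声
    return 1 if x[0] > 0.5 else 0

def permutation_importance(X, y, predict, feature_idx, n_repeats=20, seed=0):
    """置换重要性:打乱第 feature_idx 列后准确率的平均下降量。"""
    rng = random.Random(seed)
    acc = lambda data: sum(predict(x) == t for x, t in zip(data, y)) / len(y)
    base = acc(X)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature_idx] for x in X]
        rng.shuffle(col)  # 打乱该特征列,破坏其与标签的关联
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][feature_idx] = v
        drops.append(base - acc(X_perm))
    return sum(drops) / n_repeats

X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]
imp0 = permutation_importance(X, y, black_box_predict, 0)
imp1 = permutation_importance(X, y, black_box_predict, 1)
```

由于示例模型完全不使用特征1,imp1 恒为 0,而 imp0 为正,这正是"解释与模型行为一致"这一评估标准的一个极简体现。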

【来源】 Information Fusion 2023, vol.99

【入库时间】 2023/11/28

 

【标题】Academic Smart Chatbot to Support Emerging Artificial Intelligence Conversation

【参考中译】学术智能聊天机器人支持新兴的人工智能对话

【类型】 会议

【关键词】 Chatbot reliability; Academic services; Artificial intelligence chatbot; Technology influence; Artificial intelligence conversation

【参考中译】 聊天机器人可靠性;学术服务;人工智能聊天机器人;技术影响;人工智能对话

【作者】 Richki Hardi; Vicente A Pitogo; Ahmad Nairn Che Pee; Agung Sakti Pribadi; Muhammad Haziq Lim Bin Abdullah; Jack Febrian Rusdi

【摘要】 The increasing workload and the growing demand for academic information in the campus environment place academic staff under stress. The growth in student numbers and universities' ongoing plans for additional study programs leave academic staff tired and slow to respond, because they cannot provide fast and appropriate services to students and the academic community. Artificial Intelligence (AI) based chatbot technology can help perform specific tasks similar to those of academic officers. With the K-Nearest Neighbor (K-NN) method, the highest K value, in the third class, is 55.70 percent. This value shows that text classification problems can be solved with K-NN with good results. In this evaluation, customer response times were significantly shorter when using chatbot technology than before using it. In addition, the workload of academic staff began to decrease, while the accuracy of the chatbot remained at 100 percent in tests comparing chatbots with academic staff. Therefore, the evaluation results show that chatbots effectively increase efficiency in handling customer inquiries. There were 62 respondents, consisting of 13.8 percent lecturers, 9.2 percent staff, and 76.9 percent students who used the chatbot. Testing of the chatbot technology covered ability, consistency, responsibility, and performance. The validity test used a significance level of 5 percent. The test results found that the use of chatbot technology by users to obtain academic information was reliable, with a Cronbach's Alpha value of 0.82. The chatbot offers the academic community a solution for accessing services more quickly and practically. Chatbots can also reduce the workload of academic staff and improve the quality of service in universities.

【参考中译】 校园环境中不断增加的工作量和日益增长的学术信息需求使教务人员面临压力。学生人数的增加以及高校持续增设学习项目的计划,使教务人员疲惫且响应迟缓,因为他们无法为学生和学术群体提供快速、适当的服务。基于人工智能(AI)的聊天机器人技术可以帮助执行类似教务人员的特定任务。采用K-近邻(K-NN)方法,第三类中最高的K值为55.70%。该值说明文本分类相关问题可以用K-NN解决并取得良好结果。在这项评估中,与使用聊天机器人技术之前相比,使用后的客户响应时间显著缩短。此外,教务人员的工作量开始减少,而在将聊天机器人与教务人员进行比较的测试中,聊天机器人的准确率保持在100%。因此,评估结果表明,聊天机器人有效地提高了处理客户查询的效率。62名受访者中,13.8%为讲师、9.2%为职员、76.9%为学生,均使用了聊天机器人。对聊天机器人技术的测试涵盖能力、一致性、责任和性能。有效性检验使用5%的显著性水平。测试结果发现,用户使用聊天机器人技术获取学术信息较为可靠,Cronbach's Alpha值为0.82。聊天机器人为学术群体提供了更快速、更便捷地获取服务的解决方案,还可以减轻教务人员的工作量,使高校服务质量更加优化。
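摘要中提到的K-近邻(K-NN)文本分类思路,可用如下示意代码说明。此为编者补充的极简示例,并非论文原始实现,训练语句与类别均为假设:将文本做词袋向量化后计算余弦相似度,取与查询最相似的k条训练文本进行多数投票。

```python
import math
from collections import Counter

def vectorize(text):
    """词袋向量:按空格分词并统计词频。"""
    return Counter(text.lower().split())

def cosine(a, b):
    """两个词频向量的余弦相似度。"""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(query, train, k=3):
    """K-NN:取与 query 最相似的 k 条训练文本,按类别多数投票。"""
    q = vectorize(query)
    ranked = sorted(train, key=lambda item: cosine(q, vectorize(item[0])), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# 假设的训练数据:校园问答场景下的两类文本
train = [
    ("when is the exam schedule released", "academic"),
    ("how to register for courses this semester", "academic"),
    ("course registration deadline question", "academic"),
    ("the cafeteria food is great", "other"),
    ("library opening hours today", "other"),
]
pred = knn_classify("question about course registration", train, k=3)
```

实际的学术问答聊天机器人通常还会在向量化阶段引入TF-IDF或词向量,这里仅保留算法骨架以说明原理。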

【来源】 3rd International Conference of Science and Information Technology in Smart Administration (ICSINTESA 2022)

【入库时间】 2023/11/28

 

【标题】Explainable Artificial Intelligence (XAI) on Hoax Detection Using Decision Tree C4.5 Method for Indonesian News Platform

【参考中译】面向印尼新闻平台、基于决策树C4.5方法的可解释人工智能(XAI)虚假新闻检测

【类型】 会议

【关键词】 Hoax detection; Explainable artificial intelligence; Indonesian fake news; Decision tree; C4.5 algorithm

【参考中译】 虚假新闻检测;可解释人工智能;印度尼西亚假新闻;决策树;C4.5算法

【作者】 Jason Imanuel; Lusia Kintanswari; Vincent; Henry Lucky; Andry Chowanda

【摘要】 Hoax news can be defined as false information about events that tricks readers into believing it is genuine information. Explainable Artificial Intelligence (XAI) refers to algorithms that are simple and easy to understand; the decision tree is one example. The methodology of this paper consists of data gathering to collect the data, preprocessing to label the parameters, attribute selection to determine which parameters to use by calculating entropy and information gain, decision tree construction using the training dataset as a model for hoax detection, and calculation of evaluation metrics. This paper uses a total of 200 rows of data (100 hoax and 100 fact). This research shows that the most informative parameters for detecting hoax news are evoking restlessness or panic, containing hatred or anger, suggesting sharing, and promising a reward. Using 20% of the dataset for testing, the accuracy of this model is 82%.

【参考中译】 虚假新闻(hoax)可以定义为关于事件的虚假信息,欺骗读者相信其为真实信息。可解释人工智能(XAI)指简单、易于理解的算法,决策树就是一例。本文的方法包括:通过数据采集收集数据;进行预处理,为参数加上标签;通过计算信息熵和信息增益进行属性选择,确定使用哪些参数;使用训练数据集构建决策树,作为虚假新闻检测模型;并计算评估指标。本文共使用200行数据(100条虚假新闻和100条真实新闻)。研究表明,检测虚假新闻最有效的参数是引发不安或恐慌情绪、包含仇恨或愤怒、建议分享以及承诺奖励。使用20%的数据集进行测试,该模型的准确率为82%。
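摘要中用于属性选择的信息熵与信息增益,可用如下示意代码说明。此为编者补充示例,并非论文原始实现,其中的小数据集为假设:C4.5在每一步选择信息增益(或增益率)最高的属性作为划分节点。

```python
import math
from collections import Counter

def entropy(labels):
    """信息熵 H(S) = -Σ p_i * log2(p_i)。"""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_idx):
    """信息增益:按第 attr_idx 个属性划分后,熵的减少量。"""
    n = len(labels)
    # 按属性取值把标签分组
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr_idx], []).append(label)
    remainder = sum(len(s) / n * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# 假设的极小数据集:属性为 (是否煽动恐慌, 是否鼓动转发),标签为 hoax/fact
rows = [(1, 1), (1, 0), (0, 1), (0, 0)]
labels = ["hoax", "hoax", "fact", "fact"]
gain_panic = information_gain(rows, labels, 0)  # 完全区分两类 -> 增益 1.0
gain_share = information_gain(rows, labels, 1)  # 与标签无关 -> 增益 0.0
```

在这个构造的例子中,"煽动恐慌"属性的信息增益最高,C4.5会优先选它作为根节点;论文中"引发不安或恐慌情绪"被列为最有效参数,与此逻辑一致。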

【来源】 3rd International Conference of Science and Information Technology in Smart Administration (ICSINTESA 2022)

【入库时间】 2023/11/28

 



来源期刊
Information Fusion《信息融合》

来源会议文集
3rd International Conference of Science and Information Technology in Smart Administration (ICSINTESA 2022)《第三届智能行政中的科学与信息技术国际会议(ICSINTESA 2022)》