Explainable Artificial Intelligence (XAI) on Hoax Detection Using Decision Tree C4.5 Method for Indonesian News Platform

Proceedings: 3rd International Conference of Science and Information Technology in Smart Administration (ICSINTESA 2022)
Authors: Jason Imanuel (Computer Science Department, School of Computer Science, Bina Nusantara University)
Lusia Kintanswari (Computer Science Department, School of Computer Science, Bina Nusantara University)
Vincent (Computer Science Department, School of Computer Science, Bina Nusantara University)
Henry Lucky (Computer Science Department, School of Computer Science, Bina Nusantara University)
Andry Chowanda (Computer Science Department, School of Computer Science, Bina Nusantara University)
Conference: 3rd International Conference of Science and Information Technology in Smart Administration (ICSINTESA 2022)
Conference dates: 10-12 November 2022
Conference location: Denpasar, Bali, Indonesia
Publication year: 2022
Pages: 63-68
Total pages: 6
Accession number: 347586
Classification number: TP3-53/I59/(3rd)
Keywords: Hoax detection; Explainable artificial intelligence; Indonesian fake news; Decision tree; C4.5 algorithm
Language: English
Abstract: Hoax news can be defined as false information about events that tricks readers into believing it is genuine. Explainable Artificial Intelligence (XAI) covers simple algorithms that are easy for people to understand; the decision tree is one example. The methodology of this paper comprises data gathering to collect the data, preprocessing to label the parameters, attribute selection to determine which parameters to use by calculating entropy and information gain, decision tree construction on the training dataset to build a hoax-detection model, and calculation of evaluation metrics. The dataset contains a total of 200 rows (100 hoax and 100 fact). The research shows that the most indicative parameters for detecting hoax news are provoking restlessness or panic, containing hatred or anger, suggesting that readers share the item, and promising a reward. Using 20% of the dataset for testing, the accuracy of the model is 82%.
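The attribute-selection step summarized in the abstract (ranking parameters by entropy and information gain, as in C4.5) can be sketched as follows. This is a minimal illustration with hypothetical toy data, not the authors' implementation or dataset; the feature name `panic` merely mimics one of the paper's labeled parameters.

```python
# Illustrative sketch of C4.5-style attribute selection: entropy and
# information gain for binary-labeled rows. Hypothetical data, not the
# paper's 200-row dataset.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels (e.g. 'hoax'/'fact')."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(rows, feature, label_key="label"):
    """Entropy of the full set minus the weighted entropy of the subsets
    obtained by splitting on `feature`. C4.5 selects the attribute that
    maximizes this (normalized by split information to give the gain ratio)."""
    labels = [r[label_key] for r in rows]
    remainder = 0.0
    for value in {r[feature] for r in rows}:
        subset = [r[label_key] for r in rows if r[feature] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return entropy(labels) - remainder

# Toy example: a feature that perfectly separates hoax from fact.
rows = [
    {"panic": 1, "label": "hoax"},
    {"panic": 1, "label": "hoax"},
    {"panic": 0, "label": "fact"},
    {"panic": 0, "label": "fact"},
]
print(information_gain(rows, "panic"))  # → 1.0 (perfectly separating feature)
```

In the paper's pipeline this score would be computed for every labeled parameter, and the highest-scoring ones (panic-inducing emotion, hatred or anger, sharing suggestions, promised rewards) become the upper splits of the tree.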
