How to cite this WIREs title:
WIREs Data Mining Knowl Discov
Impact Factor: 7.250

A historical perspective of explainable Artificial Intelligence


Abstract Explainability in Artificial Intelligence (AI) has been revived as a topic of active research by the need to convey safety and trust to users in the "how" and "why" of automated decision‐making in applications such as autonomous driving, medical diagnosis, or banking and finance. While explainability in AI has recently received significant attention, the origins of this line of work go back several decades, to when AI systems were mainly developed as (knowledge‐based) expert systems. Since then, the definition, understanding, and implementation of explainability have been taken up in several lines of research, namely expert systems, machine learning, recommender systems, and approaches to neural‐symbolic learning and reasoning, mostly during different periods of AI history. In this article, we present a historical perspective of Explainable Artificial Intelligence. We discuss how explainability was mainly conceived in the past, how it is understood in the present, and how it might be understood in the future. We conclude the article by proposing criteria for explanations that we believe will play a crucial role in the development of human‐understandable explainable systems.

This article is categorized under:
Fundamental Concepts of Data and Knowledge > Explainable AI
Technologies > Artificial Intelligence
Explanation capability as a problem‐solving activity (left) and example of explanatory knowledge (right) (Wick & Thompson, 1992)
Trepan tree extracted from a trained neural network predicting diabetes risk based on the Pima Indians dataset (Smith, Everhart, Dickson, Knowler, & Johannes, 1988)
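Trepan's core idea is to treat the trained network as an oracle: query it for labels on (possibly synthetic) inputs and fit a comprehensible tree that mimics its behavior. Below is a minimal, hedged sketch of that idea; the two-feature "network" and the grid of synthetic patients are toy stand-ins, not the actual Pima Indians model, and the extraction is reduced to a single decision stump.

```python
# Sketch of oracle-based tree extraction in the spirit of Trepan.
# The "oracle" is a hypothetical stand-in for a trained network.

def oracle(glucose, bmi):
    """Toy 'network': flags diabetes risk above a soft linear boundary."""
    return glucose + 2 * bmi > 190

def best_stump(points, labels):
    """Pick the single-feature threshold that best matches the oracle."""
    best = None
    for feat in (0, 1):  # 0 = glucose, 1 = bmi
        for p in points:
            thr = p[feat]
            preds = [x[feat] > thr for x in points]
            acc = sum(pr == lb for pr, lb in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, feat, thr)
    return best

# Query the oracle on a grid of synthetic patients, then extract the stump.
points = [(g, b) for g in range(80, 200, 10) for b in range(18, 42, 4)]
labels = [oracle(g, b) for g, b in points]
acc, feat, thr = best_stump(points, labels)
```

A real Trepan run grows a full tree with m-of-n split tests and draws extra samples where training data is sparse; the stump above only illustrates the oracle-querying loop.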
Illustration of the neural‐symbolic cycle
Example of an explanation interface visualizing a User style explanation, which draws on the explanatory power of the nearest neighbors of a target user (Coba, Symeonidis, et al., 2019)
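The neighbor-based explanation style can be sketched as: recommend an item, then justify it by showing how the target user's most similar users rated it. The ratings, similarity measure, and function names below are hypothetical illustrations, not the interface or algorithm of Coba et al. (2019).

```python
# Hedged sketch of a nearest-neighbor ("User style") explanation.

def similarity(u, v):
    """Toy similarity: number of items two users rated identically."""
    return sum(1 for i in u if i in v and u[i] == v[i])

ratings = {
    "alice": {"matrix": 5, "dune": 4, "up": 2},
    "bob":   {"matrix": 5, "dune": 4, "blade": 5},
    "carol": {"matrix": 1, "up": 5, "blade": 2},
}

def explain_recommendation(target, item, k=1):
    """Return the k nearest neighbors' ratings for the recommended item."""
    others = [u for u in ratings if u != target]
    others.sort(key=lambda u: similarity(ratings[target], ratings[u]),
                reverse=True)
    return {u: ratings[u].get(item) for u in others[:k]}

# "blade" is recommended to alice because her closest neighbor rated it 5.
why = explain_recommendation("alice", "blade")
```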
Example of counterfactual explanations with DiCE (Mothilal et al., 2020). In this example, a neural network was trained to predict the income of a person based on the above eight features (age, work‐class, etc.). The first table represents the original query, where the model computed a negative outcome. The second table represents the counterfactual examples
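A counterfactual explanation answers "what minimal change to the query would flip the model's outcome?". The following is a minimal sketch of that search under strong simplifying assumptions: the two-feature linear "classifier", the feature names, and the greedy perturbation loop are all toy stand-ins, not DiCE's actual diversity-based optimization or the eight-feature income model from the figure.

```python
# Hedged sketch of counterfactual search, in the spirit of DiCE
# (Mothilal et al., 2020). Model and features are hypothetical.

def predict_income_high(x):
    """Toy linear classifier: True means 'high income' predicted."""
    score = 0.08 * x["education_years"] + 0.05 * x["hours_per_week"] - 2.0
    return score > 0.0

def counterfactual(x, actionable, step=1, max_iters=100):
    """Greedily nudge actionable features until the prediction flips."""
    cf = dict(x)
    for _ in range(max_iters):
        if predict_income_high(cf):
            return cf  # smallest change found that flips the outcome
        for feat in actionable:
            cf[feat] += step
    return None  # no counterfactual found within the budget

query = {"education_years": 10, "hours_per_week": 20}  # negative outcome
cf = counterfactual(query, actionable=["education_years", "hours_per_week"])
```

DiCE additionally optimizes for proximity to the query and diversity across several returned counterfactuals; the loop above only conveys the "flip the outcome with small feature changes" core.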
Local explanation extracted through LIME on the Boston housing dataset (Harrison & Rubinfeld, 1978). The dataset contains information collected by the U.S. Census Service concerning housing in the area of Boston, Massachusetts. On the left, the median value of owner‐occupied homes in $1000s (the predicted value) is explained using a linear regression model over 5 of the 14 features (RM, average number of rooms per dwelling; TAX, full‐value property‐tax rate per $10,000; NOX, nitric oxides concentration; LSTAT, % lower status of the population; PTRATIO, pupil‐teacher ratio by town). On the right, the local explanation in the form of a linear regression over the mentioned features is shown
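LIME's recipe is: sample perturbations around the instance to explain, weight them by proximity, and fit a simple (here linear) surrogate whose coefficients serve as the local explanation. Below is a one-feature sketch of that recipe on a hypothetical black box, not the actual LIME library or the Boston model from the figure.

```python
import math
import random

def black_box(x):
    """Hypothetical nonlinear model to be explained locally."""
    return x * x

def lime_1d(f, x0, n=200, width=0.5):
    """LIME-style sketch: weighted linear surrogate of f around x0."""
    random.seed(0)
    xs = [x0 + random.gauss(0, 1) for _ in range(n)]          # perturbations
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]  # kernel
    # Closed-form weighted least squares for y = a*x + b.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * f(x) for w, x in zip(ws, xs)) / sw
    cov = sum(w * (x - mx) * (f(x) - my) for w, x in zip(ws, xs)) / sw
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)) / sw
    a = cov / var
    return a, my - a * mx

slope, intercept = lime_1d(black_box, x0=2.0)
# slope approximates the local gradient of x^2 at x0=2, i.e. about 4
```

The real LIME additionally works on interpretable representations (e.g. binarized features or super-pixels) and selects a sparse subset of features, as in the five-feature explanation of the figure.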
Explanations as partial dependence plots (PDPs, left) and accumulated local effects (ALE, right) showing how temperature, humidity, and wind speed affect the predicted number of rented bicycles on a given day (Molnar, 2019). Due to the correlation between temperature and humidity, the PDP shows a smaller decrease in the predicted number of bikes for high temperature or high humidity compared to the ALE plots. The example shows that when features of a machine learning model are correlated, PDPs are not very accurate and cannot be trusted (Apley & Zhu, 2016)
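A PDP for a feature is computed by fixing that feature at each value of interest and averaging the model's predictions over the rest of the data, thereby ignoring any correlation between features (the weakness the figure highlights, which ALE avoids by averaging local differences instead). A minimal sketch, assuming a toy linear rental model and a tiny hypothetical dataset:

```python
# Hedged sketch of a partial dependence computation for temperature.

def model(temp, humidity):
    """Toy rental model: more bikes when warm, fewer when humid."""
    return 100 + 5 * temp - 2 * humidity

# Hypothetical (temperature, humidity) observations.
data = [(10, 60), (15, 70), (20, 80), (25, 50)]

def partial_dependence(temps):
    """PDP: fix temperature, average predictions over observed humidities.

    Note this pairs every temperature with every humidity, even unlikely
    combinations -- the source of the bias when features are correlated.
    """
    return [sum(model(t, h) for _, h in data) / len(data) for t in temps]

curve = partial_dependence([10, 20, 30])
```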
Example of a line of explanation in Rex (Wick & Thompson, 1992)

