WIREs Data Mining and Knowledge Discovery

Causability and explainability of artificial intelligence in medicine

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI approaches were comprehensible and retraceable; their weakness lay in dealing with the uncertainties of the real world. With the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI: to reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide the definitions necessary to discriminate between explainability and causability, as well as a use case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system.

This article is categorized under:
Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction
The best-performing statistical approaches today are black boxes and do not foster understanding, trust, and error correction. This implies an urgent need not only for explainable models, but also for explanation interfaces; as a measure of the quality of human-AI interaction we need the concept of causability, analogous to usability in classic human-computer interaction
Features in a histology slide annotated by a human expert pathologist
An overview of how deep learning models can be probed for information regarding uncertainty, attribution, and prototypes
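The probing techniques this figure refers to can be illustrated with a short sketch. The following is a minimal PyTorch example, not the article's code: the toy classifier, its layer sizes, and the specific choices of vanilla gradient saliency (for attribution) and Monte Carlo dropout (for uncertainty) are illustrative assumptions.

import torch
import torch.nn as nn

# Toy stand-in for a histopathology tile classifier (assumption, illustration only).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout2d(p=0.25),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g., tumor vs. non-tumor
)

x = torch.rand(1, 3, 64, 64, requires_grad=True)  # one fake RGB image tile

# Attribution via vanilla gradient saliency: how strongly does each input
# pixel influence the predicted class score?
model.eval()
logits = model(x)
score = logits[0, logits.argmax()]
score.backward()
saliency = x.grad.abs().max(dim=1).values  # (1, 64, 64) pixel-importance map

# Uncertainty via Monte Carlo dropout: keep dropout active at test time and
# average over stochastic forward passes; the spread of the resulting
# predictions serves as a rough uncertainty estimate.
model.train()  # re-enables dropout
with torch.no_grad():
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(30)])
mean_prob = probs.mean(dim=0)  # predictive mean over 30 passes
std_prob = probs.std(dim=0)    # per-class predictive spread

print("saliency map shape:", saliency.shape)
print("mean prediction:", mean_prob)
print("uncertainty (std):", std_prob)

Prototype-based probing (e.g., retrieving the training examples a prediction is most similar to) is the third technique the figure names; it is omitted here for brevity.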
