Explainable artificial intelligence (AI) is currently attracting much interest in the AI world, and particularly in medicine. Technically, the problem of explainability is as old as AI itself: classic AI approaches were comprehensible and retraceable, but their weakness lay in dealing with the uncertainties of the real world. With the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. Explainable AI deals with implementing transparency and traceability for statistical black-box machine learning methods, particularly deep learning (DL).

In our recent paper we argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine, we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In our recent article, we provide the definitions necessary to discriminate between explainability and causability, together with a use case in histopathology contrasting deep learning interpretation with human explanation. The main contribution of our recent article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system.
The article is categorized under Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction and can be found online (full open access) here: https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312
Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Mueller (2019). Causability and Explainability of AI in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, doi:10.1002/widm.1312.