The FWF project P-32554, “A reference model of explainable Artificial Intelligence for the Medical Domain”, has been granted a total volume of EUR 392,773.50. Progress in statistical machine learning has made AI increasingly successful; deep learning now exceeds human performance even in the medical domain. However, its full potential is limited by the difficulty of generating underlying explanatory structures: such models lack an explicit declarative knowledge representation. A further motivation for this project is the rising legal and privacy pressure to understand and retrace machine decision processes. Transparent algorithms could enhance the trust of medical professionals and thereby raise the acceptance of AI solutions generally. This project will provide important contributions to the international research community in the following ways: 1) evidence on various methods of explainability, patterns of explainability, and explainability measurements; based on empirical studies we will develop a library of explanatory patterns and a novel grammar for how these can be combined, and finally define criteria and benchmarks for explainability that answer the question “What is a good explanation?”; 2) principles for measuring the effectiveness of explainability, together with explainability guidelines; and 3) a mapping of human understanding to machine explanations, deployed as an open explanatory framework along with a set of benchmarks and open data to stimulate and inspire further research in the international machine learning community. All outcomes of this project will be made openly available to the research community.

  • Project period

    2020 – 2023 (Project start: 4th November 2019)

  • Keywords

    explainable AI, transparent machine learning, causality

  • Recent Publications

Andreas Holzinger, Andre Carrington & Heimo Müller 2020. Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations. KI – Künstliche Intelligenz (German Journal of Artificial Intelligence), Special Issue on Interactive Machine Learning, edited by Kristian Kersting, TU Darmstadt, 34 (2), doi:10.1007/s13218-020-00636-z, available online via https://link.springer.com/article/10.1007/s13218-020-00636-z
In this paper we introduce the System Causability Scale (SCS) to measure the quality of explanations. It is based on the notion of Causability (Holzinger et al., 2019) combined with concepts adapted from the widely accepted System Usability Scale (SUS). In the same way that usability measures the quality of use, Causability measures the quality of explanations.
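    To illustrate the SUS-style scoring the SCS builds on, here is a minimal sketch in Python. It assumes a questionnaire of ten statements rated on a 5-point Likert scale, with the total score taken as the sum of ratings normalized to [0, 1]; the function name and exact scoring procedure are illustrative assumptions, and the authoritative definition is in the paper above.

    ```python
    # Hypothetical sketch of scoring an SCS-style questionnaire.
    # Assumption: ten items, each rated 1-5 on a Likert scale, with the
    # final score being the normalized sum of the ratings.

    def scs_score(ratings):
        """Return a normalized score in [0, 1] from ten 1-5 Likert ratings."""
        if len(ratings) != 10:
            raise ValueError("expected exactly ten item ratings")
        if any(not 1 <= r <= 5 for r in ratings):
            raise ValueError("each rating must be on a 1-5 Likert scale")
        return sum(ratings) / (5 * len(ratings))

    # Example: a mostly positive assessment of an explanation interface.
    print(scs_score([5, 4, 4, 5, 3, 4, 5, 4, 4, 5]))  # → 0.86
    ```

    A score near 1 would indicate that, in the user's judgment, the explanation process reaches a high quality of causal understanding.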

    Andreas Holzinger, Michael Kickmeier-Rust & Heimo Mueller 2019. KANDINSKY Patterns as IQ-Test for machine learning. Springer Lecture Notes LNCS 11713. Cham (CH): Springer Nature Switzerland, pp. 1-14, doi:10.1007/978-3-030-29726-8_1

    AI follows the notion of human intelligence, which is not a clearly defined term; according to cognitive science it includes the abilities to think abstractly, to reason, and to solve real-world problems. A hot topic in current AI/machine learning research is to find out whether, and to what extent, algorithms are able to learn abstract thinking and reasoning similarly to humans, or whether learning remains at the level of purely statistical correlations. In this paper we propose to use our KANDINSKY Patterns as an IQ test for machines and to study concept learning, which is a fundamental problem for future AI/ML.
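
    The concept-learning setup above can be sketched as follows: a KANDINSKY-style figure is a set of geometric objects with attributes such as shape, color, and position, and membership in a pattern is decided by a ground-truth statement. The class names, attributes, and the example rule below are illustrative assumptions, not the project's actual definitions.

    ```python
    # Hypothetical sketch of a KANDINSKY-style figure: a set of objects,
    # each with a shape, a color, and a position, plus a ground-truth
    # statement (a boolean predicate) deciding pattern membership.
    from dataclasses import dataclass

    @dataclass
    class KandinskyObject:
        shape: str   # e.g. "circle", "square", "triangle"
        color: str   # e.g. "red", "blue", "yellow"
        x: float     # position in the unit square
        y: float

    def all_circles_are_red(figure):
        """Example ground-truth statement: every circle in the figure is red."""
        return all(o.color == "red" for o in figure if o.shape == "circle")

    figure = [
        KandinskyObject("circle", "red", 0.2, 0.3),
        KandinskyObject("square", "blue", 0.7, 0.6),
    ]
    print(all_circles_are_red(figure))  # → True
    ```

    A machine learning model that has truly learned the concept should classify unseen figures by the statement itself, not by superficial statistical correlations in the training images.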