Edited by
Andreas HOLZINGER, Randy GOEBEL, Ruth FONG, Taesup MOON, Klaus-Robert MÜLLER, Wojciech SAMEK

Following the success of our XXAI workshop at ICML 2020, we are preparing a Springer Lecture Notes in Artificial Intelligence (LNAI) volume as an archival benefit for the international explainable AI research community and as a natural extension of LNAI 11700.

Within the last years, statistical machine learning (ML) has become very successful and has triggered a renaissance of artificial intelligence (AI). The most successful ML models, including deep neural networks (DNNs), have gained enormously in predictive power, but at the same time they have steadily increased in complexity. Unfortunately, this has often come at the expense of human comprehensibility and interpretability (correlation vs. causality). Consequently, an active field of research called explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable, i.e. understandable for humans. The steadily growing xAI community has already achieved important advances, such as robust heatmap-based explanations of DNN classifiers.

Applications in digital transformation (e.g. agriculture, climate, forest operations, medical and health applications, cyber-physical systems, automation tools and robotics, sustainable living, sustainable cities, smart farming, etc.) now demand engagement with new scenarios, such as explaining unsupervised and reinforcement learning, and creating explanations that are optimally structured for human decision makers. While explainable AI fundamentally deals with making statistical black-box ML methods transparent and traceable, there is an urgent need to go beyond explainable AI, e.g. to extend explainable AI with causability, to measure the quality of explanations, and to find out how to build efficient human-AI interfaces for these novel interactions between artificial and human intelligence. For certain tasks, interactive machine learning with the human-in-the-loop can be advantageous, because a human domain expert can sometimes complement the AI with implicit knowledge. Such a human-in-the-loop can sometimes – not always, of course – contribute experience, conceptual understanding, context awareness and causal reasoning. Humans are robust, can generalize from a few examples and are able to understand context even from little data. Formalized, this human knowledge can be used to create structural causal models of human decision making, whose features can in turn be used to train AI, thus helping to make current AI even more successful beyond the current state of the art.
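
To make the notion of a heatmap-based explanation concrete, here is a minimal, hedged sketch of gradient saliency for an image classifier. It assumes PyTorch and a pretrained torchvision ResNet-18, and it uses a random tensor as a stand-in for a preprocessed image; it is purely illustrative and not the specific method of any chapter in this volume.

    # Minimal gradient-saliency sketch (one common heatmap-style explanation).
    # Assumptions: PyTorch and torchvision are installed; the random tensor
    # below stands in for a properly preprocessed image.
    import torch
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
    logits = model(x)
    target = int(logits.argmax(dim=1))                   # index of the predicted class
    logits[0, target].backward()                         # gradients of that score w.r.t. the input

    heatmap = x.grad.abs().max(dim=1).values[0]          # collapse the colour channels
    heatmap = heatmap / (heatmap.max() + 1e-12)          # normalise to [0, 1]
    # 'heatmap' now highlights the pixels that most influenced the predicted class score.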

The field of explainable AI has attracted rapidly growing interest in the international machine learning and AI research community. Awareness of the need to explain ML models has grown in similar proportions in industry, academia and government. With the substantial explainable AI research community that has formed, there is now a great opportunity to push towards successful explainable AI applications. With this volume of Springer Lecture Notes in Artificial Intelligence (LNAI), we want to help the international research community accelerate this process, promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed.

Contact: andreas.holzinger AT human-centered.ai

Editors

Andreas HOLZINGER, Human-Centered AI Lab, University of Natural Resources and Life Sciences Vienna, Austria

Randy GOEBEL, xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton (AB), Canada

Ruth FONG, Department of Computer Science, Princeton University, Princeton (NJ), USA

Taesup MOON, Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea

Klaus-Robert MÜLLER, TU Berlin, BIFOLD Berlin, Germany, and Korea University, Seoul, Korea

Wojciech SAMEK, Department for Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany

International Scientific Advisory Board

Osbert BASTANI, Trustworthy Machine Learning Group, University of Pennsylvania, Philadelphia, PA, USA

Tarek R. BESOLD, Neurocat.ai, Artificial Intelligence Safety, Security and Privacy, Berlin, Germany

Przemyslaw BIECEK, Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland

Alexander BINDER, Information Systems Technology and Design, Singapore University of Technology and Design, Singapore

John M. CARROLL, Penn State’s University Center for Human-Computer Interaction, University Park, PA, USA

Sanjoy DASGUPTA, Artificial Intelligence Group, School of Engineering, University of California, San Diego, CA, USA

Amit DHURANDHAR, Machine Learning and Data Mining Group, Thomas J. Watson Research Center, Yorktown Heights, NY, USA

David EVANS, Department of Computer Science, University of Virginia, Charlottesville, VA, USA

Alexander FELFERNIG, Applied Artificial Intelligence (AIG) research group, Graz University of Technology, Graz, Austria

Aldo FAISAL, Brain and Behaviour Lab, Imperial College London, UK

Hani HAGRAS, School of Computer Science and Electronic Engineering, University of Essex, UK

Sepp HOCHREITER, Institute for Machine Learning, Johannes Kepler University Linz, Austria

Xiaowei HUANG, Department of Computer Science, University of Liverpool, UK

Krishnaram KENTHAPADI, Amazon AWS Artificial Intelligence, Fairness Transparency and Explainability Group, Sunnyvale, CA, USA

Gitta KUTYNIOK, Mathematical Data Science and Artificial Intelligence, Ludwig Maximilians Universität München, Germany

Himabindu LAKKARAJU, AI4LIFE Group and TrustML, Department of Computer Science, Harvard University, USA

Grégoire MONTAVON, Machine Learning & Intelligent Data Analysis Group, Faculty of Electrical Engineering & Computer Science, TU Berlin, Germany

Sang Min PARK, Data Science Lab, Department of Biomedical Science, Seoul National University, Seoul, Korea

Natalia DIAZ-RODRIGUEZ, Autonomous Systems and Robotics Lab, École Nationale Supérieure de Techniques Avancées, Paris, France

Lior ROKACH, Dep. of Software & Information Systems Engineering, Faculty of Engineering Sciences, Ben-Gurion University of the Negev, Israel

Ribana ROSCHER, Institute for Geodesy and Geoinformation, University of Bonn, Germany

Kate SAENKO, Computer Vision and Learning Group, Boston University, MA, USA

Sameer SINGH, Department of Computer Science, University of California, Irvine, CA, USA

Ankur TALY, Google Research, Mountain View, CA, USA

Andrea VEDALDI, Visual Geometry Group, Engineering Science Department, University of Oxford, UK

Ramakrishna VEDANTAM, Facebook AI Research (FAIR), New York, NY, USA

Bolei ZHOU, Department of Information Engineering, The Chinese University of Hong Kong, China

Jianlong ZHOU, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia

Table of Contents

Editorial

xxAI – Beyond Explainable Artificial Intelligence
Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, Wojciech Samek … 1-8 (8 p.)

Part 1: Current Methods and Challenges

Ch1: Explainable AI Methods – A Brief Overview
Andreas Holzinger, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, Wojciech Samek … 9-35 (26 p.)

Ch2: Bhatt et al., Challenges in Deploying Explainable Machine Learning … 35-54 (20 p.)

Ch3: Molnar et al., General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models … 55-84 (30 p.)

Ch4: Salewski et al., CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations … 85-104 (20 p.)

Part 2: New Developments in Explainable AI

Ch5: Kolek et al., A Rate-Distortion Framework for Explaining Black-box Model Decisions … 105-128 (24 p.)

Ch6: Montavon et al., Explaining the Predictions of Unsupervised Learning Models … 129-150 (22 p.)

Ch7: Karimi et al., Towards Causal Algorithmic Recourse … 151-180 (30 p.)

Ch8: Zhou, Interpreting Generative Adversarial Networks for Interactive Image Generation … 181-188 (8 p.)

Ch9: Dinu et al., XAI and Strategy Extraction via Reward Redistribution … 189-218 (30 p.)

Ch10: Bastani et al., Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis … 219-240 (22 p.)

Ch11: Singh et al., Interpreting and Improving Deep-Learning Models with Reality Checks … 241-266 (26 p.)

Ch12: Bargal et al., Beyond the Visual Analysis of Deep Model Saliency … 267-282 (16 p.)

Ch13: Becking et al., ECQ^2: Quantization for Low-Bit and Sparse DNNs … 283-308 (26 p.)

Ch14: Marcos et al., A Whale’s Tail – Finding the Right Whale in an Uncertain World … 309-324 (16 p.)

Ch15: Mamalakis et al., Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science … 325-350 (26 p.)

Part 3: An Interdisciplinary Approach to Explainable AI

Ch16: Hacker and Passoth, Varieties of AI Explanations under the Law. From the GDPR to the AIA, and Beyond … 351-382 (32 p.)

Ch17: Zhou et al., Towards Explainability for AI Fairness … 383-394 (12 p.)

Ch18: Tsai and Carroll, Logic and Pragmatics in AI Explanation … 395-404 (10 p.)

Total pages: 404

INSTRUCTIONS FOR AUTHORS:

A) VOLUME SCOPE

We call for contributions that focus on, but are not limited to, the following topics with cross-domain applications:

– Explanations beyond DNN classifiers: random forests, unsupervised learning, reinforcement learning
– Explanations beyond heat maps: structured explanations, Q/A and dialog systems, human-in-the-loop
– Explanation beyond explanation: improving ML models and algorithms, verifying ML, gaining insights

specifically including, but not limited to (alphabetically, not prioritized):

  • Adversarial attacks and explainability
  • Believability and manipulability of explanations (especially in contexts where they need to meet a legal evidence standard)
  • Explainability, Causality, Causability (Causa-bi-lity is not a typo, see the definitions below *)
  • Counterfactual explanations (see the illustrative sketch after the definitions below)
  • Dialogue Systems for xAI
  • Graph Neural Networks
  • Human-AI interfaces
  • Human-centered AI and responsibility
  • Human interpretability
  • Interactive Machine Learning with the human-in-the-loop
  • Interpretable Models (vs. post-hoc explanations)
  • Intelligent User Interfaces
  • Knowledge Graphs
  • Multi-Classifier Systems
  • Ontologies and xAI
  • Question/Answering Dialog Systems
  • Explainability and Recommender systems

*) Please distinguish:

  • Explainability := technically highlights the decision-relevant parts of machine representations and machine models, i.e. the parts which contributed to model accuracy in training or to a specific prediction. It does NOT refer to a human model!
  • Causality := the relationship between cause and effect in the sense of Pearl.
  • Causability := the measurable extent to which an explanation achieves, for a human, a specified level of causal understanding (see Holzinger). It DOES refer to a human model!
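
To make one topic from the list above concrete, here is the counterfactual-explanation sketch referenced there: in the spirit of Wachter et al., it searches for a minimally changed input that flips the prediction of a toy logistic-regression model. PyTorch is assumed, and all weights, biases and inputs are made-up placeholders, so this is an illustration of the idea rather than a reference implementation.

    # Toy counterfactual search on a hand-written logistic-regression model.
    # Assumptions: PyTorch is installed; weights, bias and inputs are placeholders.
    import torch

    w = torch.tensor([1.5, -2.0])                 # assumed model weights
    b = torch.tensor(0.3)                         # assumed bias

    def prob_positive(x):                         # P(class = 1 | x)
        return torch.sigmoid(x @ w + b)

    x_orig = torch.tensor([0.2, 0.9])             # currently classified as class 0
    x_cf = x_orig.clone().requires_grad_(True)    # counterfactual candidate to optimise
    optimizer = torch.optim.Adam([x_cf], lr=0.05)

    for _ in range(500):
        optimizer.zero_grad()
        # Push the prediction towards class 1 while staying close to the original (L1 penalty).
        loss = (prob_positive(x_cf) - 1.0) ** 2 + 0.1 * torch.norm(x_cf - x_orig, p=1)
        loss.backward()
        optimizer.step()

    # x_cf is now a counterfactual: a nearby input for which the model predicts class 1.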

Papers which deal with fundamental research questions and theoretical aspects are very welcome.

N.B.: This volume will be indexed by SCI (as it is a post-proceedings of our ICML workshop) and will be made gold open access, i.e. it is published under a CC BY license and the copyright remains with the authors!

B) SCHEDULE

We will ensure the highest possible quality, to provide a clear benefit to potential readers; this requires careful reviewing and revision phases. For the schedule, please see the “quick facts” above.

Please send your paper proposal to the contact given above or approach one of the editors directly. After a first thematic inspection and quality check, you will receive an invitation to submit. Prepare your paper following the Springer llncs2e style (llncs.cls, splncs.bst); the template is conveniently available on Overleaf:
https://www.overleaf.com/latex/templates/springer-lecture-notes-in-computer-science/kzwwpvhwnvfj

The ideal paper length is between 10 and 20 pages, but we are not strict about this; our only request is that you produce an even number of pages to ensure smooth page breaks, e.g. 10, 12, 14, 16, 18, 20 or 22 pages.

You can find the general Springer LNCS information page here.

C) CHAPTER SUBMISSION

Please submit your paper directly to the Microsoft CMT using this link:

https://cmt3.research.microsoft.com/XXAI2021/

D) REVIEW PHASE

Your paper will be assigned to at least three reviewers from the international xAI community, so that you receive useful feedback on how to further improve it. You will be notified in due course to prepare the final version. For full transparency of the review process, you can find the review template here (scroll down to the middle of the page):

REVIEW-TEMPLATE-2020-XXXX

E) REVISION PHASE

Please revise your paper according to the reviewers’ requests.

Upon acceptance, please send the following two items directly to the address given above:
1) Your paper as a PDF (please ensure an even number of pages, e.g. … 14, 16, 18, 20, 22 … pages)
2) Your source files (LaTeX preferred – pack all source files into a single zip archive)

THIS VOLUME WILL BE MADE FULLY GOLD OPEN ACCESS, so no letter of consent is necessary.

F) PRODUCTION PHASE

Your files will be carefully checked and sent into Springer production. Authors will be contacted directly by the Springer production team to check the page proofs. As a token of our gratitude, you will receive one hard copy of the printed volume fresh from the press. The electronic edition will be made freely available, open access, to the international research community. We are grateful to the sponsors of the open access fee – link – (tba.)