Explainable AI Methods – A brief overview (open access)

open access paper available – free to the international research community

The Next Frontier – AI we can really Trust

Robustness and explainability are the two key ingredients for ensuring trustworthy artificial intelligence – talk at ECML 2021

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI was published on 27 January 2021 in the journal Information Fusion (Q1, IF=13.669, rank 2/137 in the field Computer Science, Artificial Intelligence):

https://doi.org/10.1016/j.inffus.2021.01.008

We are grateful for the valuable comments of the anonymous reviewers. Parts of this work have received funding from the EU Project FeatureCloud. The FeatureCloud project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826078. This publication reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. Parts of this work have been funded by the Austrian Science Fund (FWF), Project P-32554 “explainable Artificial Intelligence”.

Measuring the Quality of Explanations just exceeded 5k downloads

In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
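As a rough illustration of how such a scale yields a single quality score, here is a minimal sketch in Python. It assumes a SUS-style scheme with ten items rated on a five-point Likert scale and the total normalized by the maximum of 50; the exact item wording and scoring rule are defined in the paper, and the example ratings below are invented.

```python
# Illustrative sketch of a SUS-style scoring scheme for a ten-item,
# five-point Likert questionnaire such as the System Causability Scale.
# The normalization to [0, 1] by dividing by 50 is an assumption here.

def scs_score(ratings: list[int]) -> float:
    """Compute a normalized score from ten Likert ratings (1..5)."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings between 1 and 5")
    return sum(ratings) / 50.0  # maximum possible total is 10 * 5 = 50

# Example: a mostly positive evaluation of an explanation interface
print(scs_score([4, 5, 4, 4, 3, 5, 4, 4, 5, 4]))  # -> 0.84
```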

August 25–28, 2020: Machine Learning & Knowledge Extraction, LNCS 12279 published!

The proceedings of our CD-MAKE Machine Learning & Knowledge Extraction conference have been published as Lecture Notes in Computer Science LNCS 12279 and are available online:

https://link.springer.com/book/10.1007/978-3-030-57321-8

Content at a glance:

Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions – Luca Longo, Randy Goebel, Freddy Lecue, Peter Kieseberg, and Andreas Holzinger
The Explanation Game: Explaining Machine Learning Models Using Shapley Values – Luke Merrick and Ankur Taly
Back to the Feature: A Neural-Symbolic Perspective on Explainable AI – Andrea Campagner and Federico Cabitza
Explain Graph Neural Networks to Understand Weighted Graph Features in Node Classification – Xiaoxiao Li and João Saúde
Explainable Reinforcement Learning: A Survey – Erika Puiutta and Eric M. S. P. Veith
A Projected Stochastic Gradient Algorithm for Estimating Shapley Value Applied in Attribute Importance – Grah Simon and Thouvenot Vincent
Explaining Predictive Models with Mixed Features Using Shapley Values and Conditional Inference Trees – Annabelle Redelmeier, Martin Jullum, and Kjersti Aas
Explainable Deep Learning for Fault Prognostics in Complex Systems: A Particle Accelerator Use-Case – Lukas Felsberger, Andrea Apollonio, Thomas Cartier-Michaud, Andreas Müller, Benjamin Todd, and Dieter Kranzlmüller
eXDiL: A Tool for Classifying and eXplaining Hospital Discharge Letters – Fabio Mercorio, Mario Mezzanzanica, and Andrea Seveso
Cooperation Between Data Analysts and Medical Experts: A Case Study – Judita Rokošná, František Babič, Ljiljana Trtica Majnarić, and Ľudmila Pusztová
A Study on the Fusion of Pixels and Patient Metadata in CNN-Based Classification of Skin Lesion Images – Fabrizio Nunnari, Chirag Bhuvaneshwara, Abraham Obinwanne Ezema, and Daniel Sonntag
The European Legal Framework for Medical AI – David Schneeberger, Karl Stöger, and Andreas Holzinger
An Efficient Method for Mining Informative Association Rules in Knowledge Extraction – Parfait Bemarisika and André Totohasina
Interpretation of SVM Using Data Mining Technique to Extract Syllogistic Rules: Exploring the Notion of Explainable AI in Diagnosing CAD – Sanjay Sekar Samuel, Nik Nailah Binti Abdullah, and Anil Raj
Non-local Second-Order Attention Network for Single Image Super Resolution – Jiawen Lyn and Sen Yan
ML-ModelExplorer: An Explorative Model-Agnostic Approach to Evaluate and Compare Multi-class Classifiers – Andreas Theissler, Simon Vollert, Patrick Benz, Laurentius A. Meerhoff, and Marc Fernandes
Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints – Martin Teuffenbach, Ewa Piatkowska, and Paul Smith
Scenario-Based Requirements Elicitation for User-Centric Explainable AI: A Case in Fraud Detection – Douglas Cirqueira, Dietmar Nedbal, Markus Helfert, and Marija Bezbradica
On-the-fly Black-Box Probably Approximately Correct Checking of Recurrent Neural Networks – Franz Mayr, Ramiro Visca, and Sergio Yovine
Active Learning for Auditory Hierarchy – William Coleman, Charlie Cullen, Ming Yan, and Sarah Jane Delany
Improving Short Text Classification Through Global Augmentation Methods – Vukosi Marivate and Tshephisho Sefara
Interpretable Topic Extraction and Word Embedding Learning Using Row-Stochastic DEDICOM – Lars Hillebrand, David Biesner, Christian Bauckhage, and Rafet Sifa
A Clustering Backed Deep Learning Approach for Document Layout Analysis – Rhys Agombar, Max Luebbering, and Rafet Sifa
Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias – Philipp Schmidt and Felix Biessmann
Applying AI in Practice: Key Challenges and Lessons Learned – Lukas Fischer, Lisa Ehrlinger, Verena Geist, Rudolf Ramler, Florian Sobieczky, Werner Zellinger, and Bernhard Moser
Function Space Pooling for Graph Convolutional Networks – Padraig Corcoran
Analysis of Optical Brain Signals Using Connectivity Graph Networks – Marco Antonio Pinto-Orellana and Hugo L. Hammer
Property-Based Testing for Parameter Learning of Probabilistic Graphical Models – Anna Saranti, Behnam Taraghi, Martin Ebner, and Andreas Holzinger
An Ensemble Interpretable Machine Learning Scheme for Securing Data Quality at the Edge – Anna Karanika, Panagiotis Oikonomou, Kostas Kolomvatsos, and Christos Anagnostopoulos
Inter-space Machine Learning in Smart Environments – Amin Anjomshoaa and Edward Curry

The International Cross Domain Conference for MAchine Learning & Knowledge Extraction (CD-MAKE) is a joint effort of IFIP TC 5 (IT), TC 12 (Artificial Intelligence), IFIP WG 8.4 (E-Business), IFIP WG 8.9 (Information Systems), and IFIP WG 12.9 (Computational Intelligence) and is held in conjunction with the International Conference on Availability, Reliability and Security (ARES), see: 

https://www.ares-conference.eu/

The 4th conference was organized at University College Dublin, Ireland, and held as a virtual event due to the COVID-19 pandemic. A few words about the International Federation for Information Processing (IFIP):

IFIP is the leading multi-national, non-governmental, apolitical organization in Information and Communications Technologies and Computer Sciences. It is recognized by the United Nations (UN) and was established in 1960 under the auspices of UNESCO, as an outcome of the first World Computer Congress held in Paris in 1959.


AI and Machine Learning for Digital Pathology


The Springer Lecture Notes in Artificial Intelligence LNAI 12090 have been published and are available online.

Causability is important. Why?

Effective (future) human-AI interaction must take into account a context-specific mapping between explainable AI and human understanding.

Explainability vs. Causability of Artificial Intelligence in Medicine

In our recent highly cited paper we define the notion of causability, which differs from explainability in that causability is a property of a person, while explainability is a property of a system!

Interactive Machine Learning: Experimental Evidence for the Human-in-the-Loop

Recent advances in automatic machine learning (aML) allow solving problems without any human intervention, which is excellent in certain domains, e.g. in autonomous cars, where we want to exclude the human from the loop and want fully automatic learning. However, sometimes a human-in-the-loop can be beneficial, particularly in solving computationally hard problems. We provide new experimental insights [1] on how we can improve computational intelligence by complementing it with human intelligence in an interactive machine learning (iML) approach.

For this purpose we used an Ant Colony Optimization (ACO) framework, because it fosters multi-agent approaches with human agents in the loop, and because ACO is among the best-performing algorithms in many applied intelligence problems. We propose uniting human intelligence and interaction skills with the computational power of an artificial system. As a case study, the ACO framework is applied to the Traveling Salesman Problem, chosen for its many practical implications, e.g. in the medical domain.

For the evaluation we used gamification: we implemented a snake-like game called Traveling Snakesman with the MAX–MIN Ant System (MMAS) in the background. We extended the MMAS algorithm in such a way that the human can directly interact with and influence the ants. This is done by “traveling” with the snake across the graph. Each time the human travels over an ant, the current pheromone value of that edge is multiplied by 5. This manipulation affects the ants’ behavior: the probability that an ant takes this edge increases. A minimal sketch of this mechanism is given below.

The results show that humans performing one tour through the graph have a significant impact on the shortest path found by the MMAS. Consequently, our experiment demonstrates that, in our case, human intelligence can positively influence machine intelligence. To the best of our knowledge this is the first study of this kind, and it provides a wonderful experimental platform for explainable AI.
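To make the mechanism concrete, here is a minimal sketch in Python of a MAX–MIN Ant System on a small random TSP instance with a hook for the ×5 human pheromone boost. All parameter values, the instance, and the function names (e.g. human_boost) are illustrative assumptions for this sketch, not the actual settings or code of the Traveling Snakesman experiment.

```python
# Sketch of MMAS for the TSP with a human-in-the-loop pheromone boost.
import math
import random

random.seed(42)
N = 8                                            # number of cities
cities = [(random.random(), random.random()) for _ in range(N)]
dist = [[math.dist(a, b) or 1e-9 for b in cities] for a in cities]

ALPHA, BETA, RHO = 1.0, 2.0, 0.1                 # pheromone weight, heuristic weight, evaporation
TAU_MAX, TAU_MIN = 1.0, 0.01                     # MMAS pheromone bounds
tau = [[TAU_MAX] * N for _ in range(N)]          # initialize at the upper bound

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

def construct_tour():
    """One ant builds a tour, choosing edges by pheromone and inverse distance."""
    tour = [random.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [(tau[i][j] ** ALPHA) * ((1.0 / dist[i][j]) ** BETA) for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

def human_boost(i, j, factor=5.0):
    """Human interaction: traveling over an edge multiplies its pheromone."""
    tau[i][j] = tau[j][i] = min(TAU_MAX, tau[i][j] * factor)

best_tour, best_len = None, float("inf")
for iteration in range(100):
    tours = [construct_tour() for _ in range(10)]
    iter_best = min(tours, key=tour_length)
    if tour_length(iter_best) < best_len:
        best_tour, best_len = iter_best, tour_length(iter_best)
    # Evaporation, clamped to the MMAS lower bound
    for i in range(N):
        for j in range(N):
            tau[i][j] = max(TAU_MIN, (1.0 - RHO) * tau[i][j])
    # MMAS: only the best-so-far ant deposits pheromone, clamped to the upper bound
    for k in range(N):
        i, j = best_tour[k], best_tour[(k + 1) % N]
        tau[i][j] = tau[j][i] = min(TAU_MAX, tau[i][j] + 1.0 / best_len)
    # In the game, the snake crossing edge (0, 1) would trigger: human_boost(0, 1)

print(f"best tour length: {best_len:.3f}")
```

In the game the boost is triggered by the snake crossing an edge; here it is exposed as a plain function call so the effect on the edge-selection probabilities is easy to see.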

[1] Holzinger, A., et al. (2018). Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, Springer Nature, doi:10.1007/s10489-018-1361-5.

Read the full article here:
https://link.springer.com/article/10.1007/s10489-018-1361-5