Our Lecture Notes in Computer Science volume LNCS 12279 of the CD-MAKE Machine Learning & Knowledge Extraction conference has been published and is available online:
https://link.springer.com/book/10.1007/978-3-030-57321-8
Content at a glance:
Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions (p. 1) by Luca Longo, Randy Goebel, Freddy Lecue, Peter Kieseberg, and Andreas Holzinger
The Explanation Game: Explaining Machine Learning Models Using Shapley Values (p. 17) by Luke Merrick and Ankur Taly
Back to the Feature: A Neural-Symbolic Perspective on Explainable AI (p. 39) by Andrea Campagner and Federico Cabitza
Explain Graph Neural Networks to Understand Weighted Graph Features in Node Classification (p. 57) by Xiaoxiao Li and João Saúde
Explainable Reinforcement Learning: A Survey (p. 77) by Erika Puiutta and Eric M. S. P. Veith
A Projected Stochastic Gradient Algorithm for Estimating Shapley Value Applied in Attribute Importance (p. 97) by Grah Simon and Thouvenot Vincent
Explaining Predictive Models with Mixed Features Using Shapley Values and Conditional Inference Trees (p. 117) by Annabelle Redelmeier, Martin Jullum, and Kjersti Aas
Explainable Deep Learning for Fault Prognostics in Complex Systems: A Particle Accelerator Use-Case (p. 139) by Lukas Felsberger, Andrea Apollonio, Thomas Cartier-Michaud, Andreas Müller, Benjamin Todd, and Dieter Kranzlmüller
eXDiL: A Tool for Classifying and eXplaining Hospital Discharge Letters (p. 159) by Fabio Mercorio, Mario Mezzanzanica, and Andrea Seveso
Cooperation Between Data Analysts and Medical Experts: A Case Study (p. 173) by Judita Rokošná, František Babič, Ljiljana Trtica Majnarić, and Ľudmila Pusztová
A Study on the Fusion of Pixels and Patient Metadata in CNN-Based Classification of Skin Lesion Images (p. 191) by Fabrizio Nunnari, Chirag Bhuvaneshwara, Abraham Obinwanne Ezema, and Daniel Sonntag
The European Legal Framework for Medical AI (p. 209) by David Schneeberger, Karl Stöger, and Andreas Holzinger
An Efficient Method for Mining Informative Association Rules in Knowledge Extraction (p. 227) by Parfait Bemarisika and André Totohasina
Interpretation of SVM Using Data Mining Technique to Extract Syllogistic Rules: Exploring the Notion of Explainable AI in Diagnosing CAD (p. 249) by Sanjay Sekar Samuel, Nik Nailah Binti Abdullah, and Anil Raj
Non-local Second-Order Attention Network for Single Image Super Resolution (p. 267) by Jiawen Lyn and Sen Yan
ML-ModelExplorer: An Explorative Model-Agnostic Approach to Evaluate and Compare Multi-class Classifiers (p. 281) by Andreas Theissler, Simon Vollert, Patrick Benz, Laurentius A. Meerhoff, and Marc Fernandes
Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints (p. 301) by Martin Teuffenbach, Ewa Piatkowska, and Paul Smith
Scenario-Based Requirements Elicitation for User-Centric Explainable AI: A Case in Fraud Detection (p. 321) by Douglas Cirqueira, Dietmar Nedbal, Markus Helfert, and Marija Bezbradica
On-the-fly Black-Box Probably Approximately Correct Checking of Recurrent Neural Networks (p. 343) by Franz Mayr, Ramiro Visca, and Sergio Yovine
Active Learning for Auditory Hierarchy (p. 365) by William Coleman, Charlie Cullen, Ming Yan, and Sarah Jane Delany
Improving Short Text Classification Through Global Augmentation Methods (p. 385) by Vukosi Marivate and Tshephisho Sefara
Interpretable Topic Extraction and Word Embedding Learning Using Row-Stochastic DEDICOM (p. 401) by Lars Hillebrand, David Biesner, Christian Bauckhage, and Rafet Sifa
A Clustering Backed Deep Learning Approach for Document Layout Analysis (p. 423) by Rhys Agombar, Max Luebbering, and Rafet Sifa
Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias (p. 431) by Philipp Schmidt and Felix Biessmann
Applying AI in Practice: Key Challenges and Lessons Learned (p. 451) by Lukas Fischer, Lisa Ehrlinger, Verena Geist, Rudolf Ramler, Florian Sobieczky, Werner Zellinger, and Bernhard Moser
Function Space Pooling for Graph Convolutional Networks (p. 473) by Padraig Corcoran
Analysis of Optical Brain Signals Using Connectivity Graph Networks (p. 485) by Marco Antonio Pinto-Orellana and Hugo L. Hammer
Property-Based Testing for Parameter Learning of Probabilistic Graphical Models (p. 499) by Anna Saranti, Behnam Taraghi, Martin Ebner, and Andreas Holzinger
An Ensemble Interpretable Machine Learning Scheme for Securing Data Quality at the Edge (p. 517) by Anna Karanika, Panagiotis Oikonomou, Kostas Kolomvatsos, and Christos Anagnostopoulos
Inter-space Machine Learning in Smart Environments (p. 535) by Amin Anjomshoaa and Edward Curry
The International Cross Domain Conference for MAchine Learning & Knowledge Extraction (CD-MAKE) is a joint effort of IFIP TC 5 (IT), TC 12 (Artificial Intelligence), IFIP WG 8.4 (E-Business), IFIP WG 8.9 (Information Systems), and IFIP WG 12.9 (Computational Intelligence) and is held in conjunction with the International Conference on Availability, Reliability and Security (ARES), see:
https://www.ares-conference.eu/
The 4th conference is organized at University College Dublin, Ireland, and held as a virtual event due to the coronavirus pandemic. A few words about the International Federation for Information Processing (IFIP):
IFIP is the leading multinational, non-governmental, apolitical organization in Information and Communications Technologies and Computer Sciences. It is recognized by the United Nations (UN) and was established in 1960 under the auspices of UNESCO as an outcome of the first World Computer Congress, held in Paris in 1959.
Visual Feature Concepts of Intestinal Glands – Darmdrüsen Survey
/in experiments/by Andreas Holzinger: Please take part in our study on verbal descriptions and explanations of medical concepts relevant for concept machine learning in medical AI.
Please take part in our “human explanation survey”
/in experiments, Explainability, Projects/by Andreas Holzinger: The Human-Centered AI Lab invites the international research community to take part in a human explanation survey.
Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI
/in Explainability, HCAI success, Recent Publications, Science News/by Andreas Holzinger: Our paper “Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI” was published on 27 January 2021 in the journal Information Fusion (Q1, IF = 13.669, rank 2/137 in the field of Computer Science, Artificial Intelligence):
https://doi.org/10.1016/j.inffus.2021.01.008
We are grateful for the valuable comments of the anonymous reviewers. Parts of this work have received funding from the EU project FeatureCloud. The FeatureCloud project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826078. This publication reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. Parts of this work have been funded by the Austrian Science Fund (FWF), Project P-32554 “explainable Artificial Intelligence”.
Research Seminar, Wednesday, January 13, 2021, 15:00 CET
/in HCAI-event, Lectures/by Andreas Holzinger: HCAI research seminar
Research Seminar, Wednesday, December 9, 2020
/in HCAI-event, Lectures/by Andreas Holzinger: HCAI research seminar: “Towards Games in explainable AI” and “simultaneous neural nets and synthesized literate-logic-programs”
Measuring the Quality of Explanations just exceeded 5k downloads
/in HCAI success, Recent Publications, Science News/by Andreas Holzinger: In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al., Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9(4), 2019) combined with concepts adapted from a widely accepted usability scale; a minimal scoring sketch follows below.
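For illustration only, here is a minimal sketch of how such a Likert-based scale can be aggregated into a single normalized score. It assumes ten items rated from 1 to 5 and a simple sum-based normalization, analogous in spirit to the System Usability Scale; the function name scs_like_score and the normalization are illustrative assumptions, not the published SCS instrument.

# Illustrative sketch only (not the published SCS instrument): assume ten
# statements, each rated on a 1-5 Likert scale, aggregated into one score.

def scs_like_score(ratings):
    """Normalize ten Likert ratings (1-5) to a score between 0.2 and 1.0."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected exactly ten ratings in the range 1-5")
    return sum(ratings) / (5 * len(ratings))

# Example: an explanation rated mostly with 4s and 5s yields a high score.
print(scs_like_score([5, 4, 4, 5, 3, 4, 5, 4, 4, 5]))  # 0.86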
August 25-28, 2020, Machine Learning & Knowledge Extraction, LNCS 12279 published!
/in Recent Publications/by Andreas Holzinger: Our Lecture Notes in Computer Science volume LNCS 12279 of the CD-MAKE Machine Learning & Knowledge Extraction conference has been published (see the full table of contents above).
Information Fusion at rank 2 out of 136 in the field of Artificial Intelligence: open call on xAI
/in Calls for Papers, Science News/by Andreas Holzinger: The journal Information Fusion made it to rank 2 out of 136 journals in the field of Artificial Intelligence. Congratulations to Francisco Herrera; this is excellent for our special issue on rAI, which goes beyond xAI towards accountability, privacy, safety and security.
Artificial Intelligence and Machine Learning for Digital Pathology
/in Recent Publications/by Andreas Holzinger: The Springer Lecture Notes in Artificial Intelligence volume LNAI 12090 has been published and is available online.
ICML Workshop on Interpretable Machine Learning, July 18, 2020
/in Calls for Papers, Conferences/by Andreas Holzinger: Welcome to our XXAI ICML 2020 workshop: Extending Explainable AI Beyond Deep Models and Classifiers.