>>> visit the 2020 Workshop in Dublin <<<

MAKE-Explainable AI (MAKE – eXAI) Workshop

Canterbury, Kent (UK), August 26-29, 2019

CD-MAKE 2019 Workshop on explainable Artificial Intelligence supported by IFIP and Springer/Nature
in the context of the CD-MAKE conference and the
14th International Conference on Availability, Reliability and Security ARES 2019

see the published papers: https://link.springer.com/book/10.1007/978-3-030-29726-8

see the main conference Webpage: https://cd-make.net

This page is current as of 20.09.2019 20:00 MST

HISTORY of this workshop:

After the success of our 1st international workshop on explainable AI in Hamburg, see:

and see our output in Springer Lecture Notes in Computer Science LNCS 11015

we organize our 2nd international workshop on explainable AI at IFIP CD-MAKE 2019 in Canterbury/Kent (UK)
from August 26 to August 29, 2019

GOAL of this workshop:

In this cross-disciplinary workshop we aim to bring together international cross-domain experts interested in artificial intelligence/machine learning to stimulate research, engineering and evaluation in and for explainable AI – towards making machine decisions transparent, re-enactable, comprehensible, interpretable and thus explainable, re-traceable and reproducible – and towards causality research, one of the cornerstones of scientific research per se.

SUBMISSION to this workshop:

All submissions will be peer reviewed by three members of our international scientific committee. Accepted papers will be presented at the workshop orally or as a poster and published in the IFIP CD-MAKE volume of the Springer Lecture Notes in Computer Science (LNCS); see LNCS 11015 as an example.

Additionally, there is the opportunity to submit to our thematic collection “Explainable AI in Medical Informatics and Decision Making” in Springer/Nature BMC Medical Informatics and Decision Making (MIDM), SCI impact factor 2.134 (see link below).


See our main conference page: https://cd-make.net

or the journal special issue page respectively:

BACKGROUND on this workshop:

Explainable AI is not a new field. In fact, the problem of explainability is as old as AI itself. While early expert systems consisted of handcrafted knowledge that enabled reasoning over at least a narrow, well-defined domain, such systems had no learning capabilities and were poor at handling uncertainty when (trying to) solve real-world problems. This was one reason for the bitter AI winter of the 1980s. The big success of current AI solutions and ML algorithms is due to the practical capabilities of statistical machine learning. However, despite this practical success, even at super-human level on certain problems, their effectiveness is still limited by their inability to “explain” decisions in a human-understandable way. Even if we understand the underlying mathematical theories, it is complicated and often impossible to get insight into the internal workings of the models, algorithms and tools, and to explain why a result was achieved. The responsibility, however, remains with the human expert, which affects trust in and acceptance of such systems. Future AI needs contextual adaptation, i.e. systems that help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence (human-in-the-loop; see the Obama interview).

TOPICS of this workshop:

In line with the general theme of the CD-MAKE conference – augmenting human intelligence with artificial intelligence, where science is to test crazy ideas and engineering is to bring these ideas into business – we foster cross-disciplinary and interdisciplinary work including but not limited to:

  • Novel methods, algorithms, tools, procedures for supporting explainability in AI/ML
  • Proof-of-concepts and demonstrators of how to integrate explainable AI into workflows and industrial processes
  • Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability
  • Work on causal machine learning
  • Theoretical approaches to explainability (“What makes a good explanation?”)
  • Philosophical approaches to explainability (“When is it enough – do we have a degree of saturation?”)
  • Towards argumentation theories of explanation and issues of cognition
  • Comparison of human intelligence vs. artificial intelligence (HCI — KDD)
  • Interactive machine learning with human(s)-in-the-loop (crowd intelligence)
  • Explanatory User Interfaces and Human-Computer Interaction (HCI) for explainable AI
  • Novel Intelligent User Interfaces and affective computing approaches
  • Fairness, accountability and trust
  • Ethical aspects and law, legal issues and social responsibility
  • Business aspects of explainable AI
  • Self-explanatory agents and decision support systems
  • Explanation agents and recommender systems
  • Combination of statistical learning approaches with large knowledge repositories (ontologies)

MOTIVATION for this workshop:

The grand goal of future explainable AI is to make results understandable and transparent and to answer questions of how and why a result was achieved. In short: “Can we explain how and why a specific result was achieved by an algorithm?” In the future it will be essential not only to answer the question “Which of these animals is a cat?”, but to answer “Why is it a cat?” [YouTube video] – “What are the underlying explanatory facts that led the machine learning algorithm to this decision?” This highly relevant emerging area is important for all application areas, ranging from health informatics [1] to cyber defense [2], [3]. A particular focus is on novel Human-Computer Interaction and intelligent user interfaces for interactive machine learning [4].

[1] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.
[2] David Gunning (2016)  DARPA program on explainable artificial intelligence
[3] Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger (2018). Can we trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support. ERCIM News, 112, (1), 42-43.
[4] Todd Kulesza, Margaret Burnett, Weng-Keen Wong & Simone Stumpf (2015). Principles of explanatory debugging to personalize interactive machine learning. Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI 2015), 2015 Atlanta. ACM, 126-137, doi:10.1145/2678025.2701399.
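The question “Why is it a cat?” is exactly what post-hoc attribution methods try to answer. As a purely illustrative sketch – a toy example, not a technique prescribed by this workshop – one simple post-hoc strategy is leave-one-out perturbation: withhold each input feature in turn and measure how much the model's score drops. All feature names and weights below are invented for illustration.

```python
# Toy leave-one-out attribution for a hypothetical "cat classifier".
# The model is a weighted sum of binary features; the explanation
# attributes the score to each feature by withholding it and
# measuring the resulting change in the score.

def score(features):
    # Hypothetical model: a simple weighted sum of binary input features.
    weights = {"whiskers": 0.4, "pointy_ears": 0.3, "fur": 0.2, "barks": -0.5}
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Attribute the score to each feature via leave-one-out perturbation."""
    baseline = score(features)
    attributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = 0  # withhold this feature
        attributions[name] = baseline - score(perturbed)
    return attributions

example = {"whiskers": 1, "pointy_ears": 1, "fur": 1, "barks": 0}
print(explain(example))
```

In this toy run, "whiskers" receives the largest attribution and would be reported as the strongest evidence for the "cat" decision. Perturbation-based methods such as LIME generalize this idea to real black-box models by probing them with many perturbed inputs.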

Example: One motivation is the European General Data Protection Regulation (GDPR, together with ISO/IEC 27001), which entered into force on May 25, 2018, and affects practically all machine learning and artificial intelligence applied in business. For example, it will be difficult to apply black-box approaches for professional use in certain business applications, because they are not re-traceable and are rarely able to explain on demand why a decision was made.

Note: The GDPR replaces the Data Protection Directive 95/46/EC of 1995. The regulation was adopted on 27 April 2016 and became enforceable on 25 May 2018 after a two-year transition period. Unlike a directive, it does not require national governments to pass any enabling legislation and is thus directly binding – which affects practically all data-driven businesses and particularly machine learning and AI technology.


Randy GOEBEL, University of Alberta, Edmonton, CA (workshop co-chair)
Yoichi HAYASHI, Meiji University, Kawasaki, JP (workshop co-chair)
Freddy LECUE, Accenture Artificial Intelligence Technology Labs, Dublin, IE and INRIA Sophia Antipolis, FR
Peter KIESEBERG, Secure Business Austria, SBA-Research Vienna, AT
Andreas HOLZINGER, Medical University Graz, AT  (workshop co-chair)

Inquiries please directly to a.holzinger AT human-centered.ai


in progress, see also the main scientific committee:

Note: All workshop committee members will also be listed in the main conference to a) be included in the EasyChair conference management system, and b) in the frontmatter of the Springer LNCS (see the pdf from last year here).

Jose Maria ALONSO, Centro Singular de Investigación en Tecnoloxías da Información, CiTiUS, University of Santiago de Compostela, ES
Christian BAUCKHAGE, Fraunhofer Institute Intelligent Analysis, IAIS, Sankt Augustin, and University of Bonn, DE
Vaishak BELLE, Belle Lab, Centre for Intelligent Systems and their Applications, School of Informatics, University of Edinburgh, UK
Benoît FRENAY, Université de Namur, BE
Enrico BERTINI, New York University, Tandon School of Engineering, US
Tarek R. BESOLD, Alpha Health AI Lab, Trustworthy AI leader, Telefonica Barcelona, ES
Guido BOLOGNA, Computer Vision and Multimedia Lab, Université de Genève, Geneva, CH
Federico CABITZA,  Università degli Studi di Milano-Bicocca, DISCO, Milano, IT
Ajay CHANDER, Computer Science Department, Stanford University and Fujitsu Labs of America, US
David EVANS, Computer Science Department, University of Virginia, US
Aldo FAISAL, Department of Computing, Brain and Behaviour Lab, Imperial College London, UK
Bryce GOODMAN, Oxford Internet Institute and San Francisco Bay Area, CA, US
Hani HAGRAS, Computational Intelligence Centre, School of Computer Science & Electronic Engineering, University of Essex, UK
Barbara HAMMER, Machine Learning Group, Bielefeld University, DE
Pim HASELAGER, Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
Shujun LI, Cyber Security Group, University of Kent, Canterbury, UK
Brian Y. LIM, National University of Singapore, SG
Luca LONGO, Knowledge & Data Engineering Group, Trinity College, Dublin, IE
Tim MILLER, School of Computing and Information Systems, The University of Melbourne, AU
Huamin QU, Human-Computer Interaction Group HKUST VIS, Hong Kong University of Science & Technology, CN
Daniele MAGAZZENI, Trusted Autonomous Systems Hub, King’s College London, UK
Stephen K. REED, Center for Research in Mathematics and Science Education, San Diego State University, US
Marco Tulio RIBEIRO, Guestrin Group, University of Washington, Seattle, WA, US
Brian RUTTENBERG, Charles River Analytics, Cambridge, MA, US
Wojciech SAMEK, Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, DE
Gerhard SCHURZ, Düsseldorf Center for Logic and Philosophy of Science, University Düsseldorf, DE
Marco SCUTARI, Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, Lugano, CH and Department of Statistics, University of Oxford, UK
Sameer SINGH, University of California UCI, Irvine, CA, US
Alison SMITH, University of Maryland, MD, US
Mohan SRIDHARAN, University of Auckland, NZ
Simone STUMPF,  City, University London, UK
Ramya Malur SRINIVASAN, Fujitsu Labs of America, Sunnyvale, CA, US
Andrea VEDALDI, Visual Geometry Group, University of Oxford, UK
Janusz WOJTUSIAK, Machine Learning and Inference Lab, George Mason University, Fairfax, US
Jianlong ZHOU, Faculty of Engineering and Information Technology (FEIT), University of Technology Sydney, AU


2019-09-02 Remember that the Springer Lecture Notes are available to you as a participant for free via this link: https://cd-make.net/proceedings/

2019-09-01 We had a great conference; thanks to all for taking part, and see you all in Dublin (Ireland) from 26 August to 29 August 2020.

2019-05-01 We are happy to confirm our keynote speaker: Janet BASTIMAN from MMC London, UK. Janet studied at Oxford University and received her PhD from Sussex University at the Centre for Computational Neuroscience and Robotics. Janet's career has focused on industry, where she has worked for large multinationals and smaller enterprises as Chief Technical Officer and continues to champion new technologies. She co-founded the Tech Women London meetup group and is treasurer of the IEEE UK STEM committee. Since 2018 she has also been a judge for Awards.AI, showcasing the best UK AI start-ups. Her interests are predominantly in combinatorial techniques in computer vision and the intersection of machine learning with biological inspiration.

2019-04-21 We are happy to confirm our keynote speaker: Yoichi HAYASHI from Meiji University, Tokyo, Japan. Yoichi received the Dr. Eng. degree in systems engineering from the Tokyo University of Science, Tokyo, and in 1986 he joined the Computer Science Department of Ibaraki University, Japan. Since 1996, he has been a Full Professor at the Computer Science Department, Meiji University, Tokyo. He has also been a Visiting Professor at the University of Alabama at Birmingham, USA, and the University of Canterbury in New Zealand, and has authored over 230 published computer science papers. His research interests include deep learning (DBN, CNN), especially the black-box nature of deep and shallow neural networks, transparency, interpretability and explainability of deep neural networks, XAI, rule extraction, high-performance classifiers, data mining, big data analytics, and medical informatics. He has been an Action Editor of Neural Networks and an Associate Editor of IEEE Transactions on Fuzzy Systems and of Artificial Intelligence in Medicine. Overall, he has served as an associate editor, guest editor, or reviewer for 50 academic journals. He has been a senior member of the IEEE since 2000.

2019-03-01 We are happy to welcome our keynote speaker: Wojciech SAMEK from the Fraunhofer Heinrich Hertz Institute in Berlin. Wojciech is head of the Machine Learning Group at the Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He studied Computer Science at Humboldt University of Berlin as a scholar of the German National Academic Foundation, and received his PhD in Machine Learning from the Technical University of Berlin in 2014. He was a visiting researcher at NASA Ames Research Center, Mountain View, CA, and a PhD Fellow at the Bernstein Center for Computational Neuroscience Berlin. He has co-organized workshops and tutorials on interpretable machine learning at various conferences, including CVPR, NIPS, ICASSP, MICCAI and ICIP. He is part of the Focus Group on AI for Health, a worldwide initiative led by the ITU and WHO on the application of machine learning technology to the medical domain. He is associated with the Berlin Big Data Center and the Berlin Center of Machine Learning and is a member of the editorial boards of Digital Signal Processing and PLOS ONE. He has co-authored more than 90 peer-reviewed papers, predominantly in the areas of deep learning, interpretable machine learning, neural network compression, robust signal processing and computer vision.

2019-01-18 We updated all information, sent out the call for papers and set up the EasyChair system

2018-12-24 We wish all our supporters, friends and colleagues a MERRY CHRISTMAS and a HAPPY 2019!

2018-10-20 Background Image with friendly permission of Michael D. BECKWITH

Canterbury Cathedral was founded in 597 and completely rebuilt between 1070 and 1077.

2018-10-14 Starting to confirm previous experts and inviting new experts to the scientific committee

2018-10-10 Official go from the Springer/Nature BMC journal office for the special issue

2018-08-30 Thank you all for your participation and support;
we hope to see you all again at the end of August 2019 in Canterbury, Kent, UK


2018-08-30 The introduction is available here (preprint, pdf, 835kB):
[GOEBEL et al (2018) Explainable-AI-the-new-42]
The official paper is available via SpringerLink: Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger (2018). Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015, Cham: Springer, 295-303, doi:10.1007/978-3-319-99740-7_21.

2018-08-30 The Slides of Randy GOEBEL are available here (with friendly permission of Randy):

2018-08-27 Springer Lecture Notes are available, see:

2018-05-30 Our Program is available via:

2018-05-10 Randy GOEBEL from the Alberta Machine Intelligence Institute has agreed to be our session keynote speaker, see:

2018-05-02 Due to International Workers’ Day we extend the official deadline to May 7, 2018 to enable a stress-free submission – please enjoy your holidays!

2018-03-21 Please note the deadline for submissions is April 30, 2018; see the authors’ area:
(If you cannot meet the April 30, 2018 deadline and need a few more days, please submit your draft (marked as a draft) by April 30, 2018, so that we have an overview and can pre-assign reviewers – you will then still have sufficient time to complete your paper.)

2018-02-07 Web site live – starting to invite additional experts for the scientific program committee


(additional suggestions welcome – we are dedicated to support the international community)


Special Session on eXplainable Artificial Intelligence, organized by Jose M. Alonso, Ciro Castiello, Corrado Mencar at the IEEE International Conference on Fuzzy Systems, June 23-26, 2019 (submissions due January 11, 2019)
see: https://sites.google.com/view/xai-fuzzieee2019


Workshop on Explainable Smart Systems (EXSS) at ACM IUI, Tokyo, March 11, 2018

Advances on Explainable Artificial Intelligence as a part of the 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU 2018), Cadiz, Spain, June 11-15, 2018

ODD v5.0 Outlier Detection De-constructed, full-day workshop organized in conjunction with ACM SIGKDD 2018 at KDD 2018 in London, August 20, 2018

Human Level AI – multi-conference on human-level artificial intelligence, Prague, August 22-25, 2018