xAI 2020

University College Dublin (IE), August 26-29, 2020 (held electronically)

CD-MAKE 2020 Workshop supported by IFIP and Springer/Nature
Co-organized by the xAI-Lab, Alberta Machine Intelligence Institute
in the context of the 4th CD-MAKE conference and the
15th International Conference on Availability, Reliability and Security ARES 2020

see the published papers from 2019: https://link.springer.com/book/10.1007/978-3-030-29726-8

For all submission details and deadlines please see the main conference webpage: https://cd-make.net
NOTE: Due to the current international COVID-19 situation, the submission deadlines will be extended.
The Springer Lecture Notes will be produced in any case, and the conference itself will take place electronically – so scientific communication will be ensured in any case. Take care and stay healthy!

This page is current as of 18.05.2020 10:30 MST
For inquiries please contact a.holzinger AT human-centered.ai

HISTORY of this workshop:

After the success of our 1st international workshop on explainable AI in Hamburg 2018, see:
https://2018.cd-make.net/special-sessions/make-explainable-ai/index.html

with our output in Springer Lecture Notes in Computer Science LNCS 11015
https://link.springer.com/book/10.1007/978-3-319-99740-7

and the 2nd international workshop on explainable AI in Canterbury 2019
https://human-centered.ai/make-explainable-artificial-intelligence-2019

we organize the 3rd international workshop on explainable AI in Dublin 2020
https://human-centered.ai/explainable-ai-2020

GOAL of this workshop:

In this cross-disciplinary workshop we aim to bring together international experts from across domains who are interested in making machine decisions transparent, interpretable, reproducible, replicable, re-traceable, re-enactive, comprehensible and explainable, towards ethically responsible AI/machine learning.

SUBMISSION to this workshop:

All submissions will be peer-reviewed by three members of our international scientific committee. Accepted papers will be presented at the workshop orally or as a poster and published in the IFIP CD-MAKE volume of the Springer Lecture Notes in Computer Science (LNCS); see LNCS 11015 as an example.

Additionally, there is the opportunity to submit to our thematic collection “Explainable AI in Medical Informatics and Decision Making” in Springer/Nature BMC Medical Informatics and Decision Making (MIDM), SCI impact factor 2.134:
https://human-centered.ai/special-issue-explainable-ai-medical-informatics-decision-making/

INSTRUCTIONS:

See our main conference page: https://cd-make.net

BACKGROUND on this workshop:

Explainable AI is not a new field. Actually, the problem of explainability is as old as AI itself and maybe even a result of it. While early expert systems consisted of handcrafted knowledge, which enabled reasoning over at least a narrowly well-defined domain, such systems had no learning capabilities and were poor at handling uncertainty when (trying to) solve real-world problems. This was one reason for the bitterly cold AI winter of the 1980s. The big success of current AI solutions and ML algorithms is due to the practical capabilities of statistical machine learning gained in 2010 and later. Despite this practical success, even at a “super-human” level on certain problems, their effectiveness is still limited by their inability to “explain” their decisions – on demand – in a human-understandable way. Even if we understand the underlying mathematical theories, it is complicated and often impossible to get insight into the internal workings of the models, algorithms and tools, and to explain why a result was achieved. However, the responsibility remains with the human expert, which affects trust in and acceptance of such systems. Future AI needs contextual adaptation, i.e. systems that help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence (human-in-the-loop; see the Obama interview).

TOPICS of this workshop:

In line with the general theme of the CD-MAKE conference of augmenting human intelligence with artificial intelligence – science is to test crazy ideas, engineering is to bring these ideas into business – we foster cross-disciplinary and interdisciplinary work in order to bring together experts from different fields, e.g. computer science, psychology, sociology, philosophy, law, business, … experts who would otherwise possibly never meet. This cross-domain integration and appraisal of different fields of science and industry shall provide an atmosphere that fosters different perspectives and opinions; it will offer a platform for novel, crazy ideas and a fresh look at the methodologies needed to put these ideas into business.

Topics include but are not limited to (alphabetically – not prioritized):

  • Abstraction of human explanations (“How can xAI learn from human explanation strategies?”)
  • Acceptance (“How to ensure acceptance of AI/ML among end users?”)
  • Accountability and responsibility (“Who is to blame if something goes wrong?”)
  • Action Influence Graphs
  • Active machine learning algorithm design for interpretability purposes
  • Adversarial attack detection, explanation and defense (“How can we interpret adversarial examples?”)
  • Adaptive personal xAI systems (“To whom, when, how to provide explanations?”)
  • Adaptable explanation interfaces (“How can we adapt xAI-interfaces to the needs, demands, requirements of end-users and domain experts?”)
  • Affective computing for successful human-AI interaction  and human-robot interaction
  • Argumentation theories of explanations
  • Artificial advice givers
  • Bayesian rule lists
  • Bayesian modeling and optimization (“How to design efficient methods for learning interpretable models?”)
  • Bias and fairness in explainable AI (“How to avoid bias in machine learning applications?”)
  • Bridging the gap between humans and machines (concepts, methods, tools, …)
  • Causal learning, causal discovery, causal reasoning, causal explanations, and causal inference
  • Causality and causability research (“measuring understanding”, benchmarking, evaluation of interpretable systems)
  • Cognitive issues of explanation and understanding (“understanding understanding”)
  • Combination of statistical learning approaches with large knowledge repositories (ontologies, terminologies)
  • Combination of deep learning approaches with traditional AI approaches
  • Comparison of human intelligence vs. artificial intelligence (HCI — KDD)
  • Computational behavioural science (“How are people thinking, judging, making decisions – and explaining it?”)
  • Constraints-based explanations
  • Counterfactual explanations (“What did not happen?”, “How can we provide counterfactual explanations?” – see the minimal sketch after this list)
  • Contrastive explanation methods (CEM), e.g. in criminology or medical diagnosis
  • Cyber security, cyber defense and malicious use of adversarial examples
  • Data dredging explainability and causal inference
  • Decision making and decision support systems (“Is a human-like decision good enough?”)
  • Dialogue systems for enhanced human-AI interaction
  • Emotional intelligence (“Emotion AI”) and emotional UI
  • Ethical aspects of AI in general and human-AI interaction in particular
  • Evaluation criteria
  • Explanation agents and recommender systems
  • Explanatory user interfaces and Human-Computer Interaction (HCI) for explainable AI
  • Explainable reinforcement learning
  • Explainable and verifiable activity recognition
  • Explaining agent behaviour (“How to know if the agent is going to make a mistake and when?”)
  • Explaining robot behaviour (“Why did you take action x in state s?”)
  • Fairness, accountability and trust (“How to ensure trust in AI?”)
  • Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability
  • Frameworks for reasoning about causality
  • Gradient based interpretability to understand data sensitivity
  • Graphical causal inference and graphical models for explanation and causality
  • Ground truth
  • Group recommender systems
  • Human-AI interaction and intelligent interfaces
  • Human-AI teaming for ensuring trustworthy AI systems
  • Human-centered AI
  • Human-in-the-loop learning approaches, methodologies, tools and systems
  • Human rights vs. robot rights
  • Implicit knowledge elicitation
  • Industrial applications of xAI, e.g. in medicine, autonomous driving, production, finance, ambient assisted living, etc.
  • Integration of deep learning approaches with grammars of graphical models
  • Interactive machine learning with a human-in-the-loop
  • Interactive machine learning with (many) humans-in-the-loop (crowd intelligence)
  • Interpretability in ranking algorithms (“How to explain ranking algorithms, e.g. patient ranking in health, in human-interpretable ways?”)
  • Interpretability in reinforcement learning
  • Interpretable representation learning (“How to make sense of data assigned to similar representations?”)
  • Kandinsky Patterns experiments and extensions
  • Legal aspects of AI/ML (“Who is to blame if an error occurs?”)
  • Metrics for evaluation of the quality of explanations
  • Misinformation in Social Media, Ground truth evaluation and explanation
  • Model explainability, quality and provenance
  • Moral principles and moral dilemmas of current and future AI
  • Natural Language Argumentation interfaces for explanation generation
  • Natural Language generation for explanatory models
  • Non-IID learning models, algorithms, analytics, recommenders
  • Novel intelligent future user interfaces (e.g. affective mobile interfaces)
  • Novel methods, algorithms, tools, procedures for supporting explainability in the AI/ML pipeline
  • Personalized xAI
  • Philosophical approaches to explainability, theories of mind (“When has enough been explained? Is there a degree of saturation?”)
  • Policy explanations to humans (“How to explain why the next step is the best action to select?”)
  • Proof-of-concepts and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes
  • Privacy, surveillance, control and agency
  • Psychology of human concept learning and transfer to machine learning
  • Python for nerds (Python tricks of the trade – relevant for explainable AI)
  • Quality of explanations and how to measure quality of explanations
  • Rendering of reasoning processes
  • Reproducibility, replicability, retraceability, reenactivity
  • Self-explanatory agents and decision support systems
  • Similarity measures for xAI
  • Social implications of AI (“What AI impacts”), e.g. labour trends, human-human interaction, machine-machine interaction
  • Soft Decision Trees (SDT)
  • Spartanic approaches to explanations (“What is the simplest explanation?”)
  • Structural causal models (SCM)
  • Theoretical approaches of explainability (“What makes a good explanation?”)
  • Theories of explainable/interpretable models
  • Tools for model understanding (diagnostic, debugging, introspection, visualization, …)
  • Transparent reasoning
  • Trustworthy human-AI teaming under uncertainties
  • Understanding understanding
  • Understanding Markov decision processes and partially observable Markov decision processes
  • Usability of Human-AI interfaces
  • Visualizing learned representations
  • Web- and mobile-based cooperative intelligent information systems and tools
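
To make at least one of the topics above concrete – counterfactual explanations – the following is a minimal, purely illustrative sketch. It assumes scikit-learn and NumPy are installed, uses a standard toy dataset, and exploits the fact that for a linear model the smallest single-feature change that crosses the decision boundary can be computed in closed form; it is not an official baseline of this workshop.

```python
# Illustrative sketch only: a closed-form counterfactual for a *linear*
# classifier ("which single feature change would flip this prediction?").
# The dataset, model and overshoot factor are placeholder choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                   # the instance to be explained
logit = clf.decision_function([x])[0]      # signed distance to the boundary
w = clf.coef_[0]

# For a linear model, moving the most influential feature just across the
# decision boundary yields a minimal single-feature counterfactual.
i = int(np.argmax(np.abs(w)))
delta = -(logit / w[i]) * 1.01             # overshoot the boundary slightly
counterfactual = x.copy()
counterfactual[i] += delta

print(f"original prediction:       {clf.predict([x])[0]}")
print(f"counterfactual prediction: {clf.predict([counterfactual])[0]}")
print(f"change feature {i} by {delta:+.2f} standard deviations")
```

For non-linear models, counterfactual search typically becomes an optimization problem (e.g. minimizing the distance to the original instance subject to a changed prediction) – exactly the kind of method this workshop invites.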

MOTIVATION for this workshop:

The grand goal of future explainable AI is to make results understandable and transparent and to answer questions of how and why a result was achieved. In fact: “Can we explain how and why a specific result was achieved by an algorithm?” In the future it will be essential not only to answer the question “Which of these animals is a cat?”, but also to answer “Why is it a cat?” [YouTube video] – “What are the underlying explanatory facts that led the machine learning algorithm to make this decision?” This highly relevant emerging area is important for all application areas, ranging from health informatics [1] to cyber defense [2], [3]. A particular focus is on novel Human-Computer Interaction and intelligent user interfaces for interactive machine learning [4].

[1] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.
[2] David Gunning (2016)  DARPA program on explainable artificial intelligence
[3] Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger (2018). Can we trust Machine Learning Results? Artificial Intelligence in Safety-Critical decision Support. ERCIM News, 112, (1), 42-43.
[4] Todd Kulesza, Margaret Burnett, Weng-Keen Wong & Simone Stumpf (2015). Principles of explanatory debugging to personalize interactive machine learning. Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI 2015), 2015 Atlanta. ACM, 126-137, doi:10.1145/2678025.2701399.
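
To make the “Why is it a cat?” question above a bit more tangible, here is a minimal sketch of vanilla gradient saliency – the gradient of the predicted class score with respect to the input pixels. The tiny untrained network and random input are placeholders only (a real use case would load a trained model and an actual image); PyTorch is assumed to be available.

```python
import torch
import torch.nn as nn

# Stand-in for a trained image classifier; in practice load trained weights.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                     # two classes: "cat" vs. "not cat"
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # placeholder "photo"
logits = model(image)
predicted = int(logits.argmax(dim=1))

# Vanilla saliency: gradient of the predicted-class score w.r.t. the input.
# Pixels with large absolute gradient are those the decision is most
# sensitive to -- a first, crude answer to "why is it a cat?".
logits[0, predicted].backward()
saliency = image.grad.abs().max(dim=1).values           # shape (1, 64, 64)

print("predicted class:", predicted)
print("most salient pixel (flat index):", int(saliency.view(-1).argmax()))
```

More refined attribution methods (e.g. integrated gradients, layer-wise relevance propagation) and, importantly, the question of whether such heat maps actually help a human understand a decision – causability – are among the topics listed above.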

Example: One motivation is the European General Data Protection Regulation (GDPR, together with ISO/IEC 27001), which became enforceable on May 25, 2018 and affects practically all machine learning and artificial intelligence applied to business. For example, it will be difficult to use black-box approaches in certain professional business applications, because they are not re-traceable and are rarely able to explain on demand why a decision has been made.

Note: The GDPR replaces the Data Protection Directive 95/46/EC of 1995. The regulation was adopted on 27 April 2016 and became enforceable on 25 May 2018 after a two-year transition period; unlike a directive, it does not require national governments to pass any enabling legislation and is thus directly binding – which affects practically all data-driven businesses and particularly machine learning and AI technology.

WORKSHOP ORGANIZATION COMMITTEE (in alphabetical order):

Randy GOEBEL, University of Alberta, Edmonton, CA (workshop co-chair)
Andreas HOLZINGER, University of Alberta, Edmonton, CA, and Medical University Graz, AT  (workshop co-chair)
Peter KIESEBERG, University of Applied Sciences, St.Poelten, AT
Freddy LECUE, Thales, Montreal, CA, and INRIA, Sophia Antipolis, FR
Luca LONGO, Knowledge & Data Engineering Group, Trinity College, Dublin, IE

Inquiries please directly to a.holzinger AT human-centered.ai

SCIENTIFIC COMMITTEE (in alphabetical order):

in progress, see also the main scientific committee:
https://cd-make.net/committees

Note: All committee members will also be listed for the main conference, so that they are a) included in the EasyChair conference management system and b) named in the front matter of the Springer LNCS (see the PDF from last year here).

Jose Maria ALONSO, Centro Singular de Investigación en Tecnoloxías da Información, CiTiUS, University of Santiago de Compostela, ES
Christian BAUCKHAGE, Fraunhofer Institute Intelligent Analysis, IAIS, Sankt Augustin, and University of Bonn, DE
Vaishak BELLE, Belle Lab, Centre for Intelligent Systems and their Applications, School of Informatics, University of Edinburgh, UK
Frenay BENOIT, Universite de Namur, BE
Enrico BERTINI, New York University, Tandon School of Engineering, US
Tarek R. BESOLD, Alpha Health AI Lab, Trustworthy AI leader, Telefonica Barcelona, ES
Guido BOLOGNA, Computer Vision and Multimedia Lab, Université de Genève, Geneva, CH
Federico CABITZA,  Università degli Studi di Milano-Bicocca, DISCO, Milano, IT
Ajay CHANDER, Computer Science Department, Stanford University and Fujitsu Labs of America, US
David EVANS, Computer Science Department, University of Virginia, US
Aldo FAISAL, Department of Computing, Brain and Behaviour Lab, Imperial College London, UK
Bryce GOODMAN, Oxford Internet Institute and San Francisco Bay Area, CA, US
Hani HAGRAS, Computational Intelligence Centre, School of Computer Science & Electronic Engineering, University of Essex, UK
Barbara HAMMER, Machine Learning Group, Bielefeld University, DE
Pim HASELAGER, Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
Shunjun LI, Cyber Security Group, University of Kent, Canterbury, UK
Brian Y. LIM, National University of Singapore, SG
Tim MILLER, School of Computing and Information Systems, The University of Melbourne, AU
Fabio MERCORIO, University of Milano-Bicocca, CRISP Research Centre, Milano, IT
Huamin QU, Human-Computer Interaction Group HKUST VIS, Hong Kong University of Science & Technology, CN
Daniele MAGAZZENI, Trusted Autonomous Systems Hub, King’s College London, UK
Stephen K. REED, Center for Research in Mathematics and Science Education, San Diego State University, US
Marco Tulio RIBEIRO, Guestrin Group, University of Washington, Seattle, WA, US
Brian RUTTENBERG, Charles River Analytics, Cambridge, MA, US
Wojciech SAMEK, Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, DE
Gerhard SCHURZ, Düsseldorf Center for Logic and Philosophy of Science, University Düsseldorf, DE
Marco SCUTARI, Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, Lugano, CH and Department of Statistics, University of Oxford, UK
Sameer SINGH, University of California UCI, Irvine, CA, US
Alison SMITH, University of Maryland, MD, US
Mohan SRIDHARAN, University of Auckland, NZ
Simone STUMPF,  City, University London, UK
Ramya Malur SRINIVASAN, Fujitsu Labs of America, Sunnyvale, CA, US
Andrea VEDALDI, Visual Geometry Group, University of Oxford, UK
Janusz WOJTUSIAK, Machine Learning and Inference Lab, George Mason University, Fairfax, US
Jianlong ZHOU, Faculty of Engineering and Information Technology (FEIT), University of Technology, Sydney, AU

NEWS:

2019-10-25 We are starting the preparation for the ARES 2020 and CD-MAKE 2020 conferences and for this workshop on explainable AI, and we begin to consolidate our scientific board and to invite new international colleagues. We have set up a LinkedIn group for those interested in the main conference generally and this workshop specifically, see: https://www.linkedin.com/groups/8831071/

2019-09-02 Remember that the Springer Lecture Notes are available to you as a participant for free via this link: https://cd-make.net/proceedings/

2019-09-01 We had a great conference; thanks to each and every one of you for taking part, and see you all again in Dublin (Ireland) from 26 August to 29 August 2020.

2019-05-01 We are happy to confirm our keynote speaker: Janet BASTIMAN from MMC London, UK. Janet studied at Oxford University and received her PhD from Sussex University at the Centre for Computational Neuroscience and Robotics. Janet’s career has focused on industry, where she has worked for large multinationals and smaller enterprises as Chief Technical Officer, and she continues to champion new technologies. She co-founded the Tech Women London meetup group and is treasurer of the IEEE UK STEM committee. Since 2018 she has also been a judge for Awards.AI, showcasing the best UK AI start-ups. Her interests are predominantly in combinatorial techniques in computer vision and the intersection of machine learning with biological inspiration.

2019-04-21 We are happy to confirm our keynote speaker: Yoichi HAYASHI from Meiji University Tokyo, Japan. Yoichi received the Dr. Eng. degree in systems engineering from Tokyo University of Science, Tokyo, and in 1986 he joined the Computer Science Department of Ibaraki University, Japan. Since 1996, he has been a Full Professor at the Computer Science Department, Meiji University, Tokyo. He has also been a Visiting Professor at the University of Alabama in Birmingham, USA, and the University of Canterbury in New Zealand and has authored over 230 published computer science papers. His research interests include deep learning (DBN, CNN), especially the black box nature of deep neural networks and shallow neural networks, transparency, interpretability and explainability of deep neural networks, XAI, rule extraction, high-performance classifiers, data mining, big data analytics, and medical informatics. He has been the Action Editor of Neural Networks and the Associate Editor of IEEE Transactions on Fuzzy Systems and Artificial Intelligence in Medicine. Overall, he has served as an associate editor, guest editor, or reviewer for 50 academic journals. He has been a senior member of the IEEE since 2000.

2019-03-01 We are happy to welcome our keynote speaker: Wojciech SAMEK from the Fraunhofer Heinrich Hertz Institute in Berlin. Wojciech is head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He studied Computer Science at Humboldt University of Berlin as a scholar of the German National Academic Foundation, and received his PhD in Machine Learning from the Technical University of Berlin in 2014. He was a visiting researcher at NASA Ames Research Center, Mountain View, CA, and a PhD Fellow at the Bernstein Center for Computational Neuroscience Berlin. He was co-organizer of workshops and tutorials about interpretable machine learning at various conferences, including CVPR, NIPS, ICASSP, MICCAI and ICIP. He is part of the Focus Group on AI for Health, a world-wide initiative led by the ITU and WHO on the application of machine learning technology to the medical domain. He is associated with the Berlin Big Data Center and the Berlin Center of Machine Learning and is a member of the editorial board of Digital Signal Processing and PLOS ONE. He has co-authored more than 90 peer-reviewed papers, predominantly in the areas of deep learning, interpretable machine learning, neural network compression, robust signal processing and computer vision.

2019-01-18 We update all information, send the call for papers out and set up the EasyChair system

2018-12-24 We wish all our supporters, friends and colleagues a MERRY CHRISTMAS and a HAPPY 2019!

2018-10-20 Background Image with friendly permission of Michael D. BECKWITH

Canterbury Cathedral was founded in 597 and completely rebuilt between 1070 and 1077.

2018-10-14 Starting to confirm previous experts and inviting new experts to the scientific committee

2018-10-10 Official go from the Springer/Nature BMC journal office for the special issue

2018-08-30 Thank you all for your participation and support;
we hope to see you all again at the end of August 2019 in Canterbury, Kent, UK

HISTORIC NEWS:

2018-08-30 The introduction is available here (preprint, pdf, 835kB):
[GOEBEL et al (2018) Explainable-AI-the-new-42]
The official paper is available via SpringerLink: Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger (2018). Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, 295-303, doi:10.1007/978-3-319-99740-7_21.

2018-08-30 The Slides of Randy GOEBEL are available here (with friendly permission of Randy):
https://human-centered.ai/2018/08/30/explainable-ai-session-keynote-randy-goebel/

2018-08-27 Springer Lecture Notes are available, see:
https://human-centered.ai/2018/08/27/machine-learning-and-knowledge-extraction-springer-volume-2/

2018-05-30 Our Program is available via:
https://www.ares-conference.eu/agenda/

2018-05-10 Randy GOEBEL from the Alberta Machine Intelligence Institute  has agreed to be our session Keynote Speaker.

2018-05-02 Due to the international Workers’ Day we extend the official deadline to May 7, 2018 to enable a stress-free submission – please enjoy your holidays!

2018-03-21 Please note the deadline for submissions is April 30, 2018, see the authors area:
https://cd-make.net/authors-area/important-dates
(In case you cannot meet the April 30, 2018 deadline and need a few more days, please submit your draft (and indicate it as a draft) by April 30, 2018, so that we have an overview and can pre-assign reviewers – you will then still have sufficient time to complete your paper.)

2018-02-07 Web site live – starting to invite additional experts for the scientific program committee

RELATED EVENTS:

We are dedicated to support the international community – additional suggestions welcome:

2020 ETMLP 2020 International Workshop on Explainability for Trustworthy ML Pipelines
Co-located with EDBT 2020 (30 March 2020, Copenhagen, Denmark) https://europe.naverlabs.com/etmlp/

2019 ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models, November 2, 2019
http://xai.unist.ac.kr/workshop/2019/

VISxAI 2019 2nd Workshop on Visualization for AI Explainability, October 21, 2019 at IEEE VIS, Vancouver, Canada
https://visxai.io/
NL4XAI2019: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Tokyo, Japan, October 29-November 1, 2019
https://sites.google.com/view/nl4xai2019

CVPR-19 Workshop on Explainable AI, June 16, 2019, Long Beach, California, US
https://explainai.net/

IJCAI 2019 Workshop on Explainable Artificial Intelligence, August 11, 2019, Macau, China
https://sites.google.com/view/xai2019/home

Special Session on eXplainable Artificial Intelligence, organized by Jose M. Alonso, Ciro Castiello, Corrado Mencar at the IEEE International Conference on Fuzzy Systems, June 23-26, 2019 (submissions due January 11, 2019)
see: https://sites.google.com/view/xai-fuzzieee2019

Workshop on Explainable Smart Systems (EXSS) at ACM IUI, Tokyo, March 11, 2018

Advances on Explainable Artificial Intelligence as a part of the 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU 2018), Cadiz, Spain, June 11-15, 2018

ODD v5.0 Outlier Detection De-constructed full-day workshop, organized in conjunction with ACM SIGKDD 2018 at KDD 2018 in London, August 20, 2018

Human Level AI – multi-conference on human-level artificial intelligence, Prague, August 22-25, 2018