xxAI 2020 @ ICML 2020
Vienna, July 18, 2020 (held electronically)
Over the years, ML models have steadily grown in complexity, often gaining predictive power at the expense of interpretability (correlation vs. causality). An active research area called explainable AI (or XAI) has emerged with the goal of producing models that are both predictive and understandable by humans. XAI has achieved important successes, such as robust heatmap-based explanations of DNN classifiers. From an application perspective, there is now a need to engage with new scenarios, such as explaining unsupervised and reinforcement learning, and to produce explanations that are optimally structured for human users. In particular, our workshop covers the following topics:
– Explaining beyond DNN classifiers: random forests, unsupervised learning, reinforcement learning
– Explaining beyond heatmaps: structured explanations, Q/A and dialog systems, human-in-the-loop
– Explaining beyond explaining: improving ML models and algorithms, verifying ML, getting insights
XAI has attracted exponentially growing interest in the research community, and awareness of the need to explain ML models has grown in similar proportions in industry and in the sciences. With the sizable XAI research community that has formed, there is now a key opportunity to push towards successful applications. Our hope is that the XXAI workshop can accelerate this process, foster a more systematic use of XAI to improve models in applications, and also help to identify in which ways current XAI methods need to be improved and what kind of theory of XAI is needed.
Go to the workshop page:
http://interpretable-ml.org/icml2020workshop
We thank all the great contributors, and Wojciech Samek for his effort – thank you!
The videos are now available via SlidesLive under "XXAI: Extending Explainable AI Beyond Deep Models and Classifiers":
https://slideslive.com/icml-2020/xxai-extending-explainable-ai-beyond-deep-models-and-classifiers
The papers are available via our workshop homepage:
http://interpretable-ml.org/icml2020workshop/#papers
We are currently setting up a journal special issue.
Additionally, we are organizing a Springer Lecture Notes in Artificial Intelligence volume, see:
https://human-centered.ai/springer-lnai-xxai/