Mini Projects from Explainable AI
Short (3 min) video on dialog systems as a motivator for this course: https://www.youtube.com/watch?v=PWfdu4k0n2A
> TUG Online 706.046, 20S, 3 SH, 4.5 ECTS, WF 786, 791; WK 921, WF 924
GOAL: In this research-based teaching course you will learn about some principles of what is called "explainable AI" and design, develop and test how humans and machine learning systems can interact and collaborate for effective decision support. You will learn about the differences between explainable AI and explainability and experiment with explanation user-interface frameworks. Hands on! We speak Python!
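To give a first flavour of what "hands on" means here, the sketch below uses the LIME library to attribute a single black-box prediction to its input features. This is only an illustration: the dataset, model and parameter choices are assumptions made for this example, and the explanation frameworks actually used in the course may differ.

```python
# A minimal sketch of post-hoc explanation with LIME -- illustrative only.
# Assumes: pip install lime scikit-learn
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()                        # a standard medical toy dataset
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)                  # the "black box" to be explained

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
)
# Explain one single prediction by fitting a local, interpretable surrogate model
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())                               # top 5 feature contributions
```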
MOTIVATION: Artificial Intelligence (AI) and Machine Learning (ML) demonstrate impressive successes. Particularly deep learning (DL) approaches hold great promise (see the differences between AI/ML/DL here). Unfortunately, the best-performing methods turn out to be non-transparent, so-called "black boxes". Such models have no explicit declarative knowledge representation and therefore have difficulty generating explanatory and contextual structures. This considerably limits the achievement of their full potential in certain application domains. Consequently, in safety-critical systems and domains (e.g. the medical domain) we must raise the questions: "Can we trust these results?" and "Can we explain how and why a result was achieved?" This is not only crucial for user acceptance (in medicine, for example, the ultimate responsibility remains with the human), but has been mandatory since 25 May 2018 due to the European GDPR, which includes a "right to explanation".
RELEVANCE: There is growing industrial demand for machine learning approaches which are not only well performing, but also transparent, interpretable and trustworthy, e.g. in medicine, but also in production (Industry 4.0), robotics, autonomous driving, recommender systems, etc.
BACKGROUND: Methods to reenact the machine decision-making process, and to reproduce and comprehend the learning and knowledge-extraction process, need effective user interfaces. For decision support it is necessary to understand the causality of learned representations. If human intelligence is complemented, and in some cases even overruled, by machine learning, humans must still be able to understand, and above all to interactively influence, the machine decision process – on demand. This requires context awareness and sensemaking to close the gap between human thinking and "machine thinking".
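As a concrete (and deliberately simplified) illustration of the human-in-the-loop idea sketched above, the following Python snippet implements uncertainty sampling: the model repeatedly asks a "human" for the label it is least certain about, then retrains. The dataset, model and simulated oracle are stand-ins chosen for this sketch, not the course's actual iML setup; a real system would replace the oracle line with an interactive user interface.

```python
# A minimal human-in-the-loop sketch (uncertainty sampling) -- illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=10, replace=False))   # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]        # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(5):                                           # five interaction rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    query = pool[int(np.argmin(proba.max(axis=1)))]          # least confident instance
    # In a real iML system the next step would prompt the human expert via a UI;
    # here the oracle is simulated by simply revealing the ground-truth label.
    labeled.append(query)
    pool.remove(query)

print(f"accuracy after human feedback: {model.score(X, y):.3f}")  # sketch-level check
```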
SETTING: In this course, students will have the opportunity to work on mini-projects.
SLIDES (Introduction and Miniprojects): 706.046-AK-explainable-AI-Introduction-MiniProjects-Class-of-2020 (pdf, 2,916 KB)
SYLLABUS and SCHEDULE: 706046-AK-HCI-Syllabus-2020-explainable-AI (pdf, 76 kB)
Thank you for your kind interest!
Last updated by Andreas Holzinger 10.03.2020, 12:00 CET
Some background reading:
Intelligent User Interfaces (IUI) is where Human-Computer Interaction (HCI) meets Artificial Intelligence (AI). AI is often defined as the design of intelligent agents, which is the core essence of Machine Learning (ML). In interactive Machine Learning (iML) these agents can also be humans:
Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Springer Brain Informatics (BRIN), 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.
Online: https://link.springer.com/article/10.1007/s40708-016-0042-6
Holzinger, A. 2016. Interactive Machine Learning (iML). Informatik-Spektrum, 39, (1), 64-68, doi:10.1007/s00287-015-0941-6.
Online: https://link.springer.com/article/10.1007/s00287-015-0941-6
Holzinger, A., et al. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.
Online: https://arxiv.org/abs/1708.01104
Holzinger, A., et al. 2017. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.
Online: https://www.groundai.com/project/what-do-we-need-to-build-explainable-ai-systems-for-the-medical-domain
Holzinger, A. 2018. Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5.
Online: https://link.springer.com/article/10.1007/s00287-018-1102-5
Holzinger, A., et al. 2018. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, doi:10.1007/s10489-018-1361-5.
Online: https://link.springer.com/article/10.1007/s10489-018-1361-5
In this practically oriented course, Software Engineering is seen as a dynamic, interactive and cooperative process which facilitates an optimal mixture of standardization and tailor-made solutions. Here you have the chance to work on real-world problems (in the digital pathology project).
Previous knowledge expected
Interest in experimental Software Engineering in the sense of:
Science is to test crazy ideas – Engineering is to put these ideas into Business.
Interest in cross-disciplinary work, particularly in the HCI-KDD approach: many novel discoveries and insights are found at the intersection of two domains, see: Holzinger, A. 2013. Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together? In: Cuzzocrea, A., Kittl, C., Simos, D. E., Weippl, E. & Xu, L. (eds.) Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127. Heidelberg, Berlin, New York: Springer, pp. 319-328.
General guidelines for the technical report
Holzinger, A. (2010). Process Guide for Students for Interdisciplinary Work in Computer Science/Informatics. Second Edition. Norderstedt: BoD (128 pages, ISBN 978-3-8423-2457-2)
also available at Fachbibliothek Inffeldgasse.
Technical report templates
Please use the following templates for your scientific paper:
(new) A general LaTeX template can be found on Overleaf: https://www.overleaf.com/4525628ngbpmv
Further information and templates available at: Springer Lecture Notes in Computer Science (LNCS)
Review template 2020
REVIEW-TEMPLATE-2020-XXXX (Word-doc 342 kB)
REVIEW-TEMPLATE-2020-XXXX (pdf, 143 kB)