Mini Course Medical AI

“It is remarkable that a science which began with the consideration of games of
chance should have become the most important object of human knowledge”
Pierre Simon de Laplace, 1812.
Winter Term 2021 (2,5 ECTS at the WU Executive Academy)
Artificial intelligence (AI) is increasingly penetrating the medical field. There is no doubt that medical AI will change workflows in the future, but three aspects will be necessary: robustness, re-traceability/explainability, and trustworthiness. Robust AI solutions must be able to handle inaccuracies and missing or incorrect information, and must be able to explain to a medical expert both the outcome and the process by which an algorithm has reached a certain result. Consequently, future medical AI solutions must be technically robust, ethically responsible, and also legally compliant. This Mini Course on Medical AI is an introduction to this core area of health informatics. Students will learn the necessary basics and develop a sensitivity to these issues. Andreas Holzinger has taught this course since 2005 in different versions, variations, and durations, and at various institutions. This is the 3 ECTS crash mini course version for the WU Executive Academy.
This page is valid as of July 21, 2021, 16:00 CEST
Valid for students with effect from August 30, 2021
Goals and Learning Outcomes
A) General:
After completing this mini-course, students should understand some basic principles, concepts, and theories, as well as some basic methods, of Artificial Intelligence (AI) and Machine Learning (ML). The concepts of data, information, and knowledge should be clear, as well as why “information” is important for medicine in general and for decision making specifically.
B) Knowledge and Understanding:
Upon completion of this mini-course, students will understand …
– Why “information” is fundamentally important to medicine as a science of action,
– What the difference is between data, information and knowledge,
– Why the quality of data used for ML is so important,
– What AI and ML (currently) can do but also especially cannot do,
– Why explainable AI (AI that can be interpreted) is becoming increasingly important.
C) Cognitive and personal skills:
Upon completion of this mini-course, students will …
– Understand some selected fundamental concepts of AI,
– Have seen some examples of currently popular medical AI applications,
– Be able to recognize the limitations of statistical machine learning,
– Recognize the importance of re-traceability, explainability, and interpretability.
D) Key skills:
Upon completion of this mini course, students will be able to …
– Discuss Artificial Intelligence with relevant professionals at an elevated level and critically question key decisions,
– Consider where and how AI might be usefully applied in a complementary way to support their work environments and workflows,
– Apply what they have learned to their specific healthcare institutions in a broader context,
– Recognize the limitations of such AI and identify legal, ethical, safety, security, and privacy aspects.
Pre-Module Phase
1st task:
a) Please watch this video (14 minutes)
https://www.youtube.com/watch?v=UuiV0icAlRs
b) and write a summary in your own words, with reference to your present activity and possibly your future one (max. approx. 1000 words – your work will be collected in the course moodle).
2nd task:
a) Please read the publications marked with *) from the Literature (below).
b) Choose one of these publications and write a summary of this self-selected paper in your own words, with regard to your own present or planned future work context (max. approx. 1000 words – your work will be collected in the course moodle).
3rd task:
Describe a future scenario of how you could imagine the use of AI in your field (fictitious – everything is allowed). In particular, describe how your personal way of working could be supported or improved by the use of AI (max. approx. 1000 words – your work will be collected in the course moodle).
Assessment
Pre-Module Tasks: 60 %
Core Module: 30 %
Post-Module Tasks: 10 %
Module 00 – Primer on Probability and Information (optional)
When confronted with decision making, we inherently must deal with a world of uncertainty. The strength of probabilistic machine learning results from its mathematical, and thus computational, mechanisms for combining prior knowledge with incoming new data.
Topic 00: Mathematical Notations
Topic 01: Probability Distribution and Probability Density
Topic 02: Expectation and Expected Utility Theory
Topic 03: Joint Probability and Conditional Probability
Topic 04: Independent and Identically Distributed Data (IIDD)
Topic 05: Bayes and Laplace
Topic 06: Measuring Information: Kullback-Leibler Divergence and Entropy
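Topics 03, 05, and 06 come together in a few lines of Python: Bayes' rule combines a prior belief with new evidence, and entropy and Kullback-Leibler divergence quantify the information involved. This is a minimal sketch only; the prevalence, sensitivity, and specificity values are invented for illustration and are not clinical figures.

```python
import math

# Bayes' rule for a hypothetical diagnostic test.
# All numbers below are invented for illustration, not clinical figures.
prior = 0.01          # assumed disease prevalence: the prior knowledge
sensitivity = 0.95    # P(test positive | disease)
specificity = 0.90    # P(test negative | no disease)

# Marginal probability of a positive test (law of total probability).
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

# Posterior: the prior belief updated by the incoming test result.
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.088

# Topic 06: quantifying the remaining uncertainty and the information gained.
# Shannon entropy (in bits) of the binary posterior belief:
entropy = -(posterior * math.log2(posterior)
            + (1 - posterior) * math.log2(1 - posterior))

# Kullback-Leibler divergence (in bits) from the prior to the posterior:
kl = (posterior * math.log2(posterior / prior)
      + (1 - posterior) * math.log2((1 - posterior) / (1 - prior)))
print(f"entropy = {entropy:.2f} bits, KL(posterior || prior) = {kl:.2f} bits")
```

Even a positive result from this fairly accurate test leaves the disease probability below 10 % because the prior prevalence is so low – exactly the kind of prior-evidence interplay the primer covers.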
Lecture slides full size (3,984 kB): 0-PRIMER-Probability-and-Information-2019-09-18-HOLZINGER-print
Reading for students:
David J.C. MacKay 2003. Information Theory, Inference and Learning Algorithms, Cambridge, Cambridge University Press.
Online available: https://www.inference.org.uk/itprnn/book.html
Slides online available: https://www.inference.org.uk/itprnn/Slides.shtml
An excellent resource is David Poole, Alan Mackworth & Randy Goebel 1998. Computational Intelligence: A Logical Approach, New York, Oxford University Press. A new edition is available, along with excellent student resources online:
https://artint.info/2e/html/ArtInt2e.html
Module 01 – Introduction to Medical AI and Machine Learning for Health Informatics
Topic 01: Success Stories: Towards Human-level performance in Medical AI
Topic 02: Why is health a complex application area?
Topic 03: Probabilistic Learning: The basics of the success stories of topic 01
Topic 04: Automatic Machine Learning (aML)
Topic 05: Interactive Machine Learning (iML)
Lecture slides (4 slides per page, 2 x 2, pdf, 5,813 kB): 1-Medical-AI-HOLZINGER-2020-INTRODUCTION-2×2
Lecture slides full size (pdf, 6,995 kB): 1-Medical-AI-HOLZINGER-2020-INTRODUCTION
Youtube (open this link in a new window to adapt the player to 4:3): https://www.youtube.com/watch?v=nVfqV0wi9lA
Module 02 – From Data to Knowledge Representation
Topic 00 Reflection – follow-up from last lecture
Topic 01 Data: the underlying physics of data
Topic 02 Biomedical data sources: Taxonomy of biomedical data
Topic 03 Data Integration, mapping, fusion, Digression: data augmentation
Topic 04 Knowledge Representation
Topic 05 Biomedical ontologies
Topic 06 Biomedical classifications
Lecture Slides (4 slides on one page, 2 x 2, pdf 9,221 kB) 2-Medical-AI-HOLZINGER-2020-DATA-KNOWLEDGE-REP-2×2
Lecture Slides full size (pdf, 8,850 kB) 2-Medical-AI-HOLZINGER-2020-DATA-KNOWLEDGE-REP
Youtube (open this link in a new window to adapt the player to 4:3): https://www.youtube.com/watch?v=_rrfGTAn7Ck
Module 03 – From Decision Making to Decision Support
Topic 00 Reflection – follow-up from last lecture
Topic 01 Medical Action = Decision Making
Topic 02 Can AI help doctors to make better decisions?
Topic 03 Human Information Processing
Topic 04 Probabilistic Decision Theory
Topic 05 Example: P4 Medicine
Topic 06 Example: Case Based Reasoning
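The probabilistic decision theory of Topic 04 reduces, in its simplest form, to choosing the action with the highest expected utility EU(a) = Σ_s P(s) · U(s, a). The sketch below is a toy example; the probabilities and utilities are invented for illustration and carry no medical meaning.

```python
# Toy decision problem: treat vs. wait, under uncertainty about a disease.
# All numbers are invented for illustration only.
p_disease = 0.3  # current belief that the patient has the disease

# Utilities U(state, action) on an arbitrary 0-100 scale.
utility = {
    ("disease", "treat"): 80,   # treated illness
    ("disease", "wait"):  20,   # untreated illness
    ("healthy", "treat"): 60,   # unnecessary treatment, side effects
    ("healthy", "wait"): 100,   # no illness, no intervention
}

def expected_utility(action):
    """Probability-weighted utility of an action over the two states."""
    return (p_disease * utility[("disease", action)]
            + (1 - p_disease) * utility[("healthy", action)])

for action in ("treat", "wait"):
    print(action, expected_utility(action))

# The rational choice under this model: the action maximising EU(a).
best = max(("treat", "wait"), key=expected_utility)
print("best action:", best)
```

With these made-up numbers, waiting has the higher expected utility; shifting the belief p_disease (for example after a positive test) can flip the decision, which is what links decision theory back to Bayesian updating.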
Lecture Slides 2×2 (pdf, 7,134 kB) 3-Medical-AI-HOLZINGER-2020-DECISION-MAKING-2×2
Lecture Slides full size (pdf, 16,126 kB) 3-Medical-AI-HOLZINGER-2020-DECISION-MAKING
Youtube (open this link in a new window to adapt the player to 4:3): https://www.youtube.com/watch?v=C3V8PQ-bgIM
Module 04 – From Decision Support Systems to Causability
Topic 00 Reflection – follow-up from last lecture
Topic 01: History of Decision Support Systems (DSS) = History of AI
Topic 02: Causality and Decision Making
Topic 03: Medical Communication
Topic 04: Causal Reasoning
Topic 05: Interpretability
Lecture slides 2×2 (pdf, 5,884 kB): 4-Medical-AI-HOLZINGER-2020-DECISION-CAUSABILITY-2×2
Lecture slides full size (pdf, 5,108 kB): 4-Medical-AI-HOLZINGER-2020-DECISION-CAUSABILITY
Youtube (open this link in a new window to adapt the player to 4:3): https://www.youtube.com/watch?v=hpRWERBWlho
Literature:
(* = please read through in preparation for the course)
*Holzinger, A., Malle, B., Saranti, A. & Pfeifer, B. (2021). Towards Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI. Information Fusion, 71, (7), 28-37, doi:10.1016/j.inffus.2021.01.008.
*Mueller, H., Mayrhofer, M.T., Veen, E.-B.V. & Holzinger, A. (2021). The Ten Commandments of Ethical Medical AI. IEEE COMPUTER, 54, (7), 119–123, doi:10.1109/MC.2021.3074263.
*Schneeberger, D., Stoeger, K. & Holzinger, A. (2020). The European legal framework for medical AI. International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Springer LNCS 12279. Cham: Springer, pp. 209–226, doi:10.1007/978-3-030-57321-8-12 – [download paper here]
*Holzinger, A., Carrington, A. & Müller, H. (2020). Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations. KI – Künstliche Intelligenz (German Journal of Artificial intelligence), Special Issue on Interactive Machine Learning, Edited by Kristian Kersting, TU Darmstadt, 34, (2), 193-198, doi:10.1007/s13218-020-00636-z.
Holzinger, A. (2020). Explainable AI and Multi-Modal Causability in Medicine. Wiley i-com Journal of Interactive Media, 19, (3), 171–179, doi:10.1515/icom-2020-0024.
Regitnig, P., Mueller, H. & Holzinger, A. (2020). Expectations of Artificial Intelligence in Pathology. Springer Lecture Notes in Artificial Intelligence LNAI 12090. Cham: Springer, pp. 1-15, doi:10.1007/978-3-030-50402-1-1 [download paper here]
Holzinger, A., Haibe-Kains, B. & Jurisica, I. (2019). Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data. European Journal of Nuclear Medicine and Molecular Imaging, 46, (13), 2722-2730, doi:10.1007/s00259-019-04382-9.
*Holzinger, A., Langs, G., Denk, H., Zatloukal, K. & Mueller, H. (2019). Causability and Explainability of AI in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, doi:10.1002/widm.1312.
Holzinger, A. (2018). From Machine Learning to Explainable AI. 2018 World Symposium on Digital Intelligence for Systems and Machines (IEEE DISA). pp. 55-66, doi:10.1109/DISA.2018.8490530.
Holzinger, A. (2018). Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5.
Holzinger, A. (2017). Introduction to Machine Learning and Knowledge Extraction (MAKE). Machine Learning and Knowledge Extraction, 1, (1), 1-20, doi:10.3390/make1010001.
Holzinger, A., Biemann, C., Pattichis, C. S. & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.
*Holzinger, A. (2016). Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Springer Brain Informatics (BRIN), 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.
*Holzinger, A. (2016). Interactive Machine Learning (iML). Informatik Spektrum, 39, (1), 64-68, doi:10.1007/s00287-015-0941-6.
Jeanquartier, F., Jean-Quartier, C., Cemernek, D. & Holzinger, A. (2016). In silico modeling for tumor growth visualization. BMC Systems Biology, 10, (1), 1-15, doi:10.1186/s12918-016-0318-8.
Girardi, D., Küng, J., Kleiser, R., Sonnberger, M., Csillag, D., Trenkler, J. & Holzinger, A. (2016). Interactive knowledge discovery with the doctor-in-the-loop: a practical example of cerebral aneurysms research. Brain Informatics, 1-11, doi:10.1007/s40708-016-0038-2.
Duerr-Specht, M., Goebel, R. & Holzinger, A. (2015). Medicine and Health Care as a Data Problem: Will Computers become better medical doctors? In: Lecture Notes in Computer Science LNCS 8700. Heidelberg, Berlin, New York: Springer, pp. 21-40, doi:10.1007/978-3-319-16226-3_2.
Holzinger, A., Röcker, C. & Ziefle, M. (2015). From Smart Health to Smart Hospitals. Smart Health: State-of-the-Art and Beyond, Springer Lecture Notes in Computer Science, LNCS 8700. Heidelberg, Berlin: Springer, pp. 1-20, doi:10.1007/978-3-319-16226-3_1.
Holzinger, A. (2014). Trends in Interactive Knowledge Discovery for Personalized Medicine: Cognitive Science meets Machine Learning. IEEE Intelligent Informatics Bulletin, 15, (1), 6-14.
Holzinger, A. (2014). Biomedical Informatics: Discovering Knowledge in Big Data, New York, Springer. http://link.springer.com/book/10.1007/978-3-319-04528-3
Holzinger, A. & Jurisica, I. eds. (2014). Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401, Heidelberg, Berlin: Springer. http://link.springer.com/book/10.1007%2F978-3-662-43968-5
Holzinger, A., Dehmer, M. & Jurisica, I. (2014). Knowledge Discovery and interactive Data Mining in Bioinformatics – State-of-the-Art, future challenges and research directions. BMC Bioinformatics, 15, (S6), I1. http://www.biomedcentral.com/1471-2105/15/S6/I1
Some practical links:
32 Examples of AI in Healthcare (Sam Daley, builtin, US):
https://builtin.com/artificial-intelligence/artificial-intelligence-healthcare
Similar courses on medical AI:
https://www.classcentral.com/course/ai-for-medical-diagnosis-19461
https://mltut247.medium.com/best-ai-courses-for-healthcare-you-should-know-in-2021-92bd59f0fa61
https://online-learning.harvard.edu/subject/artificial-intelligence
About the Lecturer:
Andreas Holzinger promotes a synergistic approach to Human-Centred AI (HCAI) and has pioneered interactive machine learning (iML) with the human-in-the-loop. He promotes an integrated machine learning approach with the goal of augmenting human intelligence with artificial intelligence to help solve problems in health informatics.
Due to rising ethical, social, and legal issues governed by the European Union, future AI-supported systems must be made transparent and re-traceable, thus human-interpretable. Andreas' aim is to explain why a machine decision has been reached, paving the way towards explainable AI and causability, ultimately fostering ethically responsible machine learning, trust, and acceptance of AI.
Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and his Habilitation (second Ph.D.) in Computer Science from Graz University of Technology in 2003. He was Visiting Professor for Machine Learning & Knowledge Extraction in Verona, at RWTH Aachen, at University College London, and at Middlesex University London. Since 2016 he has been Visiting Professor for Machine Learning in Health Informatics at the Faculty of Informatics at Vienna University of Technology. Currently, he is Visiting Professor for explainable AI at the Alberta Machine Intelligence Institute, University of Alberta, Canada.
Andreas Holzinger is head of the Holzinger Group (HCAI-Lab) at the Institute for Medical Informatics/Statistics of the Medical University Graz, and Associate Professor of Applied Computer Science at the Faculty of Computer Science and Biomedical Engineering at Graz University of Technology. He serves as a consultant for the Canadian, US, UK, Swiss, French, Italian, and Dutch governments, for the German Excellence Initiative, and as a national expert for the European Commission. He is on the advisory board of the Artificial Intelligence Strategy AI made in Germany of the German Federal Government and on the advisory board of the Artificial Intelligence Mission Austria 2030.
Personal Homepage: https://www.aholzinger.at
Video for Students: https://youtu.be/lc2hvuh0FwQ
Group Homepage: https://human-centered.ai
Google Scholar: https://scholar.google.com/citations?hl=en&user=BTBd5V4AAAAJ&view_op=list_works&sortby=pubdate
Additional study material:
Course Biomedical Informatics – Discovering Knowledge in (big) data (1 semester – 12 lectures – 3 ECTS):
https://human-centered.ai/biomedical-informatics-big-data/