This project will create an openQKD testbed for quantum communication, which is highly relevant for future AI and machine learning.
Author Archive for: Andreas Holzinger
About Andreas Holzinger
Andreas Holzinger promotes a synergistic approach to Human-Centred AI (HCAI) and has pioneered interactive machine learning (iML) with the human-in-the-loop. He promotes an integrated machine learning approach with the goal of augmenting human intelligence with artificial intelligence to help solve problems in health informatics.
Due to the rising ethical, social and legal issues governed by the European Union, future AI-supported systems must be made transparent and re-traceable, and thus human-interpretable. Andreas’ aim is to explain why a machine decision has been reached, paving the way towards explainable AI and Causability, and ultimately fostering ethically responsible machine learning, trust and acceptance of AI.
Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and his Habilitation (second Ph.D.) in Computer Science from Graz University of Technology in 2003. Andreas was Visiting Professor for Machine Learning & Knowledge Extraction in Verona, at RWTH Aachen, University College London and Middlesex University London. Since 2016 Andreas has been Visiting Professor for Machine Learning in Health Informatics at the Faculty of Informatics at Vienna University of Technology. Currently, Andreas is Visiting Professor for explainable AI at the Alberta Machine Intelligence Institute, University of Alberta, Canada.
Entries by Andreas Holzinger
The Human-Centered AI Lab (HCAI) invites the international machine learning community to a challenge on explainable AI and towards IQ-Tests for machines
Springer Lecture Notes in Artificial Intelligence LNAI 9605 Machine Learning for Health Informatics exceeded 80,000 downloads
Andreas Holzinger gave a talk on 26.07.2019 in Edmonton for the faculty members of the University of Alberta Computing Science Faculty
On June 6, 2019, 10:00–16:00, we are organizing a small symposium on AI/machine learning for digital pathology in Graz, Austria
First Austrian IFIP Forum “AI and future society: The third wave of AI”, which takes place from Wednesday, May 8th to Thursday, May 9th, 2019, in 1030 Vienna, Radetzkystraße 2, Festsaal of the BMVIT.
Effective (future) human-AI interaction must take into account a context-specific mapping between explainable AI and human understanding.
In our recent paper we define the notion of causability, which is different from explainability in that causability is a property of a person, while explainability is a property of a system!
There are many different machine learning algorithms for a given problem, but which one should you choose to solve a practical problem? Comparing learning algorithms is very difficult and highly dependent on the quality of the data!
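One common way to compare learning algorithms is k-fold cross-validation: average each algorithm's held-out accuracy over k train/test splits, then compare the averages. The toy sketch below (pure standard-library Python; all function names, the two trivial classifiers, and the synthetic 1-D data are illustrative assumptions, not anything from the original text) also adds label noise to show how the apparent ranking depends on data quality:

```python
import random

def majority_classifier(train, test):
    # Baseline: predict the most frequent training label for every test point.
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return [majority for _ in test]

def nearest_centroid_classifier(train, test):
    # Predict the label whose class mean (centroid) is closest to the 1-D feature.
    centroids = {}
    for lbl in set(y for _, y in train):
        xs = [x for x, y in train if y == lbl]
        centroids[lbl] = sum(xs) / len(xs)
    return [min(centroids, key=lambda l: abs(x - centroids[l])) for x, _ in test]

def cross_val_accuracy(data, clf, k=5):
    # Simple k-fold cross-validation: average accuracy over k held-out folds.
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        preds = clf(train, test)
        accs.append(sum(p == y for p, (_, y) in zip(preds, test)) / len(test))
    return sum(accs) / k

random.seed(0)

def make_data(noise):
    # Two 1-D Gaussian classes centred at 0 and 3; `noise` = fraction of flipped labels.
    data = [(random.gauss(0, 1), 0) for _ in range(100)] + \
           [(random.gauss(3, 1), 1) for _ in range(100)]
    random.shuffle(data)
    return [(x, 1 - y) if random.random() < noise else (x, y) for x, y in data]

for noise in (0.0, 0.4):
    data = make_data(noise)
    print(f"noise={noise}: "
          f"centroid={cross_val_accuracy(data, nearest_centroid_classifier):.2f}, "
          f"majority={cross_val_accuracy(data, majority_classifier):.2f}")
```

With clean labels the centroid classifier clearly beats the majority baseline; at 40% label noise the gap shrinks sharply, illustrating the point that such comparisons are highly dependent on data quality.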