Posts

Ten Commandments for Human-AI interaction – Which are the most important?

In the following, we present “10 Commandments” for human-AI interaction and ask our colleagues from the international AI/ML community to comment on them. We will collect the responses and present them openly to the international research community.

FWF Explainable AI project P-32554-N successfully started

This basic research project will contribute novel results, algorithms and tools to the international AI and machine learning community.

Kandinsky Challenge: IQ-Test for Machines is online!

The Human-Centered AI Lab (HCAI) invites the international machine learning community to a challenge on explainable AI, working towards IQ tests for machines.

Talk on Explainable AI

Andreas Holzinger gave a talk on 26.07.2019 in Edmonton for faculty members of the Department of Computing Science at the University of Alberta.

First Austrian IFIP Forum “AI and future society: The third wave of AI” (May 8-9, 2019), Vienna

The First Austrian IFIP Forum “AI and future society: The third wave of AI” takes place from Wednesday, May 8 to Thursday, May 9, 2019, in the Festsaal of the BMVIT, Radetzkystraße 2, 1030 Vienna.

Interactive Machine Learning: Experimental Evidence for the human-in-the-loop

Recent advances in automatic machine learning (aML) allow solving problems without any human intervention, which is excellent in certain domains, e.g. in autonomous cars, where we want to exclude the human from the loop and want fully automatic learning. However, sometimes a human-in-the-loop can be beneficial, particularly in solving computationally hard problems. We provide new experimental insights [1] on how we can improve computational intelligence by complementing it with human intelligence in an interactive machine learning (iML) approach.

For this purpose, an Ant Colony Optimization (ACO) framework was used, because it fosters multi-agent approaches with human agents in the loop. We propose a unification of human intelligence and interaction skills with the computational power of an artificial system. The ACO framework is applied in a case study on the Traveling Salesman Problem, chosen because of its many practical implications, e.g. in the medical domain; we used ACO because it is one of the best-performing algorithms in many applied intelligence problems.

For the evaluation we used gamification: we implemented a snake-like game called Traveling Snakesman with the MAX-MIN Ant System (MMAS) running in the background. We extended the MMAS algorithm in such a way that the human can directly interact with and influence the ants. This is done by “traveling” with the snake across the graph: each time the human travels over an ant, the current pheromone value of the edge is multiplied by 5. This manipulation affects the ants’ behavior, because the probability that this edge is taken by an ant increases.

The results show that humans performing one tour through the graph have a significant impact on the shortest path found by the MMAS. Consequently, our experiment demonstrates that, in our case, human intelligence can positively influence machine intelligence. To the best of our knowledge this is the first study of its kind, and it provides a wonderful experimental platform for explainable AI.
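To make the interaction mechanism concrete, here is a minimal, hypothetical Python sketch (not the original Traveling Snakesman code): an MMAS-style pheromone table in which a human-traversed edge has its pheromone multiplied by 5, so that the probabilistic edge choice of the ants is biased towards it. All names and parameters (alpha, beta, the initial pheromone, the upper bound) are illustrative assumptions; only the factor 5 is taken from the description above.

```python
import random

class ToyMMAS:
    """Toy MAX-MIN Ant System state: one pheromone value per edge.

    Illustrative sketch only -- parameter names (alpha, beta, tau_max)
    follow generic ACO conventions, not the authors' implementation.
    `distances` must contain an entry for every (current, j) pair used.
    """

    def __init__(self, distances, tau_init=0.1, tau_max=1.0):
        self.distances = distances                    # dict: (i, j) -> edge length
        self.tau = {e: tau_init for e in distances}   # pheromone per edge
        self.tau_max = tau_max

    def human_boost(self, edge, factor=5.0):
        """Human 'travels' over an edge: its pheromone is multiplied by 5,
        clamped to the MMAS upper bound."""
        self.tau[edge] = min(self.tau[edge] * factor, self.tau_max)

    def choose_next(self, current, unvisited, alpha=1.0, beta=2.0):
        """Standard probabilistic ACO edge choice; boosted edges become more likely."""
        weights = [
            self.tau[(current, j)] ** alpha
            * (1.0 / self.distances[(current, j)]) ** beta
            for j in unvisited
        ]
        r, acc = random.random() * sum(weights), 0.0
        for j, w in zip(unvisited, weights):
            acc += w
            if acc >= r:
                return j
        return unvisited[-1]
```

After a call such as `human_boost((a, b))`, the weight of edge (a, b) inside `choose_next` grows roughly fivefold (for alpha = 1), which is exactly the bias towards human-chosen edges described above.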

[1] Holzinger, A. et al. (2018). Interactive machine learning: experimental evidence for the human in the algorithmic loop. Springer/Nature: Applied Intelligence, doi:10.1007/s10489-018-1361-5.

Read the full article here:
https://link.springer.com/article/10.1007/s10489-018-1361-5

AI, explain yourself!

“It’s time for AI to move out its adolescent, game-playing phase and take seriously the notions of quality and reliability.”

There is an interesting commentary with interviews by Don MONROE in the recent Communications of the ACM, November 2018, Volume 61, Number 11, Pages 11-13.

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment (only). Often, however, the “reasoning” behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI “explainable” to humans—for example, designers who can improve it or let users better know when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.

Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
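As a toy illustration of such low-stakes explanations (a hypothetical sketch, not the API of any particular recommender), a suggestion can be justified either by attributes shared with previously liked items or by the behaviour of similar users:

```python
def explain_recommendation(candidate, liked_items, attributes):
    """Return a simple attribute-based explanation of the kind recommender
    systems display ('because it shares X with items you liked').

    `attributes` maps item -> set of tags; all names here are hypothetical.
    """
    shared = set()
    for liked in liked_items:
        shared |= attributes[candidate] & attributes[liked]
    if shared:
        return f"Recommended because it shares {sorted(shared)} with items you liked."
    return "Recommended because users with a similar history also chose it."

attributes = {
    "Book A": {"machine learning", "explainability"},
    "Book B": {"machine learning", "health informatics"},
    "Book C": {"cooking"},
}
print(explain_recommendation("Book B", ["Book A"], attributes))
# -> Recommended because it shares ['machine learning'] with items you liked.
```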

“Considering the internal complexity of modern AI, it may seem unreasonable to hope for a human-scale explanation of its decision-making rationale”.

Read the full article here:
https://cacm.acm.org/magazines/2018/11/232193-ai-explain-yourself/fulltext

What if the AI answers are wrong?

Cartoon no. 1838 from the xkcd [1] web comic by Randall MUNROE [2] captures, in a brilliantly sarcastic way, the state of the art in AI/machine learning today and points directly at its current main problem. Of course you will always get results from one of your machine learning models: just feed in your data and you will get results, any results. That is easy. The main question remains open: what if the results are wrong? The central problem is knowing at all that the results are wrong, and to what degree. Do you know your error? Or do you just believe what you get? This can be ignored in some areas and may even be desired in others, but in a safety-critical domain, e.g. the medical area, it is crucial [3]. Here the interactive machine learning approach can also help to compensate for, or lower, the generalization error through human intuition [4].
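A minimal sketch of the “Do you know your error?” point, assuming a standard scikit-learn workflow (a hypothetical example, not related to the comic): the model happily produces predictions either way, but only an evaluation on held-out data gives an estimate of how wrong they are, and even that estimate assumes future data looks like the training data.

```python
# Hypothetical sketch: any model "gives results", but only held-out data
# tells you (approximately) how wrong those results are.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))  # often ~1.0, looks perfect
print("Held-out accuracy:", model.score(X_test, y_test))    # the error estimate you actually need
# Without the held-out estimate you "just believe what you get".
```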

 

[1] https://xkcd.com

[2] https://en.wikipedia.org/wiki/Randall_Munroe

[3] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell (2017). What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923. Available online: https://arxiv.org/abs/1712.09923v1

[4] Andreas Holzinger (2016). Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3(2), 119-131, doi:10.1007/s40708-016-0042-6. Available online:
https://hci-kdd.org/2018/01/29/iml-human-loop-mentioned-among-10-coolest-applications-machine-learning

There is also a discussion of this xkcd cartoon at:

https://www.explainxkcd.com/wiki/index.php/1838:_Machine_Learning

Yoshua Bengio emphasizes: Deep Learning needs Deep Understanding!

Yoshua BENGIO from the Canadian Institute for Advanced Research (CIFAR) emphasized during his ICML 2018 workshop talk in Stockholm, entitled “towards disentangling underlying explanatory factors” (cool title), that the key to success in AI/machine learning is to understand the explanatory/causal factors and mechanisms. This means generalizing beyond independent and identically distributed (i.i.d.) data; current machine learning theory relies heavily on this i.i.d. assumption, but real-world applications (we see this in the medical domain!) often require learning and generalizing on data simply not seen during training. Humans, interestingly, are able to cope in such situations, even in situations they have never encountered before. See Yoshua BENGIO’s awesome talk here:
http://www.iro.umontreal.ca/~bengioy/talks/ICMLW-limitedlabels-13july2018.pptx.pdf

and here a longer talk (1:17:04) at Microsoft Research Redmond on January 22, 2018 – awesome – enjoy the talk, I warmly recommend it to all my students!
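To make the i.i.d. point concrete, here is a small hypothetical sketch (not from the talk): a model fitted on inputs from one range can look accurate on test data from the same distribution and still fail badly on inputs from a region it never saw during training.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Training data: x in [0, 5], target is a simple nonlinear function
X_train = rng.uniform(0, 5, size=(500, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# In-distribution test: same input range as training
X_iid = rng.uniform(0, 5, size=(200, 1))
# Out-of-distribution test: x in [8, 12], never seen during training
X_ood = rng.uniform(8, 12, size=(200, 1))

err_iid = np.mean((model.predict(X_iid) - np.sin(X_iid[:, 0])) ** 2)
err_ood = np.mean((model.predict(X_ood) - np.sin(X_ood[:, 0])) ** 2)
print(f"i.i.d. MSE: {err_iid:.3f}   out-of-distribution MSE: {err_ood:.3f}")
# The tree ensemble extrapolates as a constant outside the training range,
# so the out-of-distribution error is much larger than the i.i.d. error.
```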
