Please take part in our EMPAIA XAI Survey

The Human-Centered AI Lab invites you to take part in a causability measurement study to test the new causabilometer

Call for Papers “explainable AI 2020”, University College Dublin, August 24-28 (closed)

Accepted papers will be published in the Springer/Nature Lecture Notes in Computer Science volume “Cross Domain Conference for Machine Learning and Knowledge Extraction” (CD-MAKE 2020)

Ten Commandments for Human-AI interaction – Which are the most important?

In the following we present “10 Commandments for human-AI interaction” and ask our colleagues from the international AI/ML community to comment on them. We will collect the results and present them openly to the international research community.

Lecture Notes in Artificial Intelligence LNAI 9605 just exceeded 80,000 downloads

Springer Lecture Notes in Artificial Intelligence LNAI 9605 “Machine Learning for Health Informatics” has exceeded 80,000 downloads

Talk on Explainable AI

Andreas Holzinger gave a talk on explainable AI on 26.07.2019 in Edmonton to faculty members of Computing Science at the University of Alberta

AI, explain yourself!

“It’s time for AI to move out of its adolescent, game-playing phase and take seriously the notions of quality and reliability.”

There is an interesting commentary with interviews by Don MONROE in the recent Communications of the ACM, November 2018, Volume 61, Number 11, Pages 11-13.

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment (only). Often, however, the “reasoning” behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI “explainable” to humans—for example, designers who can improve it or let users better know when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.

Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
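
To make this concrete, here is a minimal sketch of such a low-stakes explanation (not taken from the article; the users, items, and ratings are invented): recommend what the most similar user chose and say so.

# Purely illustrative sketch of a "users like you also chose ..." explanation.
# Users, items and ratings are made up; similarity is plain cosine similarity.
import numpy as np

users = ["alice", "bob", "carol"]
items = ["Dune", "Foundation", "Neuromancer", "Hyperion"]
# Rows: users, columns: items (1 = purchased/liked, 0 = not).
ratings = np.array([
    [1, 1, 0, 0],   # alice
    [1, 1, 1, 0],   # bob
    [0, 1, 1, 1],   # carol
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = 0  # explain a recommendation for alice
similarities = [cosine(ratings[target], ratings[u]) for u in range(len(users))]
similarities[target] = -1.0  # ignore self-similarity
neighbour = int(np.argmax(similarities))

# Recommend something the most similar user liked that the target has not seen,
# and phrase the explanation in terms of that similarity.
for i, item in enumerate(items):
    if ratings[neighbour, i] and not ratings[target, i]:
        print(f"Recommended: {item} - because {users[neighbour]}, "
              f"who chose similar books, also chose it.")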

“Considering the internal complexity of modern AI, it may seem unreasonable to hope for a human-scale explanation of its decision-making rationale”.

Read the full article here:
https://cacm.acm.org/magazines/2018/11/232193-ai-explain-yourself/fulltext


What if the AI answers are wrong?

Cartoon no. 1838 from the xkcd [1] web comic by Randall MUNROE [2] describes, in a brilliantly sarcastic way, the state of the art in AI/machine learning today and points directly at the main problem. Of course you will always get results from your machine learning models: just pour in your data and you will get results – any results. That is the easy part. The main question remains open: what if the results are wrong? The central problem is knowing at all that the results are wrong, and to what degree. Do you know your error? Or do you just believe what you get? This can be ignored in some areas and may even be desired in others, but in a safety-critical domain, e.g. the medical area, it is crucial [3]. Here the interactive machine learning approach can also help to compensate for, or lower, the generalization error through human intuition [4].
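
To illustrate the point, here is a minimal sketch (the dataset and model are illustrative, not from the original post) of the difference between just collecting predictions and actually estimating how wrong they are on held-out data:

# A minimal sketch: any fitted model will happily produce predictions,
# but only a held-out evaluation tells you how wrong they are.
# Dataset and model choice here are purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Just fill in your data and you will get results - any results."
predictions = model.predict(X_test)

# The open question: how wrong are they? Estimate the generalization
# error on data the model has never seen instead of trusting the output.
test_error = 1.0 - accuracy_score(y_test, predictions)
print(f"Estimated generalization error: {test_error:.3f}")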


[1] https://xkcd.com

[2] https://en.wikipedia.org/wiki/Randall_Munroe

[3] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell 2017. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923. Available online: https://arxiv.org/abs/1712.09923v1

[4] Andreas Holzinger 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6. Available online:
https://human-centered.ai/2018/01/29/iml-human-loop-mentioned-among-10-coolest-applications-machine-learning

There is also a discussion of the cartoon here:

https://www.explainxkcd.com/wiki/index.php/1838:_Machine_Learning


Microsoft boosts Explainable AI

Microsoft is investing in explainable AI and on June 20, 2018 acquired Bonsai, a California start-up founded by Mark HAMMOND and Keen BROWNE in 2014. Watch an excellent introduction, “Programming your way to explainable AI” by Mark HAMMOND, here:

Read the original story about the acquisition here:

https://blogs.microsoft.com/blog/2018/06/20/microsoft-to-acquire-bonsai-in-move-to-build-brains-for-autonomous-systems

“No one really knows how the most advanced algorithms do what they do. That could be a problem.” Will KNIGHT in “The Dark Secret at the Heart of AI”

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai

NEW: The Travelling Snakesman v1.1 released (18.4.2018)

Enjoy the new version of our Travelling Snakesman game:
https://human-centered.ai/gamification-interactive-machine-learning/

Please follow the instructions given. By playing this game you help us test the following hypothesis:
“A human-in-the-loop enhances the performance of an automatic algorithm”
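
For illustration only, here is a minimal sketch of how such a hypothesis could be checked in principle: compare a plain automatic heuristic for the underlying travelling salesman problem with the same heuristic given (simulated) human input. The cities, the heuristic, and the “human” choices are all invented; this is not the actual game implementation.

# Minimal sketch of testing "a human-in-the-loop enhances the performance of an
# automatic algorithm": compare a plain nearest-neighbour TSP tour with tours
# seeded by (simulated) human choices of the starting city. All data is invented.
import math
import random

random.seed(42)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(start):
    # Greedy automatic heuristic: always move to the closest unvisited city.
    unvisited = set(range(len(cities))) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[current], cities[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

def tour_length(tour):
    # Closed tour: return to the starting city at the end.
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

automatic = tour_length(nearest_neighbour_tour(0))  # default automatic start
# Stand-in for the human-in-the-loop: try starting cities a player might
# suggest and keep the best resulting tour.
human_assisted = min(tour_length(nearest_neighbour_tour(s)) for s in (3, 7, 12))

print(f"automatic tour length:      {automatic:.1f}")
print(f"human-assisted tour length: {human_assisted:.1f}")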