Federated Machine Learning – Privacy by Design EU project won
/in HCI-KDD Events, Projects, Science News/ by Andreas Holzinger
Federated machine learning – privacy by design EU project granted!
Good news from Brussels: our EU RIA project application 826078 FeatureCloud, with a total volume of EUR 4,646,000, has just been granted. The project was submitted to the H2020-SC1-FA-DTS-2018-2020 call “Trusted digital solutions and Cybersecurity in Health and Care”. The project is led by TU Munich, and we are excited to work in a super cool project consortium together with our partners for the next 60 months. The project’s ground-breaking novel cloud-AI infrastructure only exchanges learned representations (the feature parameters θ, hence the name “FeatureCloud”), which are anonymous by default (no hassle with “real medical data”, no ethical issues). Collectively, our highly interdisciplinary consortium, spanning AI and machine learning to medicine, covers all aspects of the value chain: assessment of cyber risks, legal considerations and international policies, and development of state-of-the-art federated machine learning technology coupled to blockchaining and encompassing AI-ethics research. FeatureCloud’s goals are challenging and bold, obviously, but achievable, paving the way for a socially agreeable big-data era for the benefit of future medicine. Congratulations to the great project consortium!
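To make the idea of exchanging only learned parameters (and never raw patient data) a bit more concrete, here is a minimal, purely illustrative Python sketch of federated averaging; the logistic-regression model, the three simulated hospital sites and all names are assumptions for illustration and not the FeatureCloud implementation.

# Minimal sketch of the federated idea described above: each site trains
# locally and only the learned parameters (theta) leave the institution.
# Illustrative toy code, not the FeatureCloud implementation.
import numpy as np

def local_update(theta, X, y, lr=0.1, epochs=20):
    """Plain logistic-regression gradient steps on one site's private data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta = theta - lr * (X.T @ (p - y) / len(y))
    return theta

def federated_round(theta_global, sites):
    """One communication round: sites send back parameters, never raw data."""
    local_thetas = [local_update(theta_global.copy(), X, y) for X, y in sites]
    return np.mean(local_thetas, axis=0)   # simple federated averaging

rng = np.random.default_rng(0)
true_theta = np.array([1.5, -2.0, 0.5])
sites = []
for _ in range(3):                         # three hypothetical hospitals
    X = rng.normal(size=(200, 3))
    y = (1.0 / (1.0 + np.exp(-X @ true_theta)) > rng.uniform(size=200)).astype(float)
    sites.append((X, y))

theta = np.zeros(3)
for _ in range(10):
    theta = federated_round(theta, sites)
print("aggregated theta:", theta)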
Investigating Human Priors for Playing Video Games
/in Recent Publications/ by Andreas Holzinger
The group around Tom GRIFFITHS *) from the Cognitive Science Lab at Berkeley recently asked in their paper (Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths & Alexei A. Efros 2018. Investigating Human Priors for Playing Video Games. arXiv:1802.10217): “What makes humans so good at solving seemingly complex video games?”
(Spoiler short answer in advance: we don’t know – but we can gradually improve our understanding of this topic.)
The authors did cool work investigating the role of human priors in solving video games. On the basis of a specific game, they conducted a series of ablation studies to quantify the importance of various priors for human performance. For this purpose they modified the video game environment to systematically mask different types of visual information that humans could use as prior knowledge. The authors found that removing some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Their results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play.
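To illustrate what such a masking manipulation could look like in code, here is a small hypothetical sketch: the level layout stays identical while semantically meaningful tiles are re-colored, so object-identity priors become useless. The tile codes, palette and render function are invented for this example and are not the authors' experimental setup.

# Toy sketch of the masking idea behind such an ablation study: keep the level
# layout fixed but re-color semantically meaningful tiles (key, enemy) so that
# a player can no longer rely on object-identity priors. All values are made up.
import numpy as np

TILE_COLORS = {          # assumed mapping from tile id to an RGB color
    0: (0, 0, 0),        # empty space
    1: (139, 69, 19),    # platform
    2: (255, 215, 0),    # key (semantically meaningful)
    3: (178, 34, 34),    # enemy (semantically meaningful)
}

def render(level, palette):
    """Turn a 2-D grid of tile ids into an H x W x 3 RGB image."""
    img = np.zeros(level.shape + (3,), dtype=np.uint8)
    for tile_id, color in palette.items():
        img[level == tile_id] = color
    return img

def mask_semantics(palette, masked_ids, rng):
    """Ablation: replace selected tile colors with random, meaningless ones."""
    new_palette = dict(palette)
    for tile_id in masked_ids:
        new_palette[tile_id] = tuple(int(c) for c in rng.integers(0, 256, size=3))
    return new_palette

rng = np.random.default_rng(42)
level = rng.integers(0, 4, size=(10, 16))
original = render(level, TILE_COLORS)
ablated = render(level, mask_semantics(TILE_COLORS, masked_ids=[2, 3], rng=rng))
# 'original' and 'ablated' share the same layout; only object semantics differ.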
Read the original paper here:
https://arxiv.org/abs/1802.10217
Or at least glance over it via the arXiv Sanity Preserver by Andrej KARPATHY:
https://www.arxiv-sanity.com/search?q=+Investigating+Human+Priors+for+Playing+Video+Games
Videos and the game manipulations are available here:
https://rach0012.github.io/humanRL_website
*) Tom Griffiths is Professor of Psychology and Cognitive Science and is interested in developing mathematical models of higher-level cognition and understanding the formal principles that underlie the human ability to solve the computational problems we face in everyday life. His current focus is on inductive problems, such as probabilistic reasoning, learning causal relationships, acquiring and using language, and inferring the structure of categories. He tries to analyze these aspects of human cognition by comparing human behavior to optimal or “rational” solutions of the underlying computational problems. For inductive problems, this usually means exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition.
See the homepage of Tom here:
https://cocosci.berkeley.edu/tom
Judea Pearl on explainable-AI: teach machines cause and effect
/in Science News/ by Andreas Holzinger
To build truly intelligent machines, teach them cause and effect, emphasizes Judea PEARL in a recent Quanta Magazine article (May 15, 2018) posted by Kevin HARTNETT. Judea Pearl won the 2011 Turing Award (“the Nobel Prize of Computer Science”) and has just published his newest book, “The Book of Why: The New Science of Cause and Effect”, wherein he argues that AI has been handicapped by an incomplete understanding of what intelligence really is. Causal reasoning is a cornerstone of explainable AI!
Read the interesting article here:
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515
The book is also announced by the UCLA Newsroom, along with a nice interview; see:
https://newsroom.ucla.edu/stories/artificial-intelligence-pioneers-new-book-examines-the-science-of-cause-and-effect
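As a small, hedged illustration of the difference between seeing and doing that Pearl stresses, the following toy simulation assumes a made-up structural causal model with a confounder Z (Z -> X, Z -> Y, X -> Y) and contrasts the observational quantity E[Y | X = 1] with the interventional quantity E[Y | do(X = 1)]:

# Hedged toy example of Pearl's cause-and-effect point; the structural causal
# model below is a deliberately simple assumption, not taken from the book.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def simulate(do_x=None):
    z = rng.normal(size=n)                                   # confounder
    x = z + rng.normal(scale=0.5, size=n) if do_x is None else np.full(n, do_x)
    y = 2.0 * x + 3.0 * z + rng.normal(scale=0.5, size=n)    # true effect of X is 2
    return x, y

# Observational world: E[Y | X ~ 1] is inflated by the confounder Z.
x_obs, y_obs = simulate()
cond = y_obs[np.abs(x_obs - 1.0) < 0.05].mean()

# Interventional world: E[Y | do(X = 1)] reflects only the causal effect of X.
_, y_do = simulate(do_x=1.0)
interv = y_do.mean()

print(f"E[Y | X=1]     ~ {cond:.2f}  (observational, confounded)")
print(f"E[Y | do(X=1)] ~ {interv:.2f}  (interventional, causal)")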
Microsoft boosts Explainable AI
/in General/ by Andreas Holzinger
Microsoft invests in explainable AI: on June 20, 2018 it acquired Bonsai, a California start-up founded by Mark HAMMOND and Keen BROWNE in 2014. Watch an excellent introduction, “Programming your way to explainable AI”, by Mark HAMMOND here:
and read the original story about the acquisition here:
https://blogs.microsoft.com/blog/2018/06/20/microsoft-to-acquire-bonsai-in-move-to-build-brains-for-autonomous-systems
“No one really knows how the most advanced algorithms do what they do. That could be a problem.” Will KNIGHT in “The Dark Secret at the Heart of AI”
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai
The Problem with explainable-AI
/in Science News/ by Andreas Holzinger
A very nice and interesting article by Rudina SESERI in the recent TechCrunch blog (read the original blog entry below): at first Rudina points out that the main problem lies in the data; and yes, indeed, data should always be the first consideration. We consider it a big problem that successful ML approaches (e.g. the mentioned deep learning; our PhD students can tell you a thing or two about it 😉) greatly benefit from big data (the bigger, the better) with many training sets. However, in certain domains, e.g. the health domain, we are sometimes confronted with a small number of data sets or rare events, where we suffer from insufficient training samples [1]. This calls for more research on how we can learn from little data (zero-shot learning), similar to how we humans do it: Rudina does not need to show her children 10 million samples of a dog and a cat before her children can safely discriminate a dog from a cat. However, what I miss in this article is something different: the word trust. Can we trust our machine learning results? [2] Whilst we surely do not need to explain everything all the time, we need possibilities to make machine decisions transparent on demand and to check whether something is plausible. Consequently, explainable AI can be very important to foster trust in machine learning specifically and artificial intelligence generally.
[1] https://link.springer.com/article/10.1007/s40708-016-0042-6
[2] https://ercim-news.ercim.eu/en112/r-i/can-we-trust-machine-learning-results-artificial-intelligence-in-safety-critical-decision-support
MIT emphasizes the importance of HCI for explainable AI
/in Science News/ by Andreas Holzinger
In the joint project “The car can explain” with the Toyota Research Institute, the MIT Computer Science & Artificial Intelligence Lab is working on explainable AI and emphasizes the increasing importance of the field of HCI (Human-Computer Interaction) in this regard. Particularly, the group led by Lalana KAGAL is working on monitors for reasoning and explaining: a methodological tool to interpret and detect inconsistent machine behavior by imposing constraints of reasonableness. “Reasonableness monitors” are implemented as two types of interfaces around their complex AI/ML frameworks: local monitors check the behavior of a specific subsystem, and non-local reasonableness monitors watch the behavior of multiple subsystems working together, i.e. neighborhoods of interconnected subsystems that share a common task. This enormously interesting monitoring consistently checks that the neighborhoods of subsystems are cooperating as expected. Insights from this project could also be valuable for the health informatics domain:
https://toyota.csail.mit.edu/node/21
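As a rough sketch of how such local and non-local reasonableness monitors could be structured in code (the subsystems, constraints and thresholds below are purely hypothetical and not the MIT/Toyota implementation):

# Illustrative sketch only: a local monitor checks one subsystem's output,
# a neighborhood monitor cross-checks several subsystems that share a task.
from typing import Callable, Dict, List

class LocalMonitor:
    """Checks the output of one specific subsystem against its own constraints."""
    def __init__(self, name: str, constraints: List[Callable[[float], bool]]):
        self.name, self.constraints = name, constraints

    def check(self, output: float) -> List[str]:
        return [f"{self.name}: constraint {i} violated (output={output})"
                for i, ok in enumerate(c(output) for c in self.constraints) if not ok]

class NeighborhoodMonitor:
    """Checks that interconnected subsystems sharing a task cooperate as expected."""
    def __init__(self, cross_checks: List[Callable[[Dict[str, float]], bool]]):
        self.cross_checks = cross_checks

    def check(self, outputs: Dict[str, float]) -> List[str]:
        return [f"neighborhood: cross-check {i} violated ({outputs})"
                for i, c in enumerate(self.cross_checks) if not c(outputs)]

# Hypothetical driving subsystems: a speed planner and a distance estimator.
speed_monitor = LocalMonitor("speed_planner", [lambda v: 0.0 <= v <= 130.0])
neighborhood = NeighborhoodMonitor([
    # If the estimated gap to the car ahead is small, the planned speed must be low.
    lambda o: not (o["gap_m"] < 10.0 and o["speed_kmh"] > 30.0),
])

outputs = {"speed_kmh": 80.0, "gap_m": 5.0}
issues = speed_monitor.check(outputs["speed_kmh"]) + neighborhood.check(outputs)
print(issues or "all subsystems behave reasonably")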
Google Brain says Explainability is the “new deep learning”
/in Science News/ by Andreas Holzinger
There is a very interesting interview in the Talking Machines *) series from May 31, 2018. Katherine GORMAN interviews Maithra RAGHU **) from the Google Brain team, who mentions that “explainability is the new deep learning” and that it is particularly important for health informatics, where it is essential to re-trace, re-enact, understand and explain why a machine decision has been reached. This is super for us, because when I tell my students that this is important, nobody believes me; but now I can emphasize that it is not me saying it, Google Brain is saying it. Excellent.
However, the whole field needs a lot of work before we can provide usable solutions for the end user in daily routine (e.g. a medical doctor); urgently needed are approaches to explainable user interfaces and, most of all, a research framework for testing explainability.
*) Talking Machines is an excellent, highly recommendable podcast series, founded by Katherine GORMAN and Ryan ADAMS in 2015 and now run by Katherine together with Neil LAWRENCE (who leads Amazon Research in Cambridge, UK).
**) Maithra RAGHU is currently a PhD student working with Jon KLEINBERG at Cornell (see https://maithraraghu.com ), where she is doing extended research with the Google Brain team; see https://ai.google/research/teams/brain
Maithra has published some very interesting papers, e.g.: Maithra Raghu, Justin Gilmer, Jason Yosinski & Jascha Sohl-Dickstein. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. Advances in Neural Information Processing Systems, 2017. 6078-6087.
or this is also very interesting:
Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein & Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. Advances in Neural Information Processing Systems, 2016. 3360-3368.
Google Brain says we urgently need a Research Framework around the field of interpretability
/in Science News/ by Andreas Holzinger
In a recent interview, Been KIM from the Google Brain team emphasizes the significance of research in explainable AI. Particularly, she emphasizes the importance of Human-Computer Interaction (HCI) for Artificial Intelligence generally and Machine Learning specifically (see the differences between AI and ML here), and the urgent need for a research framework around the field of interpretability. Listen to episode six of season four of Talking Machines with Katherine GORMAN and Neil LAWRENCE here (start at approx. 26:00): https://www.thetalkingmachines.com/episodes/explainability-and-inexplicable
Been KIM is a research scientist on the Google Brain team and is interested in designing machine learning methods that make sense to humans. Her current focus is building interpretability methods for already-trained models (e.g., high-performance neural networks). In particular, she believes that the language of explanations should include higher-level, human-friendly concepts. Been gave a tutorial on explainable AI at ICML 2017, and recently the group published the paper: Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman & Finale Doshi-Velez 2018. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. arXiv:1802.00682.
https://people.csail.mit.edu/beenkim
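As a toy illustration in the spirit of concept-based, post-hoc interpretability for already-trained models, the following sketch probes whether a human-friendly concept is linearly decodable from a layer's activations; the synthetic activations, the concept labels and the simple logistic probe are assumptions made for this example, not any published method's code.

# Toy concept probe: given frozen activations and human concept labels, fit a
# linear probe; high accuracy suggests the concept is encoded in that layer.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 32

# Pretend these are hidden-layer activations of a frozen, trained network.
concept_direction = rng.normal(size=d)
has_concept = rng.integers(0, 2, size=n)                  # human-labelled concept
activations = rng.normal(size=(n, d)) + np.outer(has_concept, concept_direction)

def fit_linear_probe(X, y, lr=0.1, epochs=300):
    """Logistic-regression probe; its weight vector acts as a concept vector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y))
    return w

w = fit_linear_probe(activations, has_concept)
accuracy = ((activations @ w > 0).astype(int) == has_concept).mean()
print(f"concept probe accuracy: {accuracy:.2f}")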
“Machine Learning for Health Informatics”, Lecture Notes in Artificial Intelligence 9605: 40,626 downloads in 2017
/in HCI-KDD Events, Science News/ by Andreas Holzinger
Since its online publication on December 10, 2016, the volume edited by Andreas Holzinger, “Machine Learning for Health Informatics”, Springer Lecture Notes in Artificial Intelligence (LNAI) Volume 9605, has been downloaded 54,960 times as of today (May 11, 2018, 20:00 CEST), and 44,988 times as of April 2018 according to the official Springer Bookmetrix book performance report (a record). In the year 2017 alone it saw 40,626 downloads, which is 10 times more than a typical volume of the Lecture Notes in Artificial Intelligence series by Springer/Nature. A cordial thank you to my international colleagues for this huge acceptance!
https://www.springer.com/978-3-319-50478-0
https://www.springer.com/gp/book/9783319504773
A popular passage from the book:
https://books.google.com/talktobooks/query?q=What%E2%80%99s%20the%20difference%20between%20Machine%20Learning%20and%20deep%20learning%3F
Update on 15th September 2018: 63k downloads

NEW: The Travelling Snakesman v1.1 released on 18 April 2018
/in General/ by Andreas Holzinger
Enjoy the new version of our Travelling Snakesman game:
https://human-centered.ai/gamification-interactive-machine-learning/
Please follow the instructions given. By playing this game you help to test the following hypothesis:
“A human-in-the-loop enhances the performance of an automatic algorithm”
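A minimal sketch of what this hypothesis means operationally: an automatic heuristic constructs a tour, and a human suggestion is evaluated and kept only if it shortens the route. The nearest-neighbour baseline, the 2-opt style swap and the indices of the simulated human input are illustrative assumptions, not the algorithm behind the game.

# Illustrative human-in-the-loop sketch, not the Travelling Snakesman code.
import numpy as np

rng = np.random.default_rng(7)
cities = rng.uniform(size=(12, 2))

def tour_length(tour):
    pts = cities[np.r_[tour, tour[0]]]
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def nearest_neighbour_tour():
    """Purely automatic baseline heuristic."""
    unvisited, tour = set(range(1, len(cities))), [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: np.linalg.norm(cities[c] - cities[last]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def apply_human_suggestion(tour, i, j):
    """A human points at two positions; accept the 2-opt reversal only if it helps."""
    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
    return candidate if tour_length(candidate) < tour_length(tour) else tour

auto = nearest_neighbour_tour()
assisted = apply_human_suggestion(auto, 3, 9)   # hypothetical human input
print(f"automatic tour length:      {tour_length(auto):.3f}")
print(f"human-assisted tour length: {tour_length(assisted):.3f}")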