MIT emphasizes the importance of HCI for explainable AI

In the joint project “The car can explain” with the TOYOTA Research Institute, the MIT Computer Science & Artificial Intelligence Lab (CSAIL) is working on explainable AI and emphasizes the increasing importance of the field of HCI (Human-Computer Interaction) in this regard. In particular, the group led by Lalana KAGAL is working on monitors for reasoning and explaining: a methodological tool to interpret and detect inconsistent machine behavior by imposing reasonableness constraints. “Reasonableness monitors” are implemented as two types of interfaces around their complex AI/ML frameworks: local monitors check the behavior of a specific subsystem, while non-local reasonableness monitors watch the behavior of multiple subsystems working together, i.e. neighborhoods of interconnected subsystems that share a common task. This enormously interesting monitoring continuously checks that the neighborhood of subsystems is cooperating as expected. Insights from this project could also be valuable for the health informatics domain:

https://toyota.csail.mit.edu/node/21
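To make the idea concrete, here is a minimal sketch of how such local and neighborhood monitors could be wired up. This is my own illustration of the concept, not code from the project; all class names and the example constraint are hypothetical:

```python
from typing import Callable, Dict, List

class LocalMonitor:
    """Checks the output of one specific subsystem against its constraints."""
    def __init__(self, name: str, constraints: List[Callable[[dict], bool]]):
        self.name, self.constraints = name, constraints

    def check(self, output: dict) -> List[str]:
        return [f"{self.name}: violated {c.__name__}"
                for c in self.constraints if not c(output)]

class NeighborhoodMonitor:
    """Checks that subsystems sharing a common task behave consistently."""
    def __init__(self, constraints: List[Callable[[Dict[str, dict]], bool]]):
        self.constraints = constraints

    def check(self, outputs: Dict[str, dict]) -> List[str]:
        return [f"neighborhood: violated {c.__name__}"
                for c in self.constraints if not c(outputs)]

# Hypothetical cross-subsystem constraint: hard braking is only "reasonable"
# if the perception subsystem actually reported an obstacle.
def brake_implies_obstacle(outs: Dict[str, dict]) -> bool:
    return outs["perception"]["obstacle"] or not outs["planning"]["hard_brake"]

monitor = NeighborhoodMonitor([brake_implies_obstacle])
print(monitor.check({"perception": {"obstacle": False},
                     "planning":   {"hard_brake": True}}))
# -> ['neighborhood: violated brake_implies_obstacle']
```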

Google Brain says Explainability is the “new deep learning”

There is a very interesting interview in the Talking Machines*) series from May 31, 2018. Katherine GORMAN interviews Maithra RAGHU **) from the Google Brain team, who mentions that “explainability is the new deep learning”. This is particularly important for health informatics, where it is essential to re-trace, re-enact, understand and explain why a machine decision has been reached. This is super for us: when I tell my students that this is important, nobody believes me; but now I can emphasize that it is not I who is saying it, but Google Brain. Excellent.

However, the whole field needs a lot of work before we can provide usable solutions for the end user in daily routine (e.g. a medical doctor); urgently needed are approaches to explainable user interfaces and, most of all, a research framework for testing explainability.

*) Talking Machines is an excellent, highly recommendable podcast series, founded by Katherine GORMAN and Ryan ADAMS in 2015 and now run by Katherine together with Neil LAWRENCE (who leads Amazon Research in Cambridge, UK).

**) Maithra RAGHU is currently a PhD student working with Jon KLEINBERG at Cornell (see http://maithraraghu.com ), where she is doing extended research with the Google Brain team, see: https://ai.google/research/teams/brain
Maithra has published some very interesting papers, e.g.: Maithra Raghu, Justin Gilmer, Jason Yosinski & Jascha Sohl-Dickstein. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. Advances in Neural Information Processing Systems, 2017, pp. 6078-6087.
Also very interesting:
Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein & Surya Ganguli. Exponential Expressivity in Deep Neural Networks through Transient Chaos. Advances in Neural Information Processing Systems, 2016, pp. 3360-3368.

Google Brain says we urgently need a Research Framework around the field of interpretability

In a recent interview, Been KIM from the Google Brain team emphasizes the significance of research in explainable AI. In particular, she stressed the importance of Human-Computer Interaction (HCI) for Artificial Intelligence in general and Machine Learning in particular (see the differences between AI and ML here), and the urgent need for a research framework around the field of interpretability. Listen to episode six of season four of Talking Machines by Katherine GORMAN and Neil LAWRENCE here (start at approx. 26:00): https://www.thetalkingmachines.com/episodes/explainability-and-inexplicable

Been KIM is a research scientist on the Google Brain team and is interested in designing machine learning methods that make sense to humans. Her current focus is building interpretability methods for already-trained models (e.g., high-performance neural networks). In particular, she believes that the language of explanations should include higher-level, human-friendly concepts. Been gave a tutorial on explainable AI at ICML 2017, and recently the group published the paper: Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman & Finale Doshi-Velez 2018. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. arXiv:1802.00682.
http://people.csail.mit.edu/beenkim

“Machine Learning for Health Informatics” Lecture Notes in Artificial Intelligence 9605 > 40,626 downloads 2017

Since its online publication on December 10, 2016, the volume “Machine Learning for Health Informatics”, edited by Andreas Holzinger (Springer Lecture Notes in Artificial Intelligence, LNAI Volume 9605), has been downloaded 54,960 times as of today (May 11, 2018, 20:00 CEST), and 44,988 times as of April 2018 according to the official Springer Bookmetrix book performance report – a record. In the year 2017 alone it was downloaded 40,626 times, which is ten times more than a typical volume of the Springer/Nature Lecture Notes in Artificial Intelligence series. A cordial thank you to my international colleagues for this huge acceptance!

http://www.springer.com/978-3-319-50478-0

https://www.springer.com/gp/book/9783319504773

A popular passage from the book:
https://books.google.com/talktobooks/query?q=What%E2%80%99s%20the%20difference%20between%20Machine%20Learning%20and%20deep%20learning%3F

Update on 15th September 2018: 63k downloads

 

NEW: The Travelling Snakesman v1.1 – released 18.4.2018

Enjoy the new version of our travelling snakesman game:
https://hci-kdd.org/gamification-interactive-machine-learning/

Please follow the instructions given. By playing this game you help to test the following hypothesis:
“A human-in-the-loop enhances the performance of an automatic algorithm”
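For illustration, the hypothesis can be phrased as a comparison between a purely automatic optimizer and the same optimizer nudged by human feedback. The following is a generic, hedged sketch of that idea – not the game’s actual implementation; the scoring bonus, hint format and function names are made up:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix `dist`."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def score(tour, dist, hint=None):
    """Tour length, with a hypothetical bonus when the human's hint
    (a pair of cities suggested to be visited consecutively) is respected."""
    s = tour_length(tour, dist)
    if hint is not None:
        a, b = hint
        if abs(tour.index(a) - tour.index(b)) in (1, len(tour) - 1):
            s *= 0.95  # made-up weight for following the human's advice
    return s

def optimize(dist, hint=None, iters=5000, seed=0):
    """Simple 2-swap local search; the human-in-the-loop enters via `hint`."""
    rng = random.Random(seed)
    n = len(dist)
    best = list(range(n)); rng.shuffle(best)
    for _ in range(iters):
        cand = best[:]
        i, j = rng.sample(range(n), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if score(cand, dist, hint) < score(best, dist, hint):
            best = cand
    return best, tour_length(best, dist)

# Testing the hypothesis: compare tour lengths with and without a human hint.
# dist = ...  # distance matrix of the instance being played
# print(optimize(dist)[1], optimize(dist, hint=(2, 7))[1])
```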

AI will change Radiology – NOT replace Radiologists

After the rather shocking statement by Geoffrey HINTON during the Machine Learning and Market for Intelligence Conference in Toronto, where he recommended that hospitals should stop training radiologists because deep learning will replace them (watch the video below), Thomas H. DAVENPORT and Keith J. DREYER published on March 27, 2018 a really nice article, “AI will change radiology, but it won’t replace radiologists” (see [1]), which supports our human-in-the-loop approach. For sure, AI/machine learning (difference here) will change workflows, but we envision that the expert will be augmented by new technologies: routine (boring) tasks will be taken over by automatic algorithms, which will free up expert time to spend on challenging (cool) tasks and more research – and there are plenty of problems where we need human intelligence!

[1] https://hbr.org/2018/03/ai-will-change-radiology-but-it-wont-replace-radiologists


Human-in-the-loop AI

This is really very interesting: in the April 5, 2018 episode of the TWiML & AI (This Week in Machine Learning and Artificial Intelligence) podcast, Robert MUNRO (a graduate of Stanford University and a recognized expert in combining human and machine intelligence) reports on the newly branded company Figure Eight [1], formerly known as CrowdFlower. Their Human-in-the-Loop AI platform supports data science & machine learning teams working on various topics, including autonomous vehicles, consumer product identification, natural language processing, search relevance, intelligent chatbots, and more – most recently on disaster response and epidemiology. This is further proof of the enormous importance and potential usefulness of the human-in-the-loop interactive machine learning (iML) approach! Listen to this awesome discussion, led excellently by Sam CHARRINGTON:

https://twimlai.com/twiml-talk-125-human-loop-ai-emergency-response-robert-munro/

This discussion fits well with the previous discussion with Jeff DEAN (head of the Google Brain team), who emphasized the importance of health and the limits of automatic approaches, including deep learning. Listen directly at:

https://twimlai.com/twiml-talk-124-systems-software-machine-learning-scale-jeff-dean/

[1] https://www.figure-eight.com/resources/human-in-the-loop

 

A good proof of the importance of the HCI-KDD approach, worth 2.1 Billion USD!

Our strategic aim is to find solutions for data-intensive problems by combining two areas that bring ideal preconditions for understanding intelligence and creating business value with AI: Human-Computer Interaction (HCI) and Knowledge Discovery (KDD). HCI deals with questions of human intelligence, whereas KDD deals with questions of artificial intelligence, in particular with the development of scalable algorithms for finding previously unknown relationships in data, and thus centers on automatic computational methods. A proverb, perhaps incorrectly attributed to Albert Einstein, illustrates this perfectly: “Computers are incredibly fast, accurate, but stupid. Humans are incredibly slow, inaccurate, but brilliant. Together they may be powerful beyond imagination” [1].

An article published on February 18, 2018 by David Shaywitz [2] in Forbes reports on the recent purchase of the oncology data company Flatiron Health for the enormous sum of 2.1 billion USD (remember: DeepMind was purchased by Google for a mere 400 million GBP 😉).

This supports a few hypotheses of which I try to convince my students all the time (but they won’t believe me unless Google is doing it 😉):

a) those who can turn raw health data into insights and understandable knowledge can produce value
b) data – and particularly big data – is useless for the decision maker; what they need is reliable, valuable and trustworthy information
c) for the complexity of sensemaking from health data we (still) need a human-in-the-loop: humans (still) exceed machine performance in understanding the context and explaining the underlying explanatory factors of the data
d) consequently, this is a good example of the business value of our HCI-KDD approach: let the computer find in arbitrarily high-dimensional spaces what no human is able to find – but let the human do what no computer is able to do: BOTH working together are powerful beyond imagination!

Flatiron Health [3] is a company specialized in health data curation – supported by technology, of course, but mostly done manually by human experts in the Mechanical Turk style. Remark: the name Mechanical Turk has historic origins, as it was inspired by a supposedly automatic 18th-century chess-playing machine by Wolfgang von Kempelen, which beat, among others, Benjamin Franklin at chess – and was acclaimed as “AI”. However, it was later revealed that it was neither a machine intelligence nor an automatic device: in fact, a human chess master was hidden in a secret space under the chessboard, controlling the movements of a humanoid dummy. Similarly, services which help to solve problems via human intelligence are called “Mechanical Turk online services”.

[1] Holzinger, A. 2013. Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together? In: Cuzzocrea, Alfredo, Kittl, Christian, Simos, Dimitris E., Weippl, Edgar & Xu, Lida (eds.) Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127. Heidelberg, Berlin, New York: Springer, pp. 319-328, doi:10.1007/978-3-642-40511-2_22

[2] https://www.forbes.com/sites/davidshaywitz/2018/02/18/the-deeply-human-core-of-roches-2-1b-tech-acquisition-and-why-they-did-it/#6242fdbc29c2

[3] https://flatiron.com

On-Device Machine Intelligence

One very interesting approach to federated machine learning is presented by Sujith RAVI from Google: machine learning models (e.g. CNNs) are successfully used for the design of intelligent systems capable of visual recognition and of speech and language understanding. Most of these run in a cloud – where it is often not predictable where they are physically running. A huge problem so far is that typical machine learning models are awkward to use on mobile devices due to both computational and memory constraints. While these devices could make use of models running in high-performance data centers with CPUs or GPUs, this is not feasible for many applications and scenarios where inference needs to be performed directly “on” the device. This requires re-thinking existing machine learning algorithms and coming up with new models that are directly optimized for on-device machine intelligence rather than doing post-hoc model compression. Sujith RAVI introduces a novel “projection-based” machine learning system for training compact neural networks. The approach uses a joint optimization framework to simultaneously train a “full” deep network and a lightweight “projection” network. Unlike the full deep network, the projection network uses random projection operations that are efficient to compute and operates in bit space, yielding a low memory footprint. The system is trained end-to-end using backpropagation. RAVI shows that the approach is flexible and easily extensible to other machine learning paradigms – for example, graph-based projection models can be learned using label propagation. The trained “projection” models are then directly used for inference; please watch the original video on:

 
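To give a flavor of the joint training idea, here is a minimal, hedged sketch in PyTorch: a full network and a tiny projection network (fed with a fixed random bit-space projection of the input) are trained together, with a distillation term pulling the small model toward the big one. Sizes, names and the loss weighting are my own illustrative assumptions, not RAVI’s actual system:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_BITS, N_CLASSES = 784, 256, 10  # hypothetical dimensions

# Fixed random hyperplanes: sign(x @ R) gives a cheap binary signature.
R = torch.randn(D_IN, D_BITS)

def project_to_bits(x):
    # Bit-space representation in {0,1}^D_BITS, low memory footprint.
    return (x @ R > 0).float()

full_net = nn.Sequential(nn.Linear(D_IN, 512), nn.ReLU(),
                         nn.Linear(512, N_CLASSES))
proj_net = nn.Linear(D_BITS, N_CLASSES)  # the lightweight on-device model

opt = torch.optim.Adam(list(full_net.parameters()) +
                       list(proj_net.parameters()))

def train_step(x, y):
    """One end-to-end step of the joint 'full + projection' objective."""
    opt.zero_grad()
    logits_full = full_net(x)
    logits_proj = proj_net(project_to_bits(x))
    # Supervised losses for both networks plus a distillation term that
    # pulls the projection net toward the full net's predictions.
    distill = F.kl_div(F.log_softmax(logits_proj, dim=-1),
                       F.softmax(logits_full.detach(), dim=-1),
                       reduction="batchmean")
    loss = (F.cross_entropy(logits_full, y)
            + F.cross_entropy(logits_proj, y)
            + distill)
    loss.backward()
    opt.step()
    return loss.item()

# After training, only proj_net (plus R) is shipped for on-device inference.
```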

Prefetching – Predicting what will be most likely needed next

A very interesting paper about prefetching has just been published – a nice machine learning solution: predicting which information will most likely be needed next, so that it can be prepared in advance:

Milad Hashemi, Kevin Swersky, Jamie A Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis & Parthasarathy Ranganathan 2018. Learning Memory Access Patterns. arXiv preprint arXiv:1803.02329.

Prefetching is the process of predicting future memory accesses that will miss in the on-chip cache and access memory, based on past history. Each of these memory addresses is generated by a memory instruction (a load/store). Memory instructions are the subset of all instructions that interact with the addressable memory of the computer system.
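The paper frames this as sequence learning over address deltas; a hedged sketch of that framing (layer sizes and the toy delta vocabulary are illustrative assumptions, not the authors’ exact model) could look like:

```python
import torch
import torch.nn as nn

class DeltaPrefetcher(nn.Module):
    """Predicts the next address delta from a history of deltas (LSTM)."""
    def __init__(self, n_deltas: int, emb: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_deltas, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_deltas)

    def forward(self, delta_ids):             # (batch, seq_len) token ids
        h, _ = self.lstm(self.embed(delta_ids))
        return self.head(h)                   # logits over the next delta

# Toy preprocessing: miss addresses -> deltas -> ids over observed deltas.
addrs = [0x1000, 0x1040, 0x1080, 0x10c0, 0x1100]
deltas = [b - a for a, b in zip(addrs, addrs[1:])]  # constant stride 0x40
vocab = {d: i for i, d in enumerate(sorted(set(deltas)))}
ids = torch.tensor([[vocab[d] for d in deltas]])
model = DeltaPrefetcher(n_deltas=len(vocab))
logits = model(ids)  # train with cross-entropy on the next delta per step
```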

 

There is a nice article in the MIT Technology Review by Will Knight from March 8, 2018 on the similarities to how humans improve their behaviour with age – a very nice read:

https://www.technologyreview.com/s/610453/your-next-computer-could-improve-with-age/?set=