AI will change Radiology – NOT replace Radiologists

After the rather shocking statement by Geoffrey HINTON at the Machine Learning and Market for Intelligence Conference in Toronto, where he recommended that hospitals stop training radiologists because deep learning will replace them (watch the video below), Thomas H. DAVENPORT and Keith J. DREYER published on March 27, 2018 a really nice article, “AI will change radiology, but it won’t replace radiologists” (see [1]) – which supports our human-in-the-loop approach: for sure, AI/machine learning (difference here) will change workflows, but we envision that the expert will be augmented by new technologies, i.e. routine (boring) tasks will be taken over by automatic algorithms, which will free up expert time to spend on challenging (cool) tasks and more research – and there are plenty of problems where we need human intelligence!

[1] https://hbr.org/2018/03/ai-will-change-radiology-but-it-wont-replace-radiologists


A good proof of the importance of the HCI-KDD approach – worth 2.1 billion USD!

Our strategic aim is to find solutions for data-intensive problems by combining two areas that offer ideal preconditions for understanding intelligence and for bringing business value to AI: Human-Computer Interaction (HCI) and Knowledge Discovery (KDD). HCI deals with questions of human intelligence, whereas KDD deals with questions of artificial intelligence, in particular with the development of scalable algorithms for finding previously unknown relationships in data, and thus centers on automatic computational methods. A proverb, perhaps incorrectly attributed to Albert Einstein, illustrates this perfectly: “Computers are incredibly fast, accurate, but stupid. Humans are incredibly slow, inaccurate, but brilliant. Together they may be powerful beyond imagination” [1].

An article published on February 18, 2018 by David Shaywitz [2] in Forbes reports on the recent purchase of the oncology data company Flatiron Health for the enormous sum of 2.1 billion USD (remember: DeepMind was purchased by Google for a mere 400 million GBP 😉).

This supports a few hypotheses of which I try to convince my students all the time (but they won’t believe me unless Google is doing it 😉):

a) those who can turn raw health data into insights and understandable knowledge can produce value
b) data – and particularly big data – is useless for the decision maker; what they need is reliable, valuable and trustworthy information
c) for the complexity of sensemaking from health data we (still) need a human-in-the-loop: humans (still) exceed machine performance in understanding the context and explaining the underlying explanatory factors of the data
d) consequently, this is a good example of the business value of our HCI-KDD approach: let the computer find in arbitrarily high-dimensional spaces what no human is able to find – but let the human do what no computer is able to do: BOTH working together are powerful beyond imagination!

Flatiron Health [3] is a company specialized in health data curation, supported by technology of course, but mostly done manually by human experts in the Mechanical Turk style. Remark: The name Mechanical Turk has historic origins, as it was inspired by an 18th-century chess-playing “machine” by Wolfgang von Kempelen, which beat e.g. Benjamin Franklin at chess – and was acclaimed as “AI”. However, it was later revealed that it was neither a machine nor an automatic device – in fact, a human chess master was hidden in a secret space under the chessboard, controlling the movements of a humanoid dummy. Similarly, services which help to solve problems via human intelligence are called “Mechanical Turk online services”.

[1] Holzinger, A. 2013. Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together? In: Cuzzocrea, Alfredo, Kittl, Christian, Simos, Dimitris E., Weippl, Edgar & Xu, Lida (eds.) Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127. Heidelberg, Berlin, New York: Springer, pp. 319-328, doi:10.1007/978-3-642-40511-2_22

[2] https://www.forbes.com/sites/davidshaywitz/2018/02/18/the-deeply-human-core-of-roches-2-1b-tech-acquisition-and-why-they-did-it/#6242fdbc29c2

[3] https://flatiron.com

On-Device Machine Intelligence

One very interesting approach to federated machine learning is presented by Sujith Ravi from Google: machine learning models (e.g. CNNs) are successfully used for the design of intelligent systems capable of visual recognition, speech and language understanding. Most of these run in the cloud – where it is often unpredictable where the computation is physically executed. A huge problem so far is that typical machine learning models are awkward to use on mobile devices due to both computational and memory constraints. While these devices could make use of models running in high-performance data centers with CPUs or GPUs, this is not feasible for many applications and scenarios where inference needs to be performed directly “on” the device. This requires re-thinking existing machine learning algorithms and coming up with new models that are directly optimized for on-device machine intelligence rather than doing post-hoc model compression. Sujith Ravi introduces a novel “projection-based” machine learning system for training compact neural networks. The approach uses a joint optimization framework to simultaneously train a “full” deep network and a lightweight “projection” network. Unlike the full deep network, the projection network uses random projection operations that are efficient to compute and operates in bit space, yielding a low memory footprint. The system is trained end-to-end using backpropagation. Ravi shows that the approach is flexible and easily extensible to other machine learning paradigms; for example, graph-based projection models can be learned using label propagation. The trained “projection” models are then directly used for inference. Please watch the original video on:
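To make this more concrete, here is a minimal sketch of such a joint training setup in PyTorch. This is not Ravi’s implementation – the class names, layer sizes and the plain feed-forward setting are illustrative assumptions – but it shows the core idea: a fixed random projection into bit space, a lightweight trainable head on top of it, and a distillation term that lets the full network teach the projection network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionNet(nn.Module):
    def __init__(self, in_dim, n_bits, n_classes, hidden=256):
        super().__init__()
        # "full" trainer network -- used only during training
        self.full = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))
        # fixed random projection: cheap to compute, low memory footprint
        self.register_buffer("R", torch.randn(in_dim, n_bits))
        # lightweight trainable head operating on the bit vector;
        # only this small model is needed for on-device inference
        self.proj_head = nn.Linear(n_bits, n_classes)

    def project(self, x):
        # sign of random projections -> binary representation
        return (x @ self.R > 0).float()

    def forward(self, x):
        return self.full(x), self.proj_head(self.project(x))

def joint_loss(full_logits, proj_logits, y, lam=1.0):
    # both networks fit the labels; the projection network is additionally
    # pulled toward the (detached) predictions of the full network
    distill = F.kl_div(F.log_softmax(proj_logits, dim=1),
                       F.softmax(full_logits.detach(), dim=1),
                       reduction="batchmean")
    return (F.cross_entropy(full_logits, y)
            + F.cross_entropy(proj_logits, y) + lam * distill)

After training, the full network is discarded and only the compact projection model is shipped to the device.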

 

Prefetching – Predicting what will be most likely needed next

A very interesting paper about prefetching has just been published – a nice machine learning solution: predicting which information will most likely be useful next, so that it can be prepared in advance:

Milad Hashemi, Kevin Swersky, Jamie A Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis & Parthasarathy Ranganathan 2018. Learning Memory Access Patterns. arXiv preprint arXiv:1803.02329.

Prefetching is the process of predicting future memory accesses that will miss in the on-chip cache and access memory, based on past history. Each of these memory addresses is generated by a memory instruction (a load/store). Memory instructions are a subset of all instructions that interact with the addressable memory of the computer system.
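The core idea of the paper is to treat this as a sequence-learning problem over memory-address deltas. Here is a minimal, hypothetical sketch in PyTorch (vocabulary construction, sizes and names are illustrative assumptions, not the authors’ implementation):

import torch
import torch.nn as nn

class DeltaPrefetcher(nn.Module):
    # Treat the sequence of quantized address *deltas* as a vocabulary
    # and train an LSTM to predict the next delta, i.e. the next likely
    # cache miss, so it can be fetched in advance.
    def __init__(self, n_deltas, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_deltas, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_deltas)

    def forward(self, delta_ids):
        # delta_ids: (batch, seq_len) indices of observed address deltas
        h, _ = self.lstm(self.embed(delta_ids))
        return self.out(h)  # logits over the next delta at each step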

 

There is a nice article in the MIT Technology Review by Will Knight, published on March 8, 2018, on the similarities to how humans improve their behaviour with age – a very nice read:

https://www.technologyreview.com/s/610453/your-next-computer-could-improve-with-age/?set=

iML with the human-in-the-loop mentioned among 10 coolest applications of machine learning

Within the “Two Minute Papers” series, Károly Zsolnai-Fehér from the Institute of Computer Graphics and Algorithms at the Vienna University of Technology mentions our human-in-the-loop paper among “10 even cooler Deep Learning Applications”:

Seid Muhie Yimam, Chris Biemann, Ljiljana Majnaric, Šefket Šabanović & Andreas Holzinger 2016. An adaptive annotation approach for biomedical entity and relation recognition. Springer/Nature: Brain Informatics, 3, (3), 157-168, doi:10.1007/s40708-016-0036-4

Watch the video here (iML is mentioned from approx. 1:20):

Here is the list of all 10 papers discussed within this two-minute video:

1. Geolocation – https://arxiv.org/abs/1602.05314
2. Super-resolution – https://arxiv.org/pdf/1511.04491v1.pdf
3. Neural Network visualizer – https://experiments.mostafa.io/public/…
4. Recurrent neural network for sentence completion:
5. Human-in-the-loop and Doctor-in-the-loop: https://link.springer.com/article/10.1007/s40708-016-0036-4
6. Emoji suggestions for images – https://emojini.curalate.com/
7. MNIST handwritten numbers in HD – https://blog.otoro.net/2016/04/01/generating-large-images-from-latent-vectors
8. Deep Learning solution to the Netflix prize – https://karthkk.wordpress.com/2016/03/22/deep-learning-solution-for-netflix-prize/
9. Curating works of art –
10. More robust neural networks against adversarial examples – https://cs231n.stanford.edu/reports201…
The Keras library: https://keras.io/

A) The basic principle of the iML human-in-the-loop approach:

Andreas Holzinger 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6

B) The entry in the GI Lexikon:
https://gi.de/informatiklexikon/interactive-machine-learning-iml

C) The experimental proof-of-concept:

Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea & Vasile Palade 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

D) Outline and Survey of application possibilities:

Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell 2017. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.

Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs & Kurt Zatloukal 2017. Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv:1712.06657.

 

Digital Pathology: World’s fastest WSI scanner is now working in Graz

26.10.2017. Today, Prof. Kurt Zatloukal and his group, together with us and the digital pathology team of 3DHISTECH, our industrial partner, completed the installation of the new-generation Pannoramic P1000 scanner [0]. The world’s fastest whole-slide-image (WSI) scanner is now located in Graz. It outperforms current state-of-the-art systems by a factor of 6, which provides enormous opportunities for our machine learning/AI MAKEpatho project.

Digital Pathology and Artificial Intelligence/Machine Learning

Digital pathology [1] is not just the transformation of the classical microscopic analysis of histopathological slides by pathologists into a digital visualization. Digital pathology is an innovation that will dramatically change medical workflows in the coming years. At its center is Whole Slide Imaging (WSI), but the true added value will result from the combination of heterogeneous data sources, which will generate a new kind of information not available today. Much information is hidden in arbitrarily high-dimensional spaces and is not accessible to a human pathologist. Consequently, we need novel approaches from artificial intelligence (AI) and machine learning (ML) (see definition) to exploit the full possibilities of digital pathology [2]. The goal is to gain knowledge from this information, which is not yet available and not exploited to date [3].

Chances of Digital Pathology

Major changes enabled by digital pathology include the improvement of medical decision making through new human-AI interfaces, new chances for education and research, and the globalization of diagnostic services. The latter allows bringing top-level expertise to essentially any patient in the world via the Internet/Web. This will also generate totally new business models for worldwide diagnostic services. Furthermore, by using AI/ML we can make new information from images accessible and quantifiable (e.g. through geometrical approaches and machine learning) which is not available in current diagnostics. Another effect will be that digital pathology and machine learning will change education and training systems – an urgently needed solution to address the global shortage of medical specialists. While digitalization is called Pathology 2.0 [4], we envision a Pathology 4.0 – and here explainable AI will become important.

3DHISTECH

3DHISTECH Ltd. (the name is derived from “Three-Dimensional Histological Technologies”) is a leading company that has been developing high-performance hardware and software products for digital pathology since 1996. As the first European manufacturer in this field, 3DHISTECH is one of the market leaders worldwide, with more than 1,500 systems sold. Founded by Dr. Bela MOLNAR from Semmelweis University Budapest, they are pioneers in this field and develop high-speed digital slide scanners that create high-quality bright-field and fluorescent digital slides, digital histology software, and tissue microarray machinery. 3DHISTECH’s aim is to fully digitalize the traditional pathology workflow so that it can adapt to the ever-growing demands of healthcare today. Furthermore, educational programs are organized to help pathologists learn and master these new technologies more easily.

[0] P1000 https://www.youtube.com/watch?v=WuCXkTpy5js (1:41 min)

[1]  Shaimaa Al‐Janabi, Andre Huisman & Paul J. Van Diest (2012). Digital pathology: current status and future perspectives. Histopathology, 61, (1), 1-9, doi:10.1111/j.1365-2559.2011.03814.x.

[2] Anant Madabhushi & George Lee (2016). Image analysis and machine learning in digital pathology: Challenges and opportunities. Medical Image Analysis, 33, 170-175, doi:10.1016/j.media.2016.06.037.

[3] Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs & Kurt Zatloukal (2017). Machine Learning and Knowledge Extraction in Digital Pathology needs an integrative approach. In: Springer Lecture Notes in Artificial Intelligence Volume LNAI 10344. Cham: Springer International, pp. 13-50, doi:10.1007/978-3-319-69775-8_2 [pdf-preprint available here]

[4] Nikolas Stathonikos, Mitko Veta, André Huisman & Paul J Van Diest (2013). Going fully digital: Perspective of a Dutch academic pathology lab. Journal of pathology informatics, 4. doi:10.4103/2153-3539.114206

[5] Francesca Demichelis, Mattia Barbareschi, P Dalla Palma & S Forti 2002. The virtual case: a new method to completely digitize cytological and histological slides. Virchows Archiv, 441, (2), 159-164. https://doi.org/10.1007/s00428-001-0561-1

[6] Marcus Bloice, Klaus-Martin Simonic & Andreas Holzinger 2013. On the usage of health records for the design of virtual patients: a systematic review. BMC Medical Informatics and Decision Making, 13, (1), 103, doi:10.1186/1472-6947-13-103.

[7] https://www.3dhistech.com

[8]  https://pathologie.medunigraz.at/forschung/forschungslabor-fuer-experimentelle-zellforschung-und-onkologie

Mini Glossary:

Digital Pathology = is not only the conversion of histopathological slides into digital images (WSI) that can be uploaded to a computer for storage and viewing, but a completely new medical work procedure (from Pathology 2.0 to Pathology 4.0) – the basis is Virtual Microscopy.

Explainability = motivated by the lacking transparency of black-box approaches, which do not foster trust in and acceptance of AI generally and ML specifically among end-users. Rising legal and privacy concerns, e.g. the new European General Data Protection Regulation (which comes into effect in May 2018), will make black-box approaches difficult to use, because they often are not able to explain why a decision has been made (see explainable AI).

Explainable AI = rising legal and ethical concerns make it mandatory to enable a human to understand why a machine decision has been made, i.e. to make machine decisions re-traceable and to explain why a decision has been made [see Wikipedia on Explainable Artificial Intelligence] (Note: this does not mean that it is always necessary to explain everything – but to be able to explain it if necessary, e.g. for general understanding, for teaching, for learning, for research – or in court!)

Machine Aided Pathology = is the management, discovery and extraction of knowledge from a virtual case, driven by advances of digital pathology supported by feature detection and classification algorithms.

Virtual Case = the set of all histopathological slides of a case together with meta data from the macro pathological diagnosis [5]

Virtual microscopy = not only the viewing of slides on a computer screen over a network; it can be enhanced by supporting the pathologist with the equivalent optical resolution and magnification of a microscope whilst changing the magnification; machine learning and AI methods can help to extract new knowledge out of the image data

Virtual Patient = has very different definitions (see [6]); we define it as a model of electronic records (images, reports, *omics) for studying e.g. diseases.

WSI = Whole Slide Image, a.k.a. virtual slide, is a digitized histopathology glass slide that has been created on a slide scanner and represents a high-resolution volume data cube; it can be handled via a virtual microscope, and most of all, methods from artificial intelligence generally, and interactive machine learning specifically, together with methods from topological data analysis, can make information accessible to a human pathologist which would otherwise remain hidden.

WSS = Whole Slide Scanner is the machinery for acquiring a WSI, including the hardware and the software for creating it.

Transparency & Trust in Machine Learning: Making AI interpretable and explainable

A huge motivation for us in continuing to study interactive Machine Learning (iML) [1] – with a human in the loop [2] (see our project page) is that modern deep learning models are often considered to be “black-boxes” [3]. A further drawback is that such models have no explicit declarative knowledge representation, hence have difficulty in generating the required explanatory structures – which considerably limits the achievement of their full potential [4].

Even if we understand the mathematical theories behind a machine model, it is still complicated to get insight into its internal workings; hence black-box models lack transparency, and consequently we raise the question: “Can we trust our results?”

In fact: “Can we explain how and why a result was achieved?” A classic example is the question “Which objects are similar?”, but an even more interesting question would be to answer “Why are those objects similar?”

We believe that there is growing demand for machine learning approaches which are not only well performing, but also transparent, interpretable and trustworthy. We are currently working on methods and models to re-enact the machine decision-making process, and to reproduce and comprehend the learning and knowledge extraction process. This is important because, for decision support, it is necessary to understand the causality of learned representations [5], [6]. If human intelligence is complemented by machine learning, and in at least some cases even overruled, humans must still be able to understand – and most of all to interactively influence – the machine decision process. This needs context awareness and sensemaking to close the gap between human thinking and machine “thinking”.

A huge motivation for this approach are rising legal and privacy requirements: the new European General Data Protection Regulation (GDPR, accompanied by standards such as ISO/IEC 27001), entering into force on May 25, 2018, will make black-box approaches difficult to use in business, because they are not able to explain why a decision has been made.

This will stimulate research in this area, with the goal of making decisions interpretable, comprehensible and reproducible. Taking health informatics as an example, this is not only useful for machine learning research and clinical decision making, but at the same time a big asset for the training of medical students.

The General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) is a regulation by which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU). It also addresses the export of personal data outside of the EU (this will affect data-centric projects between the EU and e.g. the US). The GDPR aims primarily to give citizens and residents back control over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU. The GDPR replaces the Data Protection Directive (95/46/EC) of 1995. The regulation was adopted on 27 April 2016 and becomes enforceable from 25 May 2018 after a two-year transition period; unlike a directive, it does not require national governments to pass any enabling legislation and is thus directly binding – which affects practically all data-driven businesses, and particularly machine learning and AI technology. Note that the “right to be forgotten” [7] established by the European Court of Justice has been extended to become a “right of erasure”; it will no longer be sufficient to remove a person’s data from search results when requested to do so – data controllers must now erase that data. However, if the data is encrypted, it may be sufficient to destroy the encryption keys rather than go through the prolonged process of ensuring that the data has been fully erased [8].
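The last point is sometimes called “crypto-shredding”; here is a minimal sketch, assuming the Python cryptography library’s Fernet primitive and an illustrative in-memory key store (an illustration of the mechanism, not a compliance recipe):

from cryptography.fernet import Fernet

# Each person's records are encrypted under their own key, kept in a
# key store separate from the data store. "Erasure" then reduces to
# destroying the key: every copy or backup of the ciphertext becomes
# permanently unreadable, without scrubbing the data stores themselves.
key_store = {}   # person_id -> encryption key (illustrative)
data_store = {}  # person_id -> encrypted record

def store_record(person_id: str, record: bytes):
    key_store.setdefault(person_id, Fernet.generate_key())
    data_store[person_id] = Fernet(key_store[person_id]).encrypt(record)

def read_record(person_id: str) -> bytes:
    return Fernet(key_store[person_id]).decrypt(data_store[person_id])

def erase_person(person_id: str):
    del key_store[person_id]  # ciphertext in data_store is now useless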

A recent and very interesting discussion with Daniel S. WELD (Artificial Intelligence, Crowdsourcing, Information Extraction) on Explainable AI can be found here:

The interview in essence brings out that most machine learning models are very complicated: deep neural networks operate incredibly quickly, considering thousands of possibilities in seconds before making decisions, and Dan Weld points out: “The human brain simply can’t keep up” – pointing at the example when AlphaGo made an unexpected decision: it is not possible to understand why the algorithm made exactly that choice. Of course this may not be critical in a game – no one gets hurt; however, deploying intelligent machines that we cannot understand could set a dangerous precedent, e.g. in our domain: health informatics. According to Dan Weld, understanding and trusting machines is “the key problem to solve” in AI safety, security, data protection and privacy, and it is urgently necessary. He further explains: “Since machine learning is nowadays at the core of pretty much every AI success story, it’s really important for us to be able to understand what is it that the machine learned.” In case a machine learning system is confronted with a “known unknown,” it may recognize its uncertainty with the situation in the given context. However, when it encounters an unknown unknown, it won’t even recognize that this is an uncertain situation: the system will have extremely high confidence that its result is correct – but it still will be wrong. Dan pointed to the example of classifiers “trained on data that had some regularity in it that’s not reflected in the real world” – which is a problem of having little or even no available training data (see [1]) – the problem of “unknown unknowns” is definitely underestimated in the traditional AI community. Governments and businesses can’t afford to deploy highly intelligent AI systems that make unexpected, harmful decisions, especially if these systems are in safety-critical environments.
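This overconfidence is easy to demonstrate with a toy example (illustrative, using scikit-learn): a classifier trained on two Gaussian blobs answers with near-total confidence about a point unlike anything it has ever seen, instead of signalling uncertainty.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Train on two well-separated Gaussian blobs ...
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),
               rng.normal(+2, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

# ... then query a point far outside the training distribution:
ood = np.array([[80.0, 80.0]])
print(clf.predict_proba(ood))  # ~[[0.0, 1.0]] -- maximal confidence, no meaning

The linear decision function grows without bound away from the decision boundary, so the predicted probability saturates at 1.0 precisely where the model knows least – an “unknown unknown”.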

 

References:

[1]          Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.

[2]          Holzinger, A., Plass, M., Holzinger, K., Crisan, G. C., Pintea, C.-M. & Palade, V. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

[3]          Lipton, Z. C. 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490.

[4]          Bologna, G. & Hayashi, Y. 2017. Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. Journal of Artificial Intelligence and Soft Computing Research, 7, (4), 265-286, doi:10.1515/jaiscr-2017-0019.

[5]          Pearl, J. 2009. Causality: Models, Reasoning, and Inference (2nd Edition), Cambridge, Cambridge University Press.

[6]          Gershman, S. J., Horvitz, E. J. & Tenenbaum, J. B. 2015. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349, (6245), 273-278, doi:10.1126/science.aac6076.

[7]          Malle, B., Kieseberg, P., Schrittwieser, S. & Holzinger, A. 2016. Privacy Aware Machine Learning and the “Right to be Forgotten”. ERCIM News (special theme: machine learning), 107, (3), 22-23.

[8]          Kingston, J. 2017. Using artificial intelligence to support compliance with the general data protection regulation. Artificial Intelligence and Law, doi:10.1007/s10506-017-9206-9.

Links:

https://de.wikipedia.org/wiki/Datenschutz-Grundverordnung

https://en.wikipedia.org/wiki/General_Data_Protection_Regulation

https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:31995L0046

2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY

https://googleblog.blogspot.com/2015/07/neon-prescription-or-rather-new.html

https://sites.google.com/site/nips2016interpretml

 

Interpretable Machine Learning Workshop

Andrew G Wilson, Jason Yosinski, Patrice Simard, Rich Caruana, William Herlands

https://nips.cc/Conferences/2017/Schedule?showEvent=8744

 

Journal “Artificial Intelligence and Law”

https://link.springer.com/journal/volumesAndIssues/10506

ISSN: 0924-8463 (Print) 1572-8382 (Online)

Mini Glossary:

AI = Artificial Intelligence, today often used interchangeably with Machine Learning (ML) – the two are highly interrelated but not the same

Causality = extends from Greek philosophy to today’s neuropsychology; assumptions about the nature of causality may be shown to be functions of a previous event preceding a later event. A relevant reading on this is by Judea Pearl (2000 and 2009)

Explainability = upcoming fundamental topic within recent AI; answering e.g. why a decision has been made

Etiology = in medicine (many) factors coming together to cause an illness (see causality)

Interpretability = there is no formal technical definition yet, but it is considered as a prerequisite for trust

Transparency = opposite of the opacity of black-box approaches; connotes the ability to understand how a model works (that does not mean that it must always be understood, but that – in case of necessity – it can be re-enacted)

Transfer Learning to overcome catastrophic forgetting

In machine learning, deep convolutional networks (deep learning) are very successful for solving particular problems [1] – at least when many training samples are available. Great success has been made recently, e.g. in automatic game playing by AlphaGo (see the nature news here). As fantastic as these approaches are, it should be mentioned that deep learning still has serious limitations: these are black-box approaches, where it is currently difficult to explain how and why a result was achieved – see our recent work on a glass-box approach [2] – consequently lacking transparency and trust, issues which will become increasingly important in our data-centric world; they demand huge computational resources and need enormous amounts of training data (often thousands, sometimes even millions of training samples); and standard approaches are poor at representing uncertainties, which calls for Bayesian deep learning approaches [3]. Most of all, deep learning approaches are affected by an effect we call “catastrophic forgetting”.

What is catastrophic forgetting? One of the critical steps towards general artificial intelligence (human-level AI) is the ability to continually learn – similarly as we humans do: ongoing and continuous, being capable of learning a new task B without forgetting how to perform an old task A. This seemingly trivial characteristic is not trivial for machine learning generally and deep learning specifically: McCloskey & Cohen showed already in 1989 [4] that neural networks have difficulties with this kind of transfer learning and coined the term catastrophic forgetting – and transfer learning is one attempt to overcome it. Transfer learning is the ability to retain learned tasks permanently. Humans can do that very well – even very little children (refer to the work of Alison Gopnik, e.g. [5], and at the bottom of this post). The synaptic consolidation in human brains may enable continual learning by reducing the plasticity of synapses that are vital to previously learned tasks ([6]; see also a recent work on intelligent synapses for multi-task and transfer learning [7]). Based on these ideas, the Google DeepMind group around Demis Hassabis implemented a cool algorithm that performs a similar operation in artificial neural networks, by constraining important parameters to stay close to their old values, in their work on overcoming catastrophic forgetting in neural networks (arXiv:1612.00796), [8]. As we know, a deep neural network consists of multiple layers of linear projections followed by element-wise non-linearities. Learning a task consists basically of adjusting the set of weights and biases θ of the linear projections; consequently, many configurations of θ will result in the same performance, which is relevant for the so-called elastic weight consolidation (EWC): over-parametrization makes it likely that there is a solution for task B, θ*_B, that is close to the previously found solution for task A, θ*_A. While learning task B, EWC therefore protects the performance in task A by constraining the parameters to stay in a region of low error for task A centered around θ*_A. This constraint is implemented as a quadratic penalty and can therefore be imagined as a mechanical spring anchoring the parameters to the previous solution, hence the name elastic. In order to justify this choice of constraint, and to define which weights are most important for a task, it is useful to consider neural network training from a probabilistic perspective. From this point of view, optimizing the parameters is tantamount to finding their most probable values given some data D. Interestingly, this can be computed as the conditional probability p(θ|D) from the prior probability of the parameters p(θ) and the probability of the data p(D|θ) by: log p(θ|D) = log p(D|θ) + log p(θ) − log p(D).
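To make the quadratic constraint concrete, here is a minimal sketch of the EWC penalty in PyTorch (the function names and the diagonal-Fisher shortcut are illustrative assumptions; see arXiv:1612.00796 for the actual method):

import torch

def ewc_penalty(model, fisher, theta_A, lam=1000.0):
    # Quadratic "elastic spring" anchoring the parameters to the task-A
    # solution theta*_A; each parameter is weighted by its (diagonal)
    # Fisher information, i.e. its importance for task A.
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - theta_A[name]) ** 2).sum()
    return (lam / 2.0) * penalty

# Usage sketch while training on task B:
#   loss = task_B_loss(model(x), y) + ewc_penalty(model, fisher_A, theta_A)
# where theta_A is a frozen copy of the weights after task A, and fisher_A
# is estimated from the squared gradients of the task-A log-likelihood.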

Here, a lot of open and important research avenues exist and constantly emerge, challenging the international machine learning community (see e.g. [9]). The most interesting is what we don’t know yet – the breakthrough machine learning approaches that have not yet been invented.

Andrew Y. Ng, at the NIPS 2016 conference in Barcelona, held a tutorial where he emphasized the importance of transfer learning research and stated that “transfer learning will be the next driver of machine learning success” …
There is a wonderful post by Sebastian Ruder, see: https://knowledgeofficer.com/knowledge/46-transfer-learning-machine-learning-s-next-frontier

[1]          Yann LeCun, Yoshua Bengio & Geoffrey Hinton 2015. Deep learning. Nature, 521, (7553), 436-444, doi:10.1038/nature14539.

[2]          Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea & Vasile Palade 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

[3]          Alex Kendall & Yarin Gal 2017. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? arXiv:1703.04977.

[4]          Michael Mccloskey & Neal J Cohen 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In: Bower, G. H. (ed.) The Psychology of Learning and Motivation, Volume 24. San Diego (CA): Academic Press, pp. 109-164.

[5]          Alison Gopnik, Clark Glymour, David M Sobel, Laura E Schulz, Tamar Kushnir & David Danks 2004. A theory of causal learning in children: causal maps and Bayes nets. Psychological review, 111, (1), 3.

[6]          Stefano Fusi, Patrick J Drew & Larry F Abbott 2005. Cascade models of synaptically stored memories. Neuron, 45, (4), 599-611. doi:10.1016/j.neuron.2005.02.001

[7]          Friedemann Zenke, Ben Poole & Surya Ganguli. Continual Learning Through Synaptic Intelligence.  International Conference on Machine Learning, 2017. 3987-3995. PMLR 70:3987-3995

[8]          James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran & Raia Hadsell 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114, (13), 3521-3526, doi:10.1073/pnas.1611835114.

[9]          Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville & Yoshua Bengio 2015. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv:1312.6211v3.

Machine learning researchers should watch the videos by Alison Gopnik, e.g.:

Federated Collaborative Machine Learning

The Google Research Group [1] is always doing awesome stuff; the most recent one is on Federated Learning [2], which enables e.g. smartphones (of course any computational device, and maybe later all internet-of-things devices and intelligent sensors in either smart hospitals or smart factories, etc.) to collaboratively learn a shared representation model whilst keeping all the training data on the local devices, decoupling the ability to do machine learning from the need to store the data centrally in the cloud. This goes beyond the use of local models that make predictions on mobile devices (like the Mobile Vision API and On-Device Smart Reply) by bringing model training to the device as well – which is great. The problem with standard approaches is that you always need centralized training data – either on your USB stick, as medical doctors do, or in a sophisticated centralized data center.

The basic idea is that the mobile device downloads the current model and subsequently improves it by learning from data on the respective device, and then summarizes the changes as a small, focused update. The remarkable detail is that only this update to the model is sent to the cloud (yes, privacy, data protection, safety and security are challenged here, see e.g. [3] – but this is much easier to handle with this small update than it would be with the raw data – think for example of patient data), where it is immediately averaged with other device updates to improve the shared model. All the training data remains on the local devices, and no individual updates are stored in the cloud.
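A minimal sketch of such a federated averaging round in PyTorch could look as follows (illustrative assumptions throughout: the synchronous loop, names and hyperparameters; Google’s production system additionally handles secure aggregation, stragglers, compression, etc.):

import copy
import torch
import torch.nn.functional as F

def federated_averaging(global_model, client_loaders, rounds=10,
                        lr=0.01, local_epochs=1):
    # Each round: every client downloads the current shared model,
    # improves it on its own local data, and sends back only the
    # resulting weights; the server averages them into the new model.
    # (Assumes all model parameters are floating point.)
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:            # one loader per device
            local = copy.deepcopy(global_model)  # "download" the model
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for _ in range(local_epochs):
                for x, y in loader:
                    opt.zero_grad()
                    F.cross_entropy(local(x), y).backward()
                    opt.step()
            client_states.append(local.state_dict())  # only the update leaves
        avg = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
               for k in client_states[0]}
        global_model.load_state_dict(avg)  # raw data never left the devices
    return global_model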

The Google group recently solved a lot of algorithmic and technical challenges. In a typical machine learning system, an optimization algorithm, e.g. Stochastic Gradient Descent (SGD) [4], runs on a large dataset partitioned homogeneously across servers in the cloud. Such highly iterative algorithms require low-latency, high-throughput connections to the training data. In the Federated Learning setting, however, the data is distributed across millions of devices in a highly uneven fashion. In addition, these devices have significantly higher-latency, lower-throughput connections and are only intermittently available for training.

This calls for a lot of further investigation into interactive Machine Learning (iML), bringing the human into the loop, i.e. making use of human cognitive abilities. This can be of particular interest for solving problems where learning algorithms suffer from insufficient training samples (rare events, single events), where we deal with complex data and/or computationally hard problems. For example, “doctors-in-the-loop” can help with their long-term experience and heuristic knowledge to solve problems which otherwise would remain NP-hard [5, 6]. A further step is with many humans in the loop: such collaborative interactive Machine Learning (ciML) can help in many application areas and domains, e.g. in health informatics (smart hospital) or in industrial applications (smart factory) [7].

Read the original article, posted on April 6, 2017, here:
https://research.googleblog.com/2017/04/federated-learning-collaborative.html

[1] https://research.googleblog.com

[2] NIPS Workshop on Private Multi-Party Machine Learning, Barcelona, December, 9, 2016, https://pmpml.github.io/PMPML16/

[3] Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., Mcmahan, H. B., Patel, S., Ramage, D., Segal, A. & Seth, K. 2016. Practical Secure Aggregation for Federated Learning on User-Held Data. arXiv preprint arXiv:1611.04482.

[4] Bottou, L. 2010. Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT’2010. Springer, pp. 177-186. doi:10.1007/978-3-7908-2604-3_16  (N.B.: 836 citations as of 08.04.2017)

[5] Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6

[6] Holzinger, A., Plass, M., Holzinger, K., Crisan, G., Pintea, C. & Palade, V. 2016. Towards interactive Machine Learning (iML): Applying Ant Colony Algorithms to solve the Traveling Salesman Problem with the Human-in-the-Loop approach. In: Springer Lecture Notes in Computer Science LNCS 9817. Heidelberg, Berlin, New York: Springer, pp. 81-95, [pdf]

[7] Robert, S., Büttner, S., Röcker, C. & Holzinger, A. 2016. Reasoning Under Uncertainty: Towards Collaborative Interactive Machine Learning. In: Machine Learning for Health Informatics: Lecture Notes in Artifical Intelligence LNAI 9605. Springer, pp. 357-376, [pdf]

Image source: https://research.googleblog.com/2017/04/federated-learning-collaborative.html

 

3.2 Trillion USD on health per year

The U.S. spends more on health care than any other country

Dieleman et al. (2016) just published (on Dec 27, 2016) a paper [1] which uses data from the National Health Expenditure Accounts to estimate US spending on personal health care and public health, broken down by condition, age and sex group, and type of care. The paper was covered in the Washington Post by Carolyn Y. Johnson on December 27 at 11:00 AM.

Here is a link to the original paper:

[1] Dieleman JL, Baral R, Birger M, Bui AL, Bulchis A, Chapin A, Hamavid H, Horst C, Johnson EK, Joseph J, Lavado R, Lomsadze L, Reynolds A, Squires E, Campbell M, DeCenso B, Dicker D, Flaxman AD, Gabert R, Highfill T, Naghavi M, Nightingale N, Templin T, Tobias MI, Vos T, Murray CJL. US Spending on Personal Health Care and Public Health, 1996-2013. JAMA. 2016;316(24):2627-2646. doi:10.1001/jama.2016.16885

Here is the article (shortened) from the Washington Post:

American health-care spending, measured in trillions of dollars, boggles the mind. Last year, we spent $3.2 trillion on health care – a number so large that it can be difficult to grasp its scale.

A new study published in the Journal of the American Medical Association reveals what patients and their insurers are spending that money on, breaking it down by 155 diseases, patient age and category — such as pharmaceuticals or hospitalizations. Among its findings:

  • Chronic — and often preventable — diseases are a huge driver of personal health spending. The three most expensive diseases in 2013: diabetes ($101 billion), the most common form of heart disease ($88 billion) and back and neck pain ($88 billion).
  • Yearly spending increases aren’t uniform: Over a nearly two-decade period, diabetes and low back and neck pain grew at more than 6 percent per year — much faster than overall spending. Meanwhile, heart disease spending grew at 0.2 percent.
  • Medical spending increases with age — with the exception of newborns. About 38 percent of personal health spending in 2013 was for people over age 65. Annual spending for girls between 1 and 4 years old averaged $2,000 per person; older women 70 to 74 years old averaged $16,000.

Here is the link to the original article:
https://www.washingtonpost.com/news/wonk/wp/2016/12/27/the-u-s-spends-more-on-health-care-than-any-other-country-heres-what-were-buying/?tid=pm_business_pop&utm_term=.71fc517cdc11