Posts

How different are Cats vs. Cells in Histopathology?

An awesome question stated in an article by Michael BEREKET and Thao NGUYEN (February 7, 2018) brings it straight to the point: Deep learning has revolutionized the field of computer vision. So why are pathologists still spending their time looking at cells through microscopes?

The most famous machine learning experiments have been done on recognizing cats (see the video by Peter Norvig) – so the question is relevant: how different are these cats from the cells in histopathology?

Machine learning, and in particular deep learning, has reached human-level performance in certain tasks, particularly in image classification. Interestingly, in the field of pathology these methods are currently not used ubiquitously. A valid question indeed is: Why do human pathologists spend so much time on visual inspection? Of course, we restrict this debate to routine tasks!

This excellent article is worth a read:
Stanford AI for healthcare: How different are cats from cells

Source of the animated gif above:
https://giphy.com/gifs/microscope-fluorescence-mitosis-2G5llPaffwvio

Yoshua Bengio emphasizes: Deep Learning needs Deep Understanding!

Yoshua BENGIO from the Canadian Institute for Advanced Research (CIFAR) emphasized during his workshop talk “Towards disentangling underlying explanatory factors” (cool title) at ICML 2018 in Stockholm that the key to success in AI/machine learning is to understand the explanatory/causal factors and mechanisms. This means generalizing beyond independent and identically distributed (i.i.d.) data – and this is crucial for our domain of medical AI, because current machine learning theories and models depend strongly on this i.i.d. assumption, whereas real-world applications (we see this in the medical domain every day!) often require learning and generalizing in areas simply not seen during training. Humans, interestingly, are able to cope in such situations, even in situations which they have never encountered before. Here is a longer talk (1:17:04) at Microsoft Research Redmond on January 22, 2018 – awesome – enjoy the talk, I warmly recommend it to all of my students!
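
To make the i.i.d. point concrete, here is a minimal sketch (my illustration, not from Bengio's talk) in Python, assuming NumPy and scikit-learn are available: a classifier is trained on one distribution and evaluated both on held-out data from the same distribution and on covariate-shifted data it has never seen.

# Minimal sketch: the i.i.d. assumption in practice. A logistic regression
# trained on one distribution is tested on (a) held-out i.i.d. data and
# (b) covariate-shifted data not seen during training. The data generator
# and the shift value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in 2D; `shift` moves both class means,
    # violating the i.i.d. assumption between training and test data.
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)              # training distribution
X_iid, y_iid = make_data(500)                  # same distribution (i.i.d.)
X_shift, y_shift = make_data(500, shift=2.0)   # distribution not seen in training

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on i.i.d. test data:  ", clf.score(X_iid, y_iid))
print("accuracy on shifted test data: ", clf.score(X_shift, y_shift))

On the i.i.d. test set the accuracy stays high; under the shift it drops sharply, which is exactly the gap between current theory and real-world (e.g. clinical) deployment that Bengio points to.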

Explainable AI Session Keynote: Randy GOEBEL

We just had our keynote by Randy GOEBEL from the Alberta Machine Intelligence Institute (Amii), which works on enhancing understanding and innovation in artificial intelligence:
https://cd-make.net/keynote-speaker-randy-goebel

You can see his slides with friendly permission of Randy here (pdf, 2,680 kB):
https://human-centered.ai/wordpress/wp-content/uploads/2018/08/Goebel.XAI_.CD-MAKE.Aug30.2018.pdf

Here you can read a preprint of the joint paper from our explainable AI session (pdf, 835 kB):
GOEBEL et al (2018) Explainable-AI-the-new-42
Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger (2018). Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21.

Here is the link to our session homepage:
https://cd-make.net/special-sessions/make-explainable-ai/

Amii is part of the Pan-Canadian AI Strategy and conducts leading-edge research to push the bounds of academic knowledge, while forging business collaborations both locally and internationally to create innovative, adaptive solutions to the toughest problems in Artificial Intelligence/Machine Learning facing Alberta and the world.

Here are some snapshots:

R.G. (Randy) Goebel is Professor of Computing Science at the University of Alberta in Edmonton, Alberta, Canada, and concurrently holds the positions of Associate Vice President Research and Associate Vice President Academic. He is also co-founder and principal investigator of the Alberta Innovates Centre for Machine Learning. He holds B.Sc., M.Sc. and Ph.D. degrees in computer science from the Universities of Regina, Alberta, and British Columbia, and has held faculty appointments at the University of Waterloo, the University of Tokyo, Multimedia University (Malaysia), and Hokkaido University. He has worked at a variety of research institutes around the world, including DFKI (Germany), NICTA (Australia), and NII (Tokyo), and was most recently Chief Scientist at Alberta Innovates Technology Futures. His research interests include applications of machine learning to systems biology, visualization, and web mining, as well as work on natural language processing, web semantics, and belief revision. He has experience working on industrial research projects in scheduling, optimization, and natural language technology applications.

Here is Randy’s homepage at the University of Alberta:
https://www.ualberta.ca/science/about-us/contact-us/faculty-directory/randy-goebel

The University of Alberta in Edmonton hosts approximately 39,000 students from all around the world, is among the top five universities in Canada, and together with Toronto and Montreal forms THE centre of Artificial Intelligence and Machine Learning in Canada.

Google Brain says we urgently need a Research Framework around the field of interpretability

In a recent interview, Been KIM from the Google Brain team emphasizes the significance of research in explainable AI. In particular, she stresses the importance of Human-Computer Interaction (HCI) for Artificial Intelligence generally and Machine Learning specifically (see the differences between AI and ML here), and the urgent need for a research framework around the field of interpretability. Listen to episode six of season four of Talking Machines by Katherine GORMAN and Neil LAWRENCE here (start at approx. 26:00): https://www.thetalkingmachines.com/episodes/explainability-and-inexplicable

Been KIM is a research scientist on the Google Brain team and is interested in designing machine learning methods that make sense to humans. Her current focus is building interpretability methods for already-trained models (e.g., high-performance neural networks). In particular, she believes that the language of explanations should include higher-level, human-friendly concepts. Been gave a tutorial on explainable AI at ICML 2017, and recently the group published the paper: Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman & Finale Doshi-Velez (2018). How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. arXiv:1802.00682.
https://people.csail.mit.edu/beenkim
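
As an aside, one very simple example of a post-hoc interpretability method for an already-trained model is an input-gradient ("saliency") map. The following Python sketch (my illustration, assuming PyTorch and torchvision are available; it is not Been Kim's concept-based approach) shows the idea on a placeholder image and an untrained ResNet-18:

# Minimal sketch of a post-hoc explanation for an already-trained model:
# the gradient of the top class score with respect to the input pixels
# gives a rough per-pixel importance ("saliency") map.
# The model weights and the input image here are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # in practice: a trained model
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

logits = model(image)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()

# Collapse the colour channels: maximum absolute gradient per pixel.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)

Concept-based approaches such as her group's TCAV go beyond such pixel-level attributions towards explanations phrased in higher-level, human-friendly concepts.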

What is the difference between AI/ML/DL?