“Machine Learning and Knowledge Extraction” 2nd CD-MAKE Volume just appeared
/in Conferences, HCI-KDD Events, Recent Publications/by Andreas Holzinger
The second volume of the Springer LNCS proceedings of the IFIP TC 5, TC 8/WG 8.4, 8.9, TC 12/WG 12.9 International Cross-Domain Conference CD-MAKE (Machine Learning & Knowledge Extraction) has just appeared; see:
https://link.springer.com/book/10.1007/978-3-319-99740-7
>> Here are the preprints of our papers:
[1] Andreas Holzinger, Peter Kieseberg, Edgar Weippl & A Min Tjoa 2018. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 1-8, doi:10.1007/978-3-319-99740-7_1.
HolzingerEtAl2018_from-machine-learning-to-explainable-AI-pre (pdf, 198 kB)
[2] Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger 2018. Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21.
GOEBEL et al (2018) Explainable-AI-the-new-42 (pdf, 835 kB)
Here is the link to the Bookmetrix page:
https://www.bookmetrix.com/detail/book/38a3a435-ab77-4db9-a4ad-97ce63b072b3#citations
>> From the preface:
Each paper was assigned to at least three reviewers from our international scientific committee; after review, meta-review, and editorial decision, 25 of the 75 submissions were carefully selected for this volume, resulting in an acceptance rate of 33%.
The International Cross-Domain Conference for Machine Learning and Knowledge Extraction, CD-MAKE, is a joint effort of IFIP TC 5, TC 12, IFIP WG 8.4, IFIP WG 8.9 and IFIP WG 12.9 and is held in conjunction with the International Conference on Availability, Reliability and Security (ARES).
IFIP, the International Federation for Information Processing, is the leading multinational, non-governmental, apolitical organization in information and communications technologies and computer sciences. It is recognized by the United Nations (UN) and was established in 1960 under the auspices of UNESCO as an outcome of the first World Computer Congress held in Paris in 1959. IFIP is incorporated in Austria by decree of the Austrian Foreign Ministry (September 20, 1996, GZ 1055.170/120-I.2/96), granting IFIP the legal status of a non-governmental international organization under the Austrian Law on the Granting of Privileges to Non-Governmental International Organizations (Federal Law Gazette 1992/174).
IFIP brings together more than 3,500 scientists without boundaries from both academia and industry, organized in more than 100 Working Groups (WGs) and 13 Technical Committees (TCs). CD stands for “cross-domain” and means the integration and appraisal of different fields and application domains to provide an atmosphere to foster different perspectives and opinions.
The conference fosters an integrative machine learning approach, taking into account the importance of data science and visualization for the algorithmic pipeline with a strong emphasis on privacy, data protection, safety, and security.
It is dedicated to offering an international platform for novel ideas and a fresh look at methodologies to put crazy ideas into business for the benefit of humans. Serendipity is a desired effect and should lead to the cross-fertilization of methodologies and the transfer of algorithmic developments.
The acronym MAKE stands for “MAchine Learning and Knowledge Extraction,” a field that, while quite old in its fundamentals, has just recently begun to thrive based on both the novel developments in the algorithmic area and the availability of big data and vast computing resources at a comparatively low price.
Machine learning studies algorithms that can learn from data to gain knowledge from experience and to generate decisions and predictions. A grand goal is to understand intelligence for the design and development of algorithms that work autonomously (ideally without a human-in-the-loop) and can improve their learning behavior over time. The challenge is to discover relevant structural and/or temporal patterns (“knowledge”) in data, which are often hidden in arbitrarily high-dimensional spaces, and thus simply not accessible to humans. Machine learning as a branch of artificial intelligence is currently undergoing a kind of Cambrian explosion and is the fastest growing field in computer science today.
There are many application domains, e.g., smart health, smart factory (Industry 4.0), etc., with many use cases from our daily lives, e.g., recommender systems, speech recognition, autonomous driving, etc. The grand challenges lie in sense-making, in context understanding, and in decision-making under uncertainty.
Our real world is full of uncertainties, and probabilistic inference has had an enormous influence on artificial intelligence generally and statistical learning specifically. Inverse probability allows us to infer unknowns, to learn from data, and to make predictions to support decision-making. Whether in social networks, recommender systems, health, or Industry 4.0 applications, the increasingly complex data sets require efficient, useful, and usable solutions for knowledge discovery and knowledge extraction.
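As a toy illustration of inverse probability (not taken from the proceedings; the numbers below are invented), Bayes' rule lets us reason backwards from an observed effect to an unobserved cause:

```python
# Minimal illustration of inverse probability (Bayes' rule) with made-up numbers:
# infer P(disease | positive test) from the prior and the test characteristics.

prior_disease = 0.01        # P(disease) -- hypothetical prevalence
sensitivity = 0.95          # P(positive | disease)
false_positive_rate = 0.05  # P(positive | no disease)

# Total probability of observing a positive test
p_positive = sensitivity * prior_disease + false_positive_rate * (1 - prior_disease)

# Bayes' rule: posterior probability of disease given a positive test
posterior = sensitivity * prior_disease / p_positive
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.161
```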
We cordially thank all members of the committee, reviewers, authors, supporters and friends! See you in Hamburg:
Image taken by Andreas Holzinger
IEEE DISA 2018 in Kosice
/in Conferences, Lectures/by Andreas Holzinger
The IEEE DISA 2018 World Symposium on Digital Intelligence for Systems and Machines was organized by TU Kosice:
Here you can download my keynote presentation (see title and abstract below):
a) 4 Slides per page (pdf, 5,280 kB):
HOLZINGER-Kosice-ex-AI-DISA-2018-30Minutes-4×4
b) 1 slide per page (pdf, 8,198 kB):
HOLZINGER-Kosice-ex-AI-DISA-2018-30Minutes
c) and here is the link to the paper (IEEE Xplore):
From Machine Learning to Explainable AI
d) and here is the link to the video recording:
https://archive.tp.cvtisr.sk/archive.php?tag=disa2018##videoplayer
Title: Explainable AI: Augmenting Human Intelligence with Artificial Intelligence and vice versa
Abstract: Explainable AI is not a new field. Rather, the problem of explainability is as old as AI itself. While the rule-based approaches of early AI were comprehensible "glass-box" approaches, at least in narrow domains, their weakness lay in dealing with the uncertainties of the real world. The introduction of probabilistic learning methods has made AI increasingly successful. Meanwhile, deep learning approaches even exceed human performance in particular tasks. However, such approaches are becoming increasingly opaque, and even if we understand the underlying mathematical principles of such models, they still lack explicit declarative knowledge. For example, words are mapped to high-dimensional vectors, making them unintelligible to humans. What we need in the future are context-adaptive procedures, i.e. systems that construct contextual explanatory models for classes of real-world phenomena.
One possible step is to link probabilistic learning methods with large knowledge representations (ontologies), thus allowing us to understand how a machine decision has been reached and making results re-traceable, explainable, and comprehensible on demand: the goal of explainable AI.
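To illustrate the point about opacity made in the abstract, here is a minimal sketch with made-up vectors (no particular embedding model assumed): a word is represented only as a long list of floating-point numbers, so a human cannot read off why the model treats two words as similar.

```python
import numpy as np

# Hypothetical 300-dimensional word embeddings (random here, purely for illustration);
# in a real model these vectors would come from a trained embedding layer.
rng = np.random.default_rng(0)
embedding = {word: rng.normal(size=300) for word in ["patient", "doctor", "tumour"]}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The raw numbers carry no human-readable meaning ...
print(embedding["patient"][:5])
# ... yet the model's notion of "similarity" is just geometry in this space.
print(cosine(embedding["patient"], embedding["doctor"]))
```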
Federated Machine Learning – Privacy by Design won
/in HCI-KDD Events, Projects, Science News/by Andreas Holzinger
Federated machine learning – privacy by design: EU project granted!
Good news from Brussels: our EU RIA project application 826078 FeatureCloud, with a total volume of EUR 4,646,000, has just been granted. The project was submitted to the H2020-SC1-FA-DTS-2018-2020 call “Trusted digital solutions and Cybersecurity in Health and Care”. The project is led by TU Munich, and we are excited to work in a super cool project consortium together with our partners for the next 60 months. The project’s ground-breaking novel cloud-AI infrastructure only exchanges learned representations (the feature parameters theta θ, hence the name “feature cloud”), which are anonymous by default (no hassle with “real medical data”, hence no ethical issues). Collectively, our highly interdisciplinary consortium, spanning AI and machine learning to medicine, covers all aspects of the value chain: assessment of cyber risks, legal considerations and international policies, and the development of state-of-the-art federated machine learning technology coupled with blockchain technology and encompassing AI-ethics research. FeatureCloud’s goals are challenging and bold, obviously, but achievable, paving the way for a socially agreeable big-data era for the benefit of future medicine. Congratulations to the great project consortium!
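The underlying idea of exchanging only learned parameters, never the raw data, can be sketched as plain federated averaging. This is a generic illustration under simplified assumptions (linear regression, invented data), not the FeatureCloud architecture:

```python
import numpy as np

# Generic federated-averaging sketch: each site trains locally, and only the
# parameter vector theta leaves the site -- the raw data never does.

def local_update(theta, X, y, lr=0.1, epochs=5):
    """A few steps of local linear-regression gradient descent on one site's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

rng = np.random.default_rng(42)
true_theta = np.array([1.5, -2.0])

# Three hypothetical sites, each holding its own private data set
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_theta + 0.1 * rng.normal(size=50)
    sites.append((X, y))

theta_global = np.zeros(2)
for _ in range(20):
    # Each site refines the current global parameters on its own data ...
    local_thetas = [local_update(theta_global.copy(), X, y) for X, y in sites]
    # ... and the coordinator only ever sees (and averages) the parameters.
    theta_global = np.mean(local_thetas, axis=0)

print(theta_global)  # should approach [1.5, -2.0]
```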
Investigating Human Priors for Playing Video Games
/in Recent Publications/by Andreas Holzinger
The group around Tom GRIFFITHS *) from the Cognitive Science Lab at Berkeley recently asked in their paper (Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths & Alexei A. Efros 2018. Investigating Human Priors for Playing Video Games. arXiv:1802.10217): “What makes humans so good at solving seemingly complex video games?”
(Spoiler: the short answer is that we don’t know, but we can gradually improve our understanding of this topic.)
The authors did cool work investigating the role of human priors for solving video games. On the basis of a specific game, they conducted a series of ablation studies to quantify the importance of various priors for human performance. For this purpose they modified the video game environment to systematically mask different types of visual information that could be used by humans as priors. The authors found that the removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Their results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play.
Read the original paper here:
https://arxiv.org/abs/1802.10217
Or at least glance over it via the Arxiv Sanity Preserver by Andrej KARPATHY:
https://www.arxiv-sanity.com/search?q=+Investigating+Human+Priors+for+Playing+Video+Games
Videos and the game manipulations are available here:
https://rach0012.github.io/humanRL_website
*) Tom Griffiths is Professor of Psychology and Cognitive Science and is interested in developing mathematical models of higher-level cognition and in understanding the formal principles that underlie the human ability to solve the computational problems we face in everyday life. His current focus is on inductive problems, such as probabilistic reasoning, learning causal relationships, acquiring and using language, and inferring the structure of categories. He tries to analyze these aspects of human cognition by comparing human behavior to optimal or “rational” solutions to the underlying computational problems. For inductive problems, this usually means exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition.
See the homepage of Tom here:
https://cocosci.berkeley.edu/tom
Judea Pearl on explainable-AI: teach machines cause and effect
/in Science News/by Andreas Holzinger
To build truly intelligent machines, teach them cause and effect, emphasizes Judea PEARL in a recent Quanta Magazine article (May 15, 2018) by Kevin HARTNETT. Judea Pearl won the Turing Award (“the Nobel Prize of computer science”) in 2011 and has just published his newest book, “The Book of Why: The New Science of Cause and Effect”, in which he argues that AI has been handicapped by an incomplete understanding of what intelligence really is. Causal reasoning is a cornerstone of explainable AI!
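As a toy illustration of the difference between seeing and doing (invented variables, not an example from Pearl’s book): in the simulation below a hidden common cause makes X and Y correlated, yet intervening on X (the do-operator) has no effect on Y.

```python
import numpy as np

# Toy structural causal model (hypothetical): a confounder Z drives both X and Y,
# so X and Y are correlated even though X has no causal effect on Y.
rng = np.random.default_rng(1)
n = 100_000

def simulate(do_x=None):
    z = rng.normal(size=n)                                      # hidden confounder
    x = z + rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 2 * z + rng.normal(size=n)                              # Y depends on Z only
    return x, y

# Observational world: X and Y look clearly correlated ...
x, y = simulate()
print("corr(X, Y) =", round(np.corrcoef(x, y)[0, 1], 2))

# ... but intervening on X (do-operator) leaves Y unchanged.
_, y0 = simulate(do_x=0.0)
_, y1 = simulate(do_x=1.0)
print("E[Y | do(X=0)] =", round(y0.mean(), 2), " E[Y | do(X=1)] =", round(y1.mean(), 2))
```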
Read the interesting article here:
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515
The book is also announced by the UCLA Newsroom, along with a nice interview; see:
https://newsroom.ucla.edu/stories/artificial-intelligence-pioneers-new-book-examines-the-science-of-cause-and-effect
Microsoft boosts Explainable AI
/in General/by Andreas Holzinger
Microsoft is investing in explainable AI and on June 20, 2018 acquired Bonsai, a California start-up founded by Mark HAMMOND and Keen BROWNE in 2014. Watch an excellent introduction, “Programming your way to explainable AI”, by Mark HAMMOND here:
and read the original story about the acquisition here:
https://blogs.microsoft.com/blog/2018/06/20/microsoft-to-acquire-bonsai-in-move-to-build-brains-for-autonomous-systems
“No one really knows how the most advanced algorithms do what they do. That could be a problem.” Will KNIGHT in “The Dark Secret at the Heart of AI”:
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai
The Problem with explainable-AI
/in Science News/by Andreas Holzinger
A very nice and interesting article by Rudina SESERI in the recent TechCrunch blog (read the original blog entry below): first, Rudina points out that the main problem is in the data; and yes, indeed, data should always be the first consideration. We consider it a big problem that successful ML approaches (e.g. the mentioned deep learning; our PhD students can tell you a thing or two about it 😉) greatly benefit from big data (the bigger the better) with many training sets. However, in certain domains, e.g. the health domain, we are sometimes confronted with a small number of data sets or rare events, where we suffer from insufficient training samples [1]. This calls for more research into how we can learn from little data (zero-shot learning), similar to how we humans do it: Rudina does not need to show her children 10 million samples of dogs and cats for them to safely discriminate a dog from a cat. However, what I miss in this article is something different: the word trust. Can we trust our machine learning results? [2] While we certainly do not need to explain everything all the time, we need possibilities to make machine decisions transparent on demand and to check whether a result is plausible. Consequently, explainable AI can be very important to foster trust in machine learning specifically and artificial intelligence generally.
[1] https://link.springer.com/article/10.1007/s40708-016-0042-6
[2] https://ercim-news.ercim.eu/en112/r-i/can-we-trust-machine-learning-results-artificial-intelligence-in-safety-critical-decision-support
MIT emphasizes the importance of HCI for explainable AI
/in Science News/by Andreas Holzinger
In a joint project with the TOYOTA Research Institute, “The car can explain”, the MIT Computer Science & Artificial Intelligence Lab is working on explainable AI and emphasizes the increasing importance of the field of HCI (Human-Computer Interaction) in this regard. In particular, the group led by Lalana KAGAL is working on monitors for reasoning and explaining: a methodological tool to interpret and detect inconsistent machine behavior by imposing constraints of reasonableness. “Reasonable monitors” are implemented as two types of interfaces around their complex AI/ML frameworks: local monitors check the behavior of a specific subsystem, and non-local reasonableness monitors watch the behavior of multiple subsystems working together, i.e. neighborhoods of interconnected subsystems that share a common task. This enormously interesting monitoring consistently checks that the neighborhoods of subsystems are cooperating as expected. Insights from this project could also be valuable for the health informatics domain:
https://toyota.csail.mit.edu/node/21
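The general pattern of such reasonableness monitors can be sketched as follows. This is a generic illustration with made-up constraints and names, not the MIT/Toyota implementation:

```python
# Generic sketch of the monitor pattern described above: local monitors check
# constraints on a single subsystem's output, a non-local monitor checks
# consistency across subsystems that share a task. All names are invented.

class LocalMonitor:
    def __init__(self, name, constraint):
        self.name = name
        self.constraint = constraint  # callable: output -> bool

    def check(self, output):
        ok = self.constraint(output)
        if not ok:
            print(f"[{self.name}] unreasonable output: {output}")
        return ok

class NonLocalMonitor:
    """Watches several subsystems that share a task and flags contradictions."""
    def __init__(self, cross_constraint):
        self.cross_constraint = cross_constraint  # callable: dict of outputs -> bool

    def check(self, outputs):
        ok = self.cross_constraint(outputs)
        if not ok:
            print(f"[cross-check] subsystems disagree: {outputs}")
        return ok

# Hypothetical driving example: a perception module and a planner.
speed_monitor = LocalMonitor("planner", lambda out: 0 <= out["target_speed"] <= 130)
cross_monitor = NonLocalMonitor(
    lambda outs: not (outs["perception"]["obstacle_ahead"] and outs["planner"]["target_speed"] > 0)
)

outputs = {"perception": {"obstacle_ahead": True}, "planner": {"target_speed": 50}}
speed_monitor.check(outputs["planner"])  # passes: speed is in a reasonable range
cross_monitor.check(outputs)             # flags it: obstacle ahead, yet still moving
```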
Google Brain says Explainability is the “new deep learning”
/in Science News/by Andreas Holzinger
There is a very interesting interview in the Talking Machines*) series from May 31, 2018. Katherine GORMAN interviews Maithra RAGHU **) from the Google Brain team, who mentions that “explainability is the new deep learning” and that it is particularly important for health informatics, where it is essential to re-trace, re-enact, understand, and explain why a machine decision has been reached. This is super for us, because when I tell my students that this is important, nobody believes me; but now I can point out that it is not just me saying it, Google Brain is saying it too. Excellent.
However, the whole field needs a lot of work before we can provide usable solutions for end users in daily routine (e.g. a medical doctor); urgently needed are approaches to explainable user interfaces and, most of all, a research framework for testing explainability.
*) Talking Machines is an excellent, highly recommendable podcast series, founded by Katherine GORMAN and Ryan ADAMS in 2015 and now run by Katherine together with Neil LAWRENCE (who leads Amazon Research in Cambridge, UK).
**) Maithra RAGHU is currently a PhD student working with Jon KLEINBERG at Cornell (see https://maithraraghu.com ), where she is doing extended research with the Google Brain team; see: https://ai.google/research/teams/brain
Maithra has published some very interesting papers, e.g.: Maithra Raghu, Justin Gilmer, Jason Yosinski & Jascha Sohl-Dickstein 2017. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. Advances in Neural Information Processing Systems, pp. 6078-6087.
Also very interesting:
Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein & Surya Ganguli 2016. Exponential Expressivity in Deep Neural Networks Through Transient Chaos. Advances in Neural Information Processing Systems, pp. 3360-3368.
Google Brain says we urgently need a Research Framework around the field of interpretability
/in Science News/by Andreas Holzinger
In a recent interview, Been KIM from the Google Brain team emphasizes the significance of research in explainable AI. In particular, she emphasizes the importance of Human-Computer Interaction (HCI) for Artificial Intelligence generally and Machine Learning specifically (see the differences between AI and ML here), and the urgent need for a research framework around the field of interpretability. Listen to episode six of season four of Talking Machines with Katherine GORMAN and Neil LAWRENCE here (start at approx. 26:00): https://www.thetalkingmachines.com/episodes/explainability-and-inexplicable
Been KIM is a research scientist on the Google Brain team and is interested in designing machine learning methods that make sense to humans. Her current focus is building interpretability methods for already-trained models (e.g., high-performance neural networks). In particular, she believes that the language of explanations should include higher-level, human-friendly concepts. Been gave a tutorial on explainable AI at ICML 2017, and recently the group published the paper: Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman & Finale Doshi-Velez 2018. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. arXiv:1802.00682.
https://people.csail.mit.edu/beenkim
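To make the idea of concept-level explanations for an already-trained model slightly more concrete, here is a deliberately simplified sketch, inspired by (but not an implementation of) this line of work; the “activations” and the linear class head below are invented for illustration:

```python
import numpy as np

# Very simplified sketch of concept-based explanation for a trained model.
# Random "activations" stand in for a real hidden layer; nothing here is real data.
rng = np.random.default_rng(0)
d = 64                                         # size of the hidden layer

# Hypothetical activations of examples showing a concept vs. random examples
concept_acts = rng.normal(loc=0.5, size=(100, d))
random_acts = rng.normal(loc=0.0, size=(100, d))

# Crude concept direction: difference of the mean activations, normalized
concept_direction = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
concept_direction /= np.linalg.norm(concept_direction)

# Hypothetical linear class-score head of the trained model: score(h) = w . h
w = rng.normal(size=d)

# Sensitivity of the class score to moving activations toward the concept:
# a positive directional derivative means the concept pushes the prediction up.
sensitivity = float(w @ concept_direction)
print("class-score sensitivity to the concept:", round(sensitivity, 3))
```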