Welcome to our XXAI ICML 2020 workshop: Extending Explainable AI Beyond Deep Models and Classifiers
On June 6, 2019, 10:00-16:00, we are organizing a small Symposium on AI/Machine Learning for Digital Pathology in Graz, Austria
First Austrian IFIP Forum “AI and Future Society: The Third Wave of AI”, taking place Wednesday, May 8th – Thursday, May 9th, 2019, in the Festsaal of the BMVIT, Radetzkystraße 2, 1030 Vienna
We just had our keynote by Randy GOEBEL from the Alberta Machine Intelligence Institute (Amii), working on enhancing understanding and innovation in artificial intelligence:
You can see his slides here, with friendly permission of Randy (pdf, 2,680 kB):
Here you can read a preprint of our joint paper from the explainable AI session (pdf, 835 kB):
GOEBEL et al (2018) Explainable-AI-the-new-42
Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger 2018. Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21.
Here is the link to our session homepage:
Amii is part of the Pan-Canadian AI Strategy and conducts leading-edge research to push the bounds of academic knowledge, forging business collaborations both locally and internationally to create innovative, adaptive solutions to the toughest problems in Artificial Intelligence/Machine Learning facing Alberta and the world.
Here some snapshots:
R.G. (Randy) Goebel is Professor of Computing Science at the University of Alberta in Edmonton, Alberta, Canada, and concurrently holds the positions of Associate Vice President Research and Associate Vice President Academic. He is also co-founder and principal investigator of the Alberta Innovates Centre for Machine Learning. He holds B.Sc., M.Sc., and Ph.D. degrees in computer science from the Universities of Regina, Alberta, and British Columbia. He has held faculty appointments at the University of Waterloo, the University of Tokyo, Multimedia University (Malaysia), and Hokkaido University, has worked at a variety of research institutes around the world, including DFKI (Germany), NICTA (Australia), and NII (Tokyo), and was most recently Chief Scientist at Alberta Innovates Technology Futures. His research interests include applications of machine learning to systems biology, visualization, and web mining, as well as work on natural language processing, web semantics, and belief revision. He has experience working on industrial research projects in scheduling, optimization, and natural language technology applications.
Here is Randy’s homepage at the University of Alberta:
The University of Alberta in Edmonton hosts approximately 39,000 students from all around the world, is among the top five universities in Canada, and, together with Toronto and Montreal, is THE center for Artificial Intelligence and Machine Learning in Canada.
Prof. Dr. Klaus-Robert MÜLLER from the TU Berlin was our keynote speaker on Tuesday, August 28th, 2018, during our CD-MAKE conference at the University of Hamburg, see:
In his talk, Klaus-Robert emphasized the “right to explanation” under the new European Union General Data Protection Regulation, but also showed some difficulties, challenges, and future research directions in the area now called explainable AI. Here you find his presentation slides, with friendly permission from Klaus-Robert MÜLLER:
Here some snapshots from the keynote:
Thanks to Klaus-Robert for his presentation!
The second volume of the Springer LNCS proceedings of the IFIP TC 5, TC 8/WG 8.4, 8.9, TC 12/WG 12.9 International Cross-Domain Conference CD-MAKE Machine Learning & Knowledge Extraction has just appeared, see:
>> Here are the preprints of our papers:
 Andreas Holzinger, Peter Kieseberg, Edgar Weippl & A Min Tjoa 2018. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 1-8, doi:10.1007/978-3-319-99740-7_1.
HolzingerEtAl2018_from-machine-learning-to-explainable-AI-pre (pdf, 198 kB)
 Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger 2018. Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21.
GOEBEL et al (2018) Explainable-AI-the-new-42 (pdf, 835 kB)
Here is the link to the bookmetrix page:
>> From the preface:
Each paper was assigned to at least three reviewers from our international scientific committee; after review, meta-review, and editorial decision, 25 papers were carefully selected for this volume out of 75 submissions in total, resulting in an acceptance rate of 33%.
The International Cross-Domain Conference for Machine Learning and Knowledge Extraction, CD-MAKE, is a joint effort of IFIP TC 5, TC 12, IFIP WG 8.4, IFIP WG 8.9 and IFIP WG 12.9 and is held in conjunction with the International Conference on Availability, Reliability and Security (ARES).
IFIP – the International Federation for Information Processing – is the leading multinational, non-governmental, apolitical organization in information and communications technologies and computer sciences. It is recognized by the United Nations (UN) and was established in 1960 under the auspices of UNESCO as an outcome of the first World Computer Congress, held in Paris in 1959. IFIP is incorporated in Austria by decree of the Austrian Foreign Ministry (September 20, 1996, GZ 1055.170/120-I.2/96), granting IFIP the legal status of a non-governmental international organization under the Austrian Law on the Granting of Privileges to Non-Governmental International Organizations (Federal Law Gazette 1992/174).
IFIP brings together more than 3,500 scientists without boundaries from both academia and industry, organized in more than 100 Working Groups (WGs) and 13 Technical Committees (TCs). CD stands for “cross-domain” and means the integration and appraisal of different fields and application domains to provide an atmosphere to foster different perspectives and opinions.
The conference fosters an integrative machine learning approach, taking into account the importance of data science and visualization for the algorithmic pipeline with a strong emphasis on privacy, data protection, safety, and security.
It is dedicated to offering an international platform for novel ideas and a fresh look at methodologies to put crazy ideas into business for the benefit of humans. Serendipity is a desired effect and should lead to the cross-fertilization of methodologies and the transfer of algorithmic developments.
The acronym MAKE stands for “MAchine Learning and Knowledge Extraction,” a field that, while quite old in its fundamentals, has just recently begun to thrive based on both the novel developments in the algorithmic area and the availability of big data and vast computing resources at a comparatively low price.
Machine learning studies algorithms that can learn from data to gain knowledge from experience and to generate decisions and predictions. A grand goal is to understand intelligence for the design and development of algorithms that work autonomously (ideally without a human-in-the-loop) and can improve their learning behavior over time. The challenge is to discover relevant structural and/or temporal patterns (“knowledge”) in data, which are often hidden in arbitrarily high-dimensional spaces, and thus simply not accessible to humans. Machine learning as a branch of artificial intelligence is currently undergoing a kind of Cambrian explosion and is the fastest growing field in computer science today.
There are many application domains, e.g., smart health, smart factory (Industry 4.0), etc., with many use cases from our daily lives, e.g., recommender systems, speech recognition, autonomous driving, etc. The grand challenges lie in sense-making, in context-understanding, and in decision-making under uncertainty.
Our real world is full of uncertainties, and probabilistic inference has had an enormous influence on artificial intelligence generally and statistical learning specifically. Inverse probability allows us to infer unknowns, to learn from data, and to make predictions to support decision-making. Whether in social networks, recommender systems, health, or Industry 4.0 applications, the increasingly complex data sets require efficient, useful, and usable solutions for knowledge discovery and knowledge extraction.
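The “inverse probability” mentioned in the preface is Bayes’ rule. A minimal numerical sketch (all probability values below are invented for illustration, not taken from any real study) shows how it infers an unknown hypothesis from observed data:

```python
# Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)
# Inferring an unknown hypothesis H (e.g., "patient has the disease")
# from observed data D (e.g., "test is positive"). Illustrative values only.

p_h = 0.01               # prior: P(disease) in the population
p_d_given_h = 0.95       # likelihood: P(positive test | disease)
p_d_given_not_h = 0.05   # false-positive rate: P(positive test | no disease)

# Total probability of observing a positive test
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Posterior: the "inverse" probability of the disease given a positive test
p_h_given_d = p_d_given_h * p_h / p_d

print(round(p_h_given_d, 3))  # the posterior is far below the likelihood
```

Even with a 95% accurate test, the posterior stays low because the prior is small; this is exactly the kind of uncertainty-aware reasoning the preface credits to probabilistic inference.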
The IEEE DISA 2018 World Symposium on Digital Intelligence for Systems and Machines was organized by the TU Kosice:
Here you can download my keynote presentation (see title and abstract below):
a) 4 slides per page (pdf, 5,280 kB):
b) 1 slide per page (pdf, 8,198 kB):
c) and here is the link to the paper (IEEE Xplore):
From Machine Learning to Explainable AI
d) and here is the link to the video recording:
Title: Explainable AI: Augmenting Human Intelligence with Artificial Intelligence and Vice Versa
Abstract: Explainable AI is not a new field. Rather, the problem of explainability is as old as AI itself. While the rule-based approaches of early AI were comprehensible “glass-box” approaches, at least in narrow domains, their weakness lay in dealing with the uncertainties of the real world. The introduction of probabilistic learning methods has made AI increasingly successful. Meanwhile, deep learning approaches even exceed human performance in particular tasks. However, such approaches are becoming increasingly opaque, and even if we understand the underlying mathematical principles of such models, they still lack explicit declarative knowledge. For example, words are mapped to high-dimensional vectors, making them unintelligible to humans. What we need in the future are context-adaptive procedures, i.e., systems that construct contextual explanatory models for classes of real-world phenomena.
Maybe one step is linking probabilistic learning methods with large knowledge representations (ontologies), thus allowing us to understand how a machine decision has been reached, making results re-traceable, explainable, and comprehensible on demand: the goal of explainable AI.
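To make the contrast in the abstract concrete, here is a minimal illustrative sketch (the rule, threshold, and vector values are invented for this example, not taken from any of the papers above): an early-AI-style rule whose decision path is fully traceable next to a learned embedding whose coordinates carry no human-readable meaning.

```python
# "Glass-box" early AI: every decision can be traced back to an explicit,
# declarative rule. (Hypothetical rule for illustration only.)
def rule_based_diagnosis(temp_c: float, cough: bool) -> str:
    if temp_c >= 38.0 and cough:
        return "flu-like (rule fired: fever AND cough)"
    return "no flu-like symptoms (rule not fired)"

# A learned word embedding, by contrast, is just a point in a
# high-dimensional space: accurate for prediction, but the individual
# coordinates have no human-interpretable meaning. (Invented values.)
embedding_of_flu = [0.12, -0.87, 0.33, 1.05, -0.44]

print(rule_based_diagnosis(38.5, True))
```

The rule is weak under real-world uncertainty but fully explainable; the vector is the opposite, which is exactly the trade-off the abstract describes.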
The CD-MAKE 2017, held in the context of the ARES conference series, was a great success in beautiful Reggio di Calabria.
A crazy 5,700-person event is over: NIPS 2016 in Barcelona. Registration was on Sunday, December 4th; on Monday, December 5th, the tutorials were traditionally presented, concluded by the first keynote talk, given by Yann LeCun (now director of Facebook AI Research), and completed by the official opening and the first poster presentation. On Tuesday, December 6th, after starting with a keynote by Drew Purves (Google DeepMind), parallel tracks on clustering and graphical models took place, concluded by a keynote given by Saket Navlakha (The Salk Institute) and completed by parallel tracks on deep learning and machine learning theory, plus poster sessions and demonstrations. Wednesday was opened by a keynote from Kyle Cranmer (New York University) and the award talk “matrix completion has no spurious local min”, and was dominated by parallel tracks on algorithms and applications, concluded by a keynote by Marc Raibert (Boston Dynamics), who presented advances in the latest robot learning, and parallel tracks on deep learning and optimization, completed by the poster sessions with cool demonstrations. Thursday was opened by keynotes from Irina Rish (IBM) and Susan Holmes (Stanford), followed by parallel tracks on interpretable models and cognitive neuroscience, and concluded by various symposia until 21:30! Friday and Saturday were whole-day workshops; Sunday was reserved for recreation on the sand beach of Barcelona 🙂
NIPS is definitely the most exciting conference, with an amazing variety of topics and themes revolving around machine learning, with all sorts of theory and applications.