xAI 2021 – Submissions due April 15, 2021

Digital Conference, August 16–20, 2021

CD-MAKE 2021 Workshop supported by IFIP and Springer/Nature
Co-organized by the Fraunhofer Heinrich Hertz Institute, Berlin
and the xAI-Lab,
Alberta Machine Intelligence Institute, Edmonton
in the context of the 5th CD-MAKE conference and the
16th International Conference on Availability, Reliability and Security ARES 2021

see the published papers from 2020: https://www.springer.com/de/book/9783030573201

For all submission details and deadlines please see the main conference webpage: https://cd-make.net

This page is current as of 01.04.2021, 11:00 CEST (UTC+2), 07:00 MDT (UTC-6), 00:00 AEDT (UTC+11)
For inquiries please contact andreas.holzinger AT human-centered.ai

Keynote Speaker of 2021

After welcoming Bernhard SCHÖLKOPF in 2016, Neil LAWRENCE and Marta MILO in 2017, Klaus-Robert MÜLLER in 2018, Wojciech SAMEK in 2019, and Gregoire MONTAVON in 2020, we are very proud to welcome Cynthia RUDIN in 2021.

Cynthia RUDIN is Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University, Durham, NC, US.
Title of her talk: Almost Matching Exactly

Abstract: I will present a matching framework for causal inference in the potential outcomes setting called Almost Matching Exactly. This framework  has several important elements: (1) Its algorithms create matched groups that are interpretable. The goal is to match treatment and control units on as many covariates as possible, or “almost exactly.” (2) Its algorithms create accurate estimates of individual treatment effects. This is because we use machine learning on a separate training set to learn which features are important for matching. The key constraint is that units are always matched on a set of covariates that together can predict the outcome well. (3) Our methods are fast and scalable. In summary, these methods rival black box machine learning methods in their estimation accuracy but have the benefit of being interpretable and easier to troubleshoot. Our lab website is here: 
https://almost-matching-exactly.github.io
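The matching idea sketched in the abstract can be illustrated with a small, purely hypothetical example (this is not the Almost-Matching-Exactly lab's actual code; the function name and its greedy drop-one-covariate loop are simplifying assumptions for illustration): each treated unit is first matched to controls exactly on all covariates, and if no control matches, the least important covariate is dropped and the match is retried.

```python
import numpy as np

def almost_exact_match(X_treat, X_ctrl, y_ctrl, importance):
    """Greedy illustrative sketch: match each treated unit to controls
    exactly on all covariates; if no control matches, drop the least
    important covariate and retry. Returns the mean control outcome of
    each treated unit's matched group (None if no match is found)."""
    order = np.argsort(importance)           # covariate indices, least important first
    estimates = []
    for x in X_treat:
        use = list(range(X_treat.shape[1]))  # start by matching on all covariates
        estimate = None
        for drop in [None] + list(order):
            if drop is not None:
                use.remove(drop)             # relax: drop one more covariate
            if not use:
                break                        # nothing left to match on
            mask = np.all(X_ctrl[:, use] == x[use], axis=1)
            if mask.any():                   # matched group found on this subset
                estimate = float(y_ctrl[mask].mean())
                break
        estimates.append(estimate)
    return estimates
```

In the actual framework, covariate importance is learned on a separate training set and the matching is solved by dedicated, scalable algorithms; the greedy loop above only conveys the "match on as many covariates as possible" intuition.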

Bio: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, and directs the Prediction Analysis Lab, whose main focus is in interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo, and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named as one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics. 

Some of Cynthia's (collaborative) projects are: (1) she has developed practical code for optimal decision trees and sparse scoring systems, used for creating models for high-stakes decisions. Some of these models are used to manage treatment and monitoring for patients in intensive care units of hospitals. (2) She led the first major effort to maintain a power distribution network with machine learning (in NYC). (3) She developed algorithms for crime series detection, which allow police detectives to find patterns of housebreaks. Her code was developed with detectives in Cambridge MA, and later adopted by the NYPD. (4) She solved several well-known previously open theoretical problems about the convergence of AdaBoost and related boosting methods. (5) She is a co-lead of the Almost-Matching-Exactly lab, which develops matching methods for use in interpretable causal inference.

GOAL of this workshop:

In this cross-disciplinary workshop we aim to bring together international experts from different domains who are interested in making machine decisions transparent, interpretable, reproducible, replicable, re-traceable, re-enactive, comprehensible, and explainable, working towards ethical and responsible AI/machine learning.

SUBMISSION to this workshop:

All submissions will be peer reviewed by three members of our international scientific committee. Accepted papers will be presented at the workshop orally or as a poster and published in the IFIP CD-MAKE volume of the Springer Lecture Notes in Computer Science (LNCS); see LNCS 11015 as an example.

Additionally, there is the opportunity to submit to our thematic collection “Explainable AI in Medical Informatics and Decision Making” in Springer/Nature BMC Medical Informatics and Decision Making (MIDM), SCI impact factor 2.134:
https://human-centered.ai/special-issue-explainable-ai-medical-informatics-decision-making/

INSTRUCTIONS:

See our main conference page: https://cd-make.net

BACKGROUND on this workshop:

Explainable AI is not a new field. Indeed, the problem of explainability is as old as AI and maybe a result of AI itself. While early expert systems consisted of handcrafted knowledge, which enabled reasoning over at least a narrowly well-defined domain, such systems had no learning capabilities and were poor at handling uncertainty when (trying to) solve real-world problems. This was one reason for the bitter cold AI winter of the 1980s. The big success of current AI solutions and ML algorithms is due to the practical capabilities of statistical machine learning gained from 2010 onward. Despite this practical success, even at a “super-human” level on certain problems, their effectiveness is still limited by their inability to “explain” their decisions on demand in a human-understandable way. Even if we understand the underlying mathematical theories, it is complicated and often impossible to gain insight into the internal workings of the models, algorithms and tools and to explain why a result was achieved. However, the responsibility remains with the human expert, which affects trust in and acceptance of such systems. Future AI needs contextual adaptation, i.e. systems that help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence (human-in-the-loop; see the Obama interview).

TOPICS of this workshop:

In line with the general theme of the CD-MAKE conference, augmenting human intelligence with artificial intelligence – science is to test crazy ideas, engineering is to bring these ideas into business – we foster cross-disciplinary and interdisciplinary work in order to bring together experts from different fields (e.g. computer science, psychology, sociology, philosophy, law, business, …) who might otherwise not meet. This cross-domain integration and appraisal of different fields of science and industry shall provide an atmosphere that fosters different perspectives and opinions; it will offer a platform for novel crazy ideas and a fresh look at the methodologies to put these ideas into business.

Topics include but are not limited to (alphabetically – not prioritized):

  • Abstraction of human explanations (“How can xAI learn from human explanation strategies?”)
  • Acceptance (“How to ensure acceptance of AI/ML among end users?”)
  • Accountability and responsibility (“Who is to blame if something goes wrong?” “What legal aspects does it involve?”)
  • Action Influence Graphs (“How can they help interpretability?”)
  • Active machine learning algorithm design for interpretability purposes
  • Adversarial attack detection, explanation and defense (“How can we interpret adversarial examples?”)
  • Adaptive personal xAI systems (“To whom, when, how to provide explanations?”)
  • Adaptable explanation interfaces (“How can we adapt xAI-interfaces to the needs, demands, requirements of end-users and domain experts?”)
  • Affective computing for successful human-AI interaction  and human-robot interaction
  • Application of xAI methods (e.g. in smart health, smart farming, forest ecosystems, …)
  • Argumentation theories of explanations
  • Artificial advice givers
  • Bayesian rule lists
  • Bayesian modeling and optimization (“How to design efficient methods for learning interpretable models?”)
  • Bias and fairness in explainable AI (“How to avoid bias in machine learning applications?”)
  • Bridging the gap between humans and machines (concepts, methods, tools, …)
  • Causal learning, causal discovery, causal reasoning, causal explanations, and causal inference
  • Causality research in the sense of Judea Pearl
  • Causability research (“measuring the quality of explanations”)
  • Cognitive issues of explanation and understanding (“understanding understanding”)
  • Combination of statistical learning approaches with large knowledge repositories (ontologies, terminologies, …)
  • Combination of deep learning approaches with traditional AI approaches
  • Comparison of human intelligence vs. artificial intelligence (HCI — KDD)
  • Computational behavioural science (“How are people thinking, judging, making decisions – and explaining it?”)
  • Constraints-based explanations
  • Counterfactual explanations (“What did not happen?”, “How can we provide counterfactual explanations?”)
  • Contrastive explanation methods (CEM) as e.g. in Criminology or medical diagnosis
  • Cyber security, cyber defense and malicious use of adversarial examples
  • Data dredging explainability and causal inference
  • Decision making and decision support systems (“Is a human-like decision good enough?”)
  • Dialogue systems for enhanced human-ai interaction
  • Emotional intelligence (“Emotion AI”) and emotional UI
  • Ethical aspects of AI in general and human-AI interaction in particular
  • Evaluation criteria
  • Explanation agents and recommender systems
  • Explanatory user interfaces and Human-Computer Interaction (HCI) for explainable AI
  • Explainable reinforcement learning
  • Explainable and verifiable activity recognition
  • Explaining agent behaviour (“How to know if the agent is going to make a mistake and when?”)
  • Explaining robot behaviour (“Why did you take action x in state s?”)
  • Fairness, accountability and trust (“How to ensure trust in AI?”)
  • Frameworks, architectures, algorithms and tools to support post-hoc and ante-hoc explainability
  • Frameworks for reasoning about causality
  • Gradient based interpretability to understand data sensitivity
  • Graphical causal inference and graphical models for explanation and causality
  • Ground truth
  • Group recommender systems
  • Human-AI interaction and intelligent interfaces
  • Human-AI teaming for ensuring trustworthy AI systems
  • Human-centered AI
  • Human-in-the-loop learning approaches, methodologies, tools and systems
  • Human rights vs. robot rights
  • Implicit knowledge elicitation
  • Industrial applications of xAI, e.g. in medicine, autonomous driving, production, finance, ambient assisted living, etc.
  • Integration of deep learning approaches with grammars of graphical models
  • Interactive machine learning with a human-in-the-loop
  • Interactive machine learning with (many) humans-in-the-loop (crowd intelligence)
  • Interpretability in ranking algorithms (“How to explain ranking algorithms, e.g. patient ranking in health, in human-interpretable ways?”)
  • Interpretability in reinforcement learning
  • Interpretable representation learning (“How to make sense of data assigned to similar representations?”)
  • Kandinsky Patterns experiments and extensions
  • Legal aspects of AI/ML (“Who is to blame if an error occurs?”)
  • Metrics for evaluation of the quality of explanations
  • Misinformation in social media, ground truth evaluation and explanation
  • Model explainability, quality and provenance
  • Moral principles and moral dilemmas of current and future AI
  • Natural Language Argumentation interfaces for explanation generation
  • Natural Language generation for explanatory models
  • Non-IID learning models, algorithms, analytics, recommenders
  • Novel intelligent future user interfaces (e.g. affective mobile interfaces)
  • Novel methods, algorithms, tools, procedures for supporting explainability in the AI/ML pipeline
  • Personalized xAI
  • Philosophical approaches of explainability, theories of mind (“When is it enough explained? Do we have a degree of saturation?”)
  • Policy explanations to humans (“Why is the next step the best action to select, and how can this be explained?”)
  • Proof-of-concepts and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes
  • Privacy, surveillance, control and agency
  • Psychology of human concept learning and transfer to machine learning
  • Python for nerds (Python tricks of the trade – relevant for explainable AI)
  • Quality of explanations and how to measure quality of explanations
  • Real-World success stories of xAI
  • Rendering of reasoning processes
  • Reproducibility, replicability, retraceability, reenactivity
  • Self-explanatory agents and decision support systems
  • Similarity measures for xAI
  • Social implications of AI (“What AI impacts”), e.g. labour trends, human-human interaction, machine-machine interaction
  • Soft Decision Trees (SDT)
  • Spartanic approaches to explanation (“What is the simplest explanation?”)
  • Structural causal models (SCM)
  • Theoretical approaches of explainability (“What makes a good explanation?”)
  • Theories of explainable/interpretable models
  • Tools for model understanding (diagnostic, debugging, introspection, visualization, …)
  • Transparent reasoning
  • Trustworthy human-AI teaming under uncertainties
  • Understanding understanding
  • Understanding Markov decision processes and partially observable Markov decision processes
  • Usability of Human-AI interfaces
  • Visualizing learned representations
  • Web- and mobile-based cooperative intelligent information systems and tools

MOTIVATION for this workshop:

The success of statistical machine learning has made AI successful again, and in certain tasks AI outperforms human experts – even in complex domains such as medicine. Humans, on the other hand, are experts at multi-modal thinking and can embed new inputs almost instantly into a conceptual knowledge space shaped by experience. In many fields the aim is to build systems capable of explaining themselves and of engaging in interactive what-if questions. Using conceptual knowledge as a guiding model of reality will help to train more robust, explainable, less biased machine learning models, ideally able to learn from fewer data.

Example: One motivation is the European General Data Protection Regulation (GDPR; see also ISO/IEC 27001), which entered into force on May 25, 2018 and affects practically all machine learning and artificial intelligence applied to business. For example, it is difficult to apply black-box approaches for professional use in certain business applications, because they are not re-traceable and rarely able to explain on demand why a decision has been made.

Note: The GDPR replaces the Data Protection Directive (95/46/EC) of 1995. The regulation was adopted on 27 April 2016 and became enforceable on 25 May 2018 after a two-year transition period. Unlike a directive, it does not require national governments to pass any enabling legislation and is thus directly binding – which affects practically all data-driven businesses and particularly machine learning and AI technology.

WORKSHOP ORGANIZATION COMMITTEE (in alphabetical order):

Randy GOEBEL, University of Alberta, Edmonton, CA (workshop co-chair)
Andreas HOLZINGER, University of Alberta, Edmonton, CA, and Medical University Graz, AT  (workshop co-chair)
Peter KIESEBERG, University of Applied Sciences, St.Poelten, AT (workshop co-chair)
Wojciech SAMEK, Fraunhofer Heinrich Hertz Institute, Berlin, DE (workshop co-chair)

Please send inquiries directly to andreas.holzinger AT human-centered.ai

SCIENTIFIC COMMITTEE (in alphabetical order):

in progress, see also the main scientific committee:
https://cd-make.net/committees

Note: All committee members will also be listed in the main conference to a) be included in the EasyChair conference management system, and b) in the frontmatter of the Springer LNCS (see the pdf from last year here).

Jose Maria ALONSO, Centro Singular de Investigación en Tecnoloxías da Información, CiTiUS, University of Santiago de Compostela, ES
Christian BAUCKHAGE, Fraunhofer Institute Intelligent Analysis, IAIS, Sankt Augustin, and University of Bonn, DE
Vaishak BELLE, Belle Lab, Centre for Intelligent Systems and their Applications, School of Informatics, University of Edinburgh, UK
Frenay BENOIT, Universite de Namur, BE
Enrico BERTINI, New York University, Tandon School of Engineering, US
Tarek R. BESOLD, Alpha Health AI Lab, Trustworthy AI leader, Telefonica Barcelona, ES
Guido BOLOGNA, Computer Vision and Multimedia Lab, Université de Genève, Geneva, CH
Federico CABITZA,  Università degli Studi di Milano-Bicocca, DISCO, Milano, IT
Ajay CHANDER, Computer Science Department, Stanford University and Fujitsu Labs of America, US
David EVANS, Computer Science Department, University of Virginia, US
Aldo FAISAL, Department of Computing, Brain and Behaviour Lab, Imperial College London, UK
Bryce GOODMAN, Oxford Internet Institute and San Francisco Bay Area, CA, US
Hani HAGRAS, Computational Intelligence Centre, School of Computer Science & Electronic Engineering, University of Essex, UK
Barbara HAMMER, Machine Learning Group, Bielefeld University, DE
Pim HASELAGER, Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
Shujun LI, Cyber Security Group, University of Kent, Canterbury, UK
Brian Y. LIM, National University of Singapore, SG
Tim MILLER, School of Computing and Information Systems, The University of Melbourne, AU
Fabio MERCORIO, University of Milano-Bicocca, CRISP Research Centre, Milano, IT
Huamin QU, Human-Computer Interaction Group HKUST VIS, Hong Kong University of Science & Technology, CN
Daniele MAGAZZENI, Trusted Autonomous Systems Hub, King’s College London, UK
Stephen K. REED, Center for Research in Mathematics and Science Education, San Diego State University, US
Marco Tulio RIBEIRO, Guestrin Group, University of Washington, Seattle, WA, US
Brian RUTTENBERG, Charles River Analytics, Cambridge, MA, US
Wojciech SAMEK, Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, DE
Gerhard SCHURZ, Düsseldorf Center for Logic and Philosophy of Science, University Düsseldorf, DE
Marco SCUTARI, Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, Lugano, CH, and Department of Statistics, University of Oxford, UK
Sameer SINGH, University of California UCI, Irvine, CA, US
Alison SMITH, University of Maryland, MD, US
Mohan SRIDHARAN, University of Auckland, NZ
Simone STUMPF,  City, University London, UK
Ramya Malur SRINIVASAN, Fujitsu Labs of America, Sunnyvale, CA, US
Andrea VEDALDI, Visual Geometry Group, University of Oxford, UK
Janusz WOJTUSIAK, Machine Learning and Inference Lab, George Mason University, Fairfax, US
Jianlong ZHOU, Faculty of Engineering and Information Technology (FEIT), University of Technology Sydney, AU