Call for Papers open until December 15, 2020
In the last few years, the interest in deriving complex AI models capable of achieving unprecedented levels of performance has been progressively displaced by a growing concern with alternative design factors aimed at making such models more usable in practice. Indeed, in many applications complex AI models are of limited or even no practical utility. The reason lies in the fact that AI models are often designed with performance as their sole target, leaving aside other important aspects such as privacy awareness, transparency, confidence, fairness or accountability. Remarkably, all these aspects have gained great momentum in the Artificial Intelligence community, giving rise to dedicated sections on these concepts in prospective studies and reports delivered at the highest international levels (see e.g. “Ethics Guidelines for Trustworthy Artificial Intelligence”, by the High-Level Expert Group on AI, April 2019).
In this context, Explainable AI (XAI) refers to those Artificial Intelligence techniques aimed at explaining, to a given audience, the details of or reasons why a model produces its output [1]. To this end, XAI borrows concepts from philosophy, cognitive science and social psychology to yield a spectrum of methodological approaches that can provide explainable decisions for users without a strong background in Artificial Intelligence. XAI therefore aims to bridge the gap between the complexity of the model to be explained and the cognitive skills of the audience for which explainability is sought. Interdisciplinary XAI methods have so far embraced assorted elements from multiple disciplines, including signal processing, adversarial learning, visual analytics and cognitive modeling, to mention a few. Although reported XAI advances have risen sharply in recent times, there is global consensus on the need for further studies on the explainability of ML models. A major focus has been placed on XAI developments that involve the human in the loop and thereby become human-centric; these include the automated generation of counterfactuals, neuro-symbolic reasoning, and fuzzy rule-based systems, among others.
A step beyond XAI is Responsible AI (RAI), which denotes a set of principles to be met when deploying AI-based systems in practical scenarios: Fairness, Explainability, Human-Centricity, Privacy Awareness, Accountability, Safety and Security. RAI thus extends XAI by ensuring that other critical modeling aspects are taken into account when deploying AI-based systems in practice. This spans not only algorithmic proposals but also new procedures devoted to ensuring responsibility in the application and usage of AI models, including tools for accountability and data governance, methods to assess and explain the impact of decisions made by AI models, and techniques to detect, counteract or mitigate the effect of bias on a model’s output. Only by carefully accounting for all these aspects will humans fully trust and welcome the arrival of processes and systems endowed with AI-based functionalities (e.g. Robotics, Machine Learning, Optimization and Reasoning).
This special issue seeks original works and fresh studies presenting research findings on XAI and RAI.
Please consult the journal page for more details: