Enhancing trust in automated 3D point cloud data interpretation through explainable counterfactuals

Our most recent paper introduces a novel framework for enhancing explainability in the interpretation of point cloud data by fusing expert knowledge with counterfactual reasoning. Given the complexity and sheer volume of point cloud datasets, derived predominantly from LiDAR and 3D scanning technologies, achieving interpretability remains a significant challenge, particularly in smart cities, smart agriculture, and smart forestry. This research posits that integrating expert knowledge with counterfactual explanations – speculative scenarios illustrating how altering input data points could lead to different outcomes – can significantly reduce the opacity of deep learning models processing point cloud data. The proposed optimization-driven framework uses expert-informed ad hoc perturbation techniques to generate meaningful counterfactual scenarios for state-of-the-art deep learning architectures. Read the paper here: https://doi.org/10.1016/j.inffus.2025.103032 and get an overview by listening to this podcast 🙂
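
As a rough illustration of the idea (not the paper's actual implementation), the sketch below perturbs a point cloud within an expert-defined step size until a stand-in classifier's decision flips; the decision function, class semantics and thresholds are purely illustrative assumptions:

```python
# Illustrative sketch only (not the paper's implementation): generate a
# counterfactual for a point cloud classifier by greedily perturbing points
# within an expert-defined step size until the predicted class flips.
import numpy as np

def decision_score(points: np.ndarray) -> float:
    """Stand-in decision function of a trained point cloud classifier
    (positive -> e.g. 'tree', negative -> e.g. 'shrub'); purely illustrative."""
    return points[:, 2].mean() - 2.0          # mean height above a 2 m threshold

def counterfactual(points: np.ndarray, step: float = 0.05,
                   max_iter: int = 1000, seed: int = 0):
    """Greedy, expert-bounded perturbation: keep random nudges that move the
    decision score towards the boundary; stop once the predicted class flips."""
    rng = np.random.default_rng(seed)
    original_sign = np.sign(decision_score(points))
    perturbed = points.copy()
    for _ in range(max_iter):
        candidate = perturbed + rng.uniform(-step, step, size=points.shape)
        if abs(decision_score(candidate)) < abs(decision_score(perturbed)):
            perturbed = candidate             # candidate is closer to the boundary
        if np.sign(decision_score(perturbed)) != original_sign:
            return perturbed                  # counterfactual found
    return None                               # no flip within the perturbation budget

cloud = np.random.default_rng(1).normal(loc=[0.0, 0.0, 1.9], scale=0.3, size=(500, 3))
cf = counterfactual(cloud)
print("prediction flipped:", cf is not None)
```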


Graph Neural Networks with the Human-in-the-Loop > Trustworthy AI

In our Nature Scientific Reports paper we introduce a novel framework – our final deliverable to the FeatureCloud project – that integrates federated learning with Graph Neural Networks (GNNs) to classify diseases, incorporating Human-in-the-Loop methodologies. The framework employs collaborative voting mechanisms on subgraphs within a Protein-Protein Interaction (PPI) network, situated in a federated, ensemble-based deep learning context. This methodological approach marks a significant stride in the development of explainable and privacy-aware Artificial Intelligence, contributing to the progression of personalized digital medicine in a responsible and transparent manner. Read the article here: https://doi.org/10.1038/s41598-024-72748-7 and get an overview by listening to this podcast.
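
As a toy illustration of the collaborative voting idea (not the paper's code), the sketch below assumes each federated client has already trained a local GNN on its PPI subgraph and shares only class predictions for the evaluation samples, which a coordinator fuses by majority vote; all names and values are illustrative:

```python
import numpy as np

# Each client sends only integer class predictions for shared evaluation samples;
# the coordinator fuses them by majority vote, so raw patient data stays local.
def majority_vote(client_predictions: np.ndarray) -> np.ndarray:
    """client_predictions: shape (n_clients, n_samples) of integer class labels.
    Returns the per-sample majority class across clients."""
    n_classes = client_predictions.max() + 1
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, client_predictions
    )                                          # shape (n_classes, n_samples)
    return votes.argmax(axis=0)

# three clients, five patients, binary disease classification (toy values)
preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0]])
print(majority_vote(preds))                    # -> [0 1 1 0 1]
```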


Explainable AI Methods – A brief overview (open access)

Open access paper available – free to the international research community

FWF Explainable AI project P 32554 in the News

This basic research project will contribute novel results, algorithms and tools to the international AI and machine learning community.

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI was published on 27 January 2021 in the journal Information Fusion (Q1, IF = 13.669, rank 2/137 in the field of Computer Science, Artificial Intelligence):

https://doi.org/10.1016/j.inffus.2021.01.008

We are grateful for the valuable comments of the anonymous reviewers. Parts of this work have received funding from the EU Project FeatureCloud. The FeatureCloud project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826078. This publication reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. Parts of this work have been funded by the Austrian Science Fund (FWF), Project: P-32554 “explainable Artificial Intelligence”.

Measuring the Quality of Explanations just exceeded 5k downloads

In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al., Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019), combined with concepts adapted from a widely accepted usability scale.
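
As a minimal, illustrative sketch (assuming ten Likert items rated 1–5 that are aggregated into a single normalized score; the exact item wording and scoring procedure are defined in the paper itself), an SCS-style score could be computed like this:

```python
# Sketch of an SCS-style score under the stated assumptions: ten Likert items
# (1-5) averaged and normalized to the range [0, 1].
def scs_score(ratings: list[int]) -> float:
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten Likert ratings between 1 and 5")
    return sum(ratings) / (5 * len(ratings))

print(scs_score([5, 4, 4, 5, 3, 4, 5, 4, 4, 5]))   # -> 0.86
```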

Information Fusion on rank 2 out of 136 in the field of Artificial Intelligence > open call on xAI

The journal Information Fusion has reached rank 2 out of 136 journals in the field of Artificial Intelligence – congratulations to Francisco Herrera. This is excellent news for our special issue on rAI, which goes beyond xAI towards accountability, privacy, safety and security.

Explainability vs. Causability of Artificial Intelligence in Medicine

In our recent, highly cited paper we define the notion of causability, which differs from explainability in that causability is a property of a person, while explainability is a property of a system!

The need for deep understanding of algorithms

There are many different machine learning algorithms for any given problem, but which one should be chosen to solve a practical task? Comparing learning algorithms is very difficult and depends strongly on the quality of the data!
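
As a simple illustration of how such a comparison is typically carried out in practice (not taken from the text above), the sketch below evaluates a few standard classifiers on the same dataset with cross-validation; the dataset and model choices are arbitrary, and the resulting ranking says little about how the algorithms behave on other, differently distributed data:

```python
# Illustrative comparison of several learning algorithms via cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM (RBF kernel)": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```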