Enhancing trust in automated 3D point cloud data interpretation through explainable counterfactuals

Our most recent paper introduces a novel framework for enhancing explainability in the interpretation of point cloud data by fusing expert knowledge with counterfactual reasoning. Point cloud datasets – derived predominantly from LiDAR and 3D scanning technologies – are complex and voluminous, so interpretability remains a significant challenge, particularly in smart cities, smart agriculture, and smart forestry. This research posits that integrating expert knowledge with counterfactual explanations – hypothetical scenarios illustrating how altering input data points could lead to a different outcome – can significantly reduce the opacity of deep learning models processing point cloud data. The proposed optimization-driven framework uses expert-informed ad-hoc perturbation techniques to generate meaningful counterfactual scenarios for state-of-the-art deep learning architectures. Read the paper here: https://doi.org/10.1016/j.inffus.2025.103032
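To make the perturbation idea concrete, here is a minimal sketch of gradient-based counterfactual generation on a point cloud. It assumes a generic PyTorch setup: the toy classifier TinyPointNet, the expert mask rule, and the proximity-weighted loss are hypothetical stand-ins for illustration, not the implementation from the paper.

```python
# Minimal sketch: gradient-based counterfactual perturbation of a point cloud.
# Everything here (toy classifier, expert mask, loss) is an illustrative
# assumption, not the code from the paper.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Toy PointNet-style classifier: per-point MLP + global max pooling."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Linear(64, num_classes)

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # pts: (N, 3)
        feats = self.point_mlp(pts)             # (N, 64) per-point features
        global_feat = feats.max(dim=0).values   # (64,) permutation-invariant readout
        return self.head(global_feat)           # (num_classes,) logits

def counterfactual(model, pts, target_class, expert_mask, steps=200, lr=0.01, lam=1.0):
    """Optimize a perturbation of the expert-selected points that pushes the
    model towards target_class while staying close to the original input.

    expert_mask: boolean (N,) tensor marking points an expert deems plausible to move.
    lam weights the proximity penalty that keeps the counterfactual near the input.
    """
    delta = torch.zeros_like(pts, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    mask = expert_mask.float().unsqueeze(1)     # (N, 1) gates the perturbation
    for _ in range(steps):
        opt.zero_grad()
        logits = model(pts + delta * mask)
        loss = nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target_class])
        ) + lam * (delta * mask).pow(2).sum()
        loss.backward()
        opt.step()
    cf = (pts + delta * mask).detach()
    return cf, model(cf).argmax().item()

torch.manual_seed(0)
model = TinyPointNet()
pts = torch.randn(128, 3)          # a random stand-in "scan" of 128 points
mask = pts[:, 2] > 0.0             # hypothetical expert rule: only upper points may move
cf, pred = counterfactual(model, pts, target_class=1, expert_mask=mask)
print("counterfactual prediction:", pred)
```

The boolean expert mask restricts which points may be moved at all – one simple way to encode expert knowledge so that the optimizer only explores perturbations a domain expert would consider plausible.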


Graph Neural Networks with the Human-in-the-Loop for Trustworthy AI

In our Scientific Reports (Nature Portfolio) paper we introduce a novel framework – our final deliverable to the FeatureCloud project – that integrates federated learning with Graph Neural Networks (GNNs) for disease classification, incorporating Human-in-the-Loop methodologies. The framework employs collaborative voting on subgraphs of a Protein-Protein Interaction (PPI) network within a federated, ensemble-based deep learning setting. This methodological approach marks a significant stride towards explainable and privacy-aware Artificial Intelligence and contributes to the progression of personalized digital medicine in a responsible and transparent manner. Read the article here: https://doi.org/10.1038/s41598-024-72748-7
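As a rough illustration of the voting mechanism, the sketch below combines graph-level predictions from small GNN classifiers, each responsible for its own subgraph, via a weighted majority vote. The dense one-layer GCN, the random stand-in subgraphs, and the vote weights are assumptions made for this sketch, not the published FeatureCloud code.

```python
# Minimal sketch of ensemble voting over per-subgraph GNN classifiers,
# loosely in the spirit of the paper. All components are illustrative.
import torch
import torch.nn as nn

class DenseGCN(nn.Module):
    """One-layer GCN on a dense adjacency matrix with a graph-level readout."""
    def __init__(self, in_dim=8, hidden=16, num_classes=2):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, adj):                        # x: (N, in_dim), adj: (N, N)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.lin((adj @ x) / deg))     # mean aggregation over neighbours
        return self.head(h.mean(dim=0))               # graph-level logits

def ensemble_vote(models, graphs, weights=None):
    """Weighted majority vote over per-subgraph predictions.

    weights could encode human-in-the-loop feedback, e.g. down-weighting a
    subgraph an expert flags as unreliable (an assumption for this sketch).
    """
    if weights is None:
        weights = [1.0] * len(models)
    votes = torch.zeros(2)
    for m, (x, adj), w in zip(models, graphs, weights):
        votes[m(x, adj).argmax()] += w
    return votes.argmax().item()

torch.manual_seed(0)
# Three clients, each holding its own random stand-in "PPI subgraph".
graphs = []
for _ in range(3):
    n = 10
    adj = (torch.rand(n, n) > 0.7).float()
    adj = ((adj + adj.T) > 0).float()                 # symmetrise the adjacency
    graphs.append((torch.randn(n, 8), adj))
models = [DenseGCN() for _ in graphs]
print("ensemble prediction:", ensemble_vote(models, graphs, weights=[1.0, 1.0, 0.5]))
```

In a federated setting each model would be trained locally on a client's private subgraph and only predictions (or model updates) would be shared; the human-in-the-loop can enter through the vote weights, for instance by down-weighting subgraphs an expert flags as unreliable.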


Explainable AI Methods – A brief overview (open access)

Open access paper available – free to the international research community.

FWF Explainable AI project P 32554 in the News

This basic research project will contribute novel results, algorithms and tools to the international AI and machine learning community.

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI was published on 27 January 2021 in the journal Information Fusion (Q1, IF = 13.669, rank 2/137 in the field of Computer Science, Artificial Intelligence):

https://doi.org/10.1016/j.inffus.2021.01.008

We are grateful for the valuable comments of the anonymous reviewers. Parts of this work have received funding from the EU project FeatureCloud. The FeatureCloud project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826078. This publication reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. Parts of this work have been funded by the Austrian Science Fund (FWF), Project P-32554 “explainable Artificial Intelligence”.

Measuring the Quality of Explanations just exceeded 5k downloads

In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al., Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019), combined with concepts adapted from the widely accepted System Usability Scale.
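For illustration, here is a minimal scoring sketch. It assumes ten five-point Likert items normalized by the maximum total of 50, in the spirit of SUS-style scoring; please consult the paper for the exact SCS statements and scoring procedure.

```python
# Minimal sketch of SUS-style scoring for a ten-item, five-point Likert
# questionnaire. The normalisation (sum / 50) follows our reading of the
# System Causability Scale; see the paper for the exact procedure.
def scs_score(ratings):
    """Return a quality-of-explanation score in [0.2, 1.0].

    ratings: ten integers in 1..5, one per SCS statement
    (1 = strongly disagree, 5 = strongly agree).
    """
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings) / 50.0

# Example: a mostly positive assessment of an explanation interface.
print(scs_score([5, 4, 4, 5, 3, 4, 5, 4, 4, 5]))  # -> 0.86
```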

Kandinsky Challenge: IQ-Test for Machines is online!

The Human-Centered AI Lab (HCAI) invites the international machine learning community to a challenge on explainable AI – a step towards IQ tests for machines.

Lecture Notes in Artificial Intelligence LNAI 9605 just exceeded 80,000 downloads

The Springer volume Lecture Notes in Artificial Intelligence LNAI 9605, Machine Learning for Health Informatics, has exceeded 80,000 downloads.