Posts

Enhancing trust in automated 3D point cloud data interpretation through explainable counterfactuals

Our most recent paper introduces a novel framework for augmenting explainability in the interpretation of point cloud data by fusing expert knowledge with counterfactual reasoning. Given the complexity and sheer volume of point cloud datasets, derived predominantly from LiDAR and 3D scanning technologies, achieving interpretability remains a significant challenge, particularly in smart cities, smart agriculture, and smart forestry. This research posits that integrating expert knowledge with counterfactual explanations (speculative scenarios illustrating how altering input data points could lead to different outcomes) can significantly reduce the opacity of deep learning models processing point cloud data. The proposed optimization-driven framework uses expert-informed ad hoc perturbation techniques to generate meaningful counterfactual scenarios for state-of-the-art deep learning architectures. Read the paper here: https://doi.org/10.1016/j.inffus.2025.103032 and get an overview by listening to the accompanying podcast 🙂
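To make the idea concrete, here is a minimal sketch of optimization-based counterfactual generation for a point cloud classifier: a perturbation is optimized until the model's prediction flips, while an expert-defined mask restricts which points may move and a norm penalty keeps the change small. The toy model, loss weights, and masking rule are illustrative assumptions, not the implementation from the paper.

```python
# Minimal sketch of expert-guided counterfactual generation for point clouds.
# NOTE: the toy model, loss weights, and the expert mask are illustrative
# assumptions, not the method from the paper.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Toy PointNet-style classifier: per-point MLP followed by max pooling."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # pts: (B, N, 3)
        feats = self.point_mlp(pts)          # (B, N, 128) per-point features
        pooled = feats.max(dim=1).values     # (B, 128) global descriptor
        return self.head(pooled)             # (B, num_classes) logits

def counterfactual(model, cloud, target_class, expert_mask,
                   steps=200, lr=0.01, lam=0.1):
    """Optimize a perturbation delta so the model predicts `target_class`,
    keeping delta small and restricted to expert-approved points."""
    delta = torch.zeros_like(cloud, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.CrossEntropyLoss()
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        perturbed = cloud + delta * expert_mask  # expert knowledge: only masked points may move
        logits = model(perturbed.unsqueeze(0))
        loss = ce(logits, target) + lam * delta.norm()  # flip prediction, stay minimal
        loss.backward()
        opt.step()
    return (cloud + delta * expert_mask).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyPointNet()
    cloud = torch.randn(1024, 3)                   # one point cloud with 1024 points
    mask = (cloud[:, 2] > 0).float().unsqueeze(1)  # hypothetical expert rule: only upper points may move
    cf = counterfactual(model, cloud, target_class=2, expert_mask=mask)
    print("points moved:", int(((cf - cloud).norm(dim=1) > 1e-4).sum()))
```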


Measuring the Quality of Explanations just exceeded 5k downloads

In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al., Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019), combined with concepts adapted from a widely accepted usability scale.
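For readers curious about the mechanics, below is a minimal sketch of scoring a ten-item, five-point Likert questionnaire in the spirit of the System Causability Scale. The normalization to [0, 1] is an assumption for illustration; please consult the paper for the exact instrument and items.

```python
# Minimal sketch of an SCS-style score: ten statements, each rated on a
# five-point Likert scale, aggregated to a single value in [0, 1].
# The normalization below is an illustrative assumption, not the paper's exact scheme.
SCS_ITEMS = 10   # ten statements, rated 1 (strongly disagree) .. 5 (strongly agree)
MAX_RATING = 5

def scs_score(ratings: list[int]) -> float:
    """Normalize the summed ratings to [0, 1]; higher means better causability."""
    assert len(ratings) == SCS_ITEMS
    assert all(1 <= r <= MAX_RATING for r in ratings)
    return sum(ratings) / (SCS_ITEMS * MAX_RATING)

# Example: one evaluator's ratings for an explanation interface
print(round(scs_score([4, 5, 3, 4, 4, 5, 2, 4, 3, 4]), 2))  # -> 0.76
```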

ICML Workshop on interpretable machine learning, July 18, 2020

Welcome to our XXAI ICML 2020 workshop: Extending Explainable AI Beyond Deep Models and Classifiers

Call for Papers “explainable AI 2020”, University College Dublin, August 24-28 (closed)

Accepted papers will be published in the Springer Nature Lecture Notes in Computer Science volume "Cross Domain Conference for Machine Learning and Knowledge Extraction" (CD-MAKE 2020)
