Project GRAPHINIUS – Interactive Graph Research Framework

AUGMENTOR

Augmentor is a widely used data augmentation library for machine learning and deep learning, available in Python and Julia.
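
For illustration, a minimal sketch of typical Augmentor usage in Python, following the library's documented Pipeline API; the image directory path and the chosen operations are placeholders:

import Augmentor

# Build an augmentation pipeline over a directory of images (placeholder path).
p = Augmentor.Pipeline("path/to/images")

# Chain stochastic operations; each is applied with the given probability.
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.zoom(probability=0.5, min_factor=1.1, max_factor=1.5)
p.flip_left_right(probability=0.5)

# Generate 100 augmented samples into an output sub-directory.
p.sample(100)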

KANDINSKYPatterns – Our Swiss Knife for the Study of Explainable AI

KANDINSKYPatterns, our Swiss Knife for studying explainable AI, are mathematically describable, simple, self-contained and hence controllable test data sets for the development, validation and training of explainability in artificial intelligence.
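
To make the idea of a controllable test figure concrete, here is a hypothetical Python sketch (not the official KANDINSKYPatterns generator): a few geometric objects with known shapes and colours are drawn on a square canvas, so the accompanying description is true by construction. It only assumes the Pillow imaging library; all names are illustrative.

import random
from PIL import Image, ImageDraw

SHAPES = ["circle", "square", "triangle"]
COLOURS = {"red": (220, 50, 50), "yellow": (240, 200, 30), "blue": (50, 90, 220)}

def draw_object(draw, shape, colour, x, y, s):
    box = [x - s, y - s, x + s, y + s]
    if shape == "circle":
        draw.ellipse(box, fill=colour)
    elif shape == "square":
        draw.rectangle(box, fill=colour)
    else:  # triangle
        draw.polygon([(x, y - s), (x - s, y + s), (x + s, y + s)], fill=colour)

def kandinsky_figure(n_objects=4, size=200, seed=None):
    rng = random.Random(seed)
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    description = []
    for _ in range(n_objects):
        shape = rng.choice(SHAPES)
        name, colour = rng.choice(list(COLOURS.items()))
        s = rng.randint(10, 25)
        x, y = rng.randint(s, size - s), rng.randint(s, size - s)
        draw_object(draw, shape, colour, x, y, s)
        description.append(f"a {name} {shape}")
    return img, description  # the description is the controllable ground truth

img, truth = kandinsky_figure(seed=42)
img.save("kandinsky_example.png")
print("This figure contains", ", ".join(truth))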

Project TUGROVIS – Tumor-Growth Machine Learning

EU Project FeatureCloud (Federated Machine Learning)

The project’s ground-breaking cloud-AI infrastructure exchanges only learned representations (the feature parameters θ, hence the name “FeatureCloud”), which are anonymous by default: there is no handling of real medical data and thus no ethical issues, and the data remain in the safe harbours where they are and where they belong.
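
The underlying principle can be sketched with a toy federated-averaging loop (a generic illustration under simplifying assumptions, not the FeatureCloud platform API): each site fits a model on its own data and only the parameter vector θ travels to the aggregator.

import numpy as np

def local_update(theta, X, y, lr=0.1, epochs=20):
    # One site's local training: logistic regression via gradient descent.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta = theta - lr * X.T @ (p - y) / len(y)
    return theta  # only these parameters leave the site

def federated_average(thetas, weights):
    # Aggregator: weighted average of the parameter vectors (FedAvg-style).
    return np.average(np.stack(thetas), axis=0, weights=np.asarray(weights, float))

rng = np.random.default_rng(0)
sites = []
for _ in range(3):  # three hospitals, each with private data that never leaves
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

theta = np.zeros(5)
for _ in range(10):  # communication rounds: only theta is exchanged
    local_thetas = [local_update(theta.copy(), X, y) for X, y in sites]
    theta = federated_average(local_thetas, [len(y) for _, y in sites])
print("global model parameters:", np.round(theta, 2))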

FWF Project Reference Model of Explainable AI for the Medical Domain

The FWF project P-32554, "A Reference Model of Explainable Artificial Intelligence for the Medical Domain", will provide important contributions to the international machine learning community. It will develop a library of explanatory patterns and a novel grammar for combining them, define criteria and benchmarks for explainability, and establish principles for measuring how effectively explainability and interpretability guidelines map human understanding onto machine explanations. It will also deploy an open explanatory framework along with a set of benchmarks and open data to stimulate and inspire further research in transparent machine learning.

Project MAKEpatho – Machine Learning & Knowledge Extraction in Digital Pathology

Based on the ICT-2011.9.5 FET Flagship "IT Future of Medicine" and in a joint effort with BBMRI.at and the ADOPT project, we are working on making novel information accessible to human experts in digital pathology.

Project iML – interactive Machine Learning with the Human-in-the-Loop

In this project we follow the HCI-KDD approach, i.e. we keep the human expert in the machine learning loop and open the black box towards a glass box.
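
As a toy illustration of the idea (not the project's actual system), the following uncertainty-sampling loop queries a simulated "human expert" for the label of the instance the current model is least certain about. It assumes scikit-learn and NumPy; ask_human_expert is a hypothetical stand-in for the interactive expert.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
true_labels = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden ground truth

def ask_human_expert(i):
    # Hypothetical stand-in for the human-in-the-loop; in practice an interactive query.
    return true_labels[i]

# Start with one labelled example per class.
labelled = [int(np.argmax(true_labels == 0)), int(np.argmax(true_labels == 1))]
y = {i: ask_human_expert(i) for i in labelled}

model = LogisticRegression()
for _ in range(20):                            # interactive rounds
    model.fit(X[labelled], [y[i] for i in labelled])
    proba = model.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)          # distance from the decision boundary
    uncertainty[labelled] = np.inf             # never re-query labelled items
    query = int(np.argmin(uncertainty))        # most uncertain instance
    y[query] = ask_human_expert(query)         # the expert provides the label
    labelled.append(query)

print("accuracy on all data after the interactive rounds:", model.score(X, true_labels))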

NOE FTI-22-I-004 – Infrastructure for Testing AI-driven Robot Systems in Complex Environments