The Holzinger Group works on data-driven artificial intelligence (AI) and machine learning (ML), promoting a synergistic approach of Human-Centered AI (HCAI) to augment human intelligence with machine intelligence.
Our focus is on explainable AI (TEDx) and interpretable ML, where our pioneering work on interactive ML (iML, video, paper) with a human-in-the-loop and our experience in the health application domain are very beneficial.
Our goal is to enable a human expert to understand the underlying explanatory factors, the causality, of why an AI decision was made, paving the way towards verifiable machine learning and ethically responsible AI!
We speak Python!
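To make the human-in-the-loop idea concrete, here is a minimal sketch in the spirit of interactive ML (not the group's actual iML method): a simple nearest-centroid classifier repeatedly asks a "human expert" to label the sample it is most uncertain about. The data, the oracle standing in for the expert, and the classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic two-class data; the "human expert" oracle knows the true labels.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)

def centroid_predict(X_lab, y_lab, X_query):
    """Nearest-centroid classifier; returns predicted labels and a
    margin score (small margin = the model is uncertain)."""
    c0 = X_lab[y_lab == 0].mean(axis=0)
    c1 = X_lab[y_lab == 1].mean(axis=0)
    d0 = np.linalg.norm(X_query - c0, axis=1)
    d1 = np.linalg.norm(X_query - c1, axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)

# Start with one labelled example per class; the rest is an unlabelled pool.
labelled = [0, 100]
pool = [i for i in range(200) if i not in labelled]

for _ in range(10):  # ten human-in-the-loop interactions
    _, margin = centroid_predict(X[labelled], y_true[labelled], X[pool])
    q = pool[int(np.argmin(margin))]  # the sample the model is least sure about
    labelled.append(q)                # the "expert" (here: the oracle) labels it
    pool.remove(q)

final_preds, _ = centroid_predict(X[labelled], y_true[labelled], X)
accuracy = (final_preds == y_true).mean()
print(f"accuracy after 12 expert labels: {accuracy:.2f}")
```

The point of the sketch is the interaction pattern, not the model: the expert's effort is spent only where the machine is uncertain, which is exactly where human conceptual knowledge adds the most value.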
To reach a level of usable intelligence we need to learn from prior data, extract knowledge, generalize, fight the curse of dimensionality, and disentangle the underlying explanatory factors of the data, i.e., understand the data in the context of an application domain (see > Research statement).
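The "curse of dimensionality" mentioned above can be illustrated with a small, self-contained toy experiment (our own illustration, not from the group's work): in high dimensions, distances between uniformly drawn points concentrate, so contrasts that distance-based methods rely on shrink away.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(dim, n_points=500):
    """Relative contrast (max - min) / min of distances from the origin
    for points drawn uniformly in the unit hypercube of dimension `dim`."""
    pts = rng.random((n_points, dim))
    d = np.linalg.norm(pts, axis=1)
    return (d.max() - d.min()) / d.min()

# As dimensionality grows, distances concentrate around their mean,
# so the contrast collapses and "nearest neighbour" loses its meaning.
low = distance_contrast(2)
high = distance_contrast(1000)
print(f"contrast in 2-D: {low:.2f}, in 1000-D: {high:.2f}")
```

This is one reason why simply collecting more features does not automatically yield better models, and why disentangling a few meaningful explanatory factors matters.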
This needs a concerted international effort (see > Expert network) and the education of a new kind of graduate (see > Teaching statement). Cross-domain approaches foster serendipity, cross-fertilize methodologies and insights, and ultimately transfer ideas into business for the benefit of humans (see > Conference CD-MAKE).
[Image: Kandinsky Pattern — #KANDINSKYPatterns our Swiss-Knife for the study of explainable-AI]
[Image: FWF Project Reference Model of Explainable AI for the Medical Domain]