SURVEY – THE SEVEN DEADLY SINS OF AI IN MEDICINE
in experiments, General / by Andreas Holzinger
We invite you to participate in a crucial survey examining the key ethical challenges in medical AI.
Your insights are invaluable in shaping a future where AI enhances healthcare responsibly.
This survey will take only approximately 4 minutes of your time, yet it addresses issues of paramount importance that affect us all.
https://ec.europa.eu/eusurvey/runner/seven-sins-of-medical-ai
Thank you very much,
Heimo MUELLER, Vimla L. PATEL, Edward H. SHORTLIFFE, Andreas HOLZINGER
Enhancing trust in automated 3D point cloud data interpretation through explainable counterfactuals
in HCAI success, Recent Publications, Science News / by Andreas Holzinger
Our most recent paper introduces a novel framework for augmenting explainability in the interpretation of point cloud data by fusing expert knowledge with counterfactual reasoning. Given the complexity and voluminous nature of point cloud datasets, derived predominantly from LiDAR and 3D scanning technologies, achieving interpretability remains a significant challenge, particularly in smart cities, smart agriculture, and smart forestry. This research posits that integrating expert knowledge with counterfactual explanations (speculative scenarios illustrating how altering input data points could lead to different outcomes) can significantly reduce the opacity of deep learning models processing point cloud data. The proposed optimization-driven framework utilizes expert-informed ad-hoc perturbation techniques to generate meaningful counterfactual scenarios when employing state-of-the-art deep learning architectures. Read the paper here: https://doi.org/10.1016/j.inffus.2025.103032 and get an overview by listening to this podcast 🙂
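The optimization-driven generation of counterfactual point clouds can be pictured as a small gradient-based search in which an expert-supplied mask decides which points are allowed to move. The following is only a rough sketch of that idea, not the code from the paper; the model, the mask, and all names and parameters are illustrative assumptions, written here for a differentiable PyTorch classifier.

```python
# Rough sketch only (not the paper's implementation): gradient-based counterfactual
# search on a point cloud, restricted to points an expert has marked as movable.
import torch

def counterfactual_perturbation(model, points, target_class, expert_mask,
                                steps=200, lr=0.01, lam=0.1):
    """
    model:        differentiable classifier mapping (1, N, 3) -> (1, C) logits (assumed)
    points:       (N, 3) tensor, the original point cloud
    target_class: class index the counterfactual should be pushed towards
    expert_mask:  (N, 1) tensor with 1 for points the expert allows to be perturbed
    """
    delta = torch.zeros_like(points, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        perturbed = points + delta * expert_mask          # only expert-selected points move
        logits = model(perturbed.unsqueeze(0))
        # push the prediction towards the target class while keeping the change small
        loss = torch.nn.functional.cross_entropy(logits, target) + lam * delta.norm()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (points + delta * expert_mask).detach()
```

In a real system, the expert-informed constraints (which regions may change, and by how much) are what turn such perturbations into meaningful, human-interpretable counterfactual scenarios.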
Graph Neural Networks with the Human-in-the-Loop for Trustworthy AI
in General, HCAI success, Recent Publications, Science News / by Andreas Holzinger
In our Nature Scientific Reports paper we introduce a novel framework (our last deliverable to the FeatureCloud project) that integrates federated learning with Graph Neural Networks (GNNs) to classify diseases, incorporating Human-in-the-Loop methodologies. The framework employs collaborative voting mechanisms on subgraphs within a Protein-Protein Interaction (PPI) network, situated in a federated, ensemble-based deep learning context. This methodological approach marks a significant stride in the development of explainable and privacy-aware Artificial Intelligence, contributing to the progression of personalized digital medicine in a responsible and transparent manner. Read the article here: https://doi.org/10.1038/s41598-024-72748-7 and get an overview by listening to this podcast.
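The collaborative voting idea can be illustrated generically: each participating site trains a GNN on its own PPI subgraph, and the ensemble aggregates the per-client class probabilities. This is only a minimal sketch under those assumptions, not the code from the paper; the function name and the weighting scheme are illustrative.

```python
# Minimal sketch (not the paper's code): soft voting across client models, each
# trained on its own PPI subgraph in a federated ensemble.
import numpy as np

def federated_ensemble_vote(client_probs, weights=None):
    """
    client_probs: list of (num_samples, num_classes) arrays, one per client/subgraph GNN
    weights:      optional per-client weights (e.g. local validation performance)
    Returns aggregated class predictions via weighted soft voting.
    """
    stacked = np.stack(client_probs)                      # (clients, samples, classes)
    w = np.ones(len(client_probs)) if weights is None else np.asarray(weights, float)
    avg = (stacked * w[:, None, None]).sum(axis=0) / w.sum()
    return avg.argmax(axis=1)

# Example: three clients, two samples, binary disease classification
p1 = np.array([[0.8, 0.2], [0.4, 0.6]])
p2 = np.array([[0.7, 0.3], [0.3, 0.7]])
p3 = np.array([[0.6, 0.4], [0.55, 0.45]])
print(federated_ensemble_vote([p1, p2, p3]))              # -> [0 1]
```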
Your Human-AI Co-Existence (5-Minute Survey)
in experiments, General / by Andreas Holzinger
Your responses are anonymous and your personal data will not be recorded.
How can you participate?
By filling out this five-minute, anonymous survey, you can help us make AI technology more accessible and understandable:
https://forms.gle/DTHmeD9v6XbwXeFn9
Why is that important?
Establishing adaptable and interpretable AI systems is crucial if individuals and governments are to keep pace with the speed of technology. The key is not solely to promote development-friendly AI and regulatory oversight frameworks, but to work on the transparency and readability of these technologies through insightful guidelines, so that participation by the individual becomes possible. This includes informational self-determination through open legislative frameworks, policies, and ethical guidelines. Both the collective and the individual aspects are important for the progression of AI technology, but a future in which sustainable AI acts as an enabler of the 17 Sustainable Development Goals (SDGs) and their targets rather than an inhibitor [1] starts with open participation and constant engagement of the individual with AI technology. One global example that affects us all is ongoing climate change [2], and here we need AI – and its workhorse, machine learning (ML) – to contribute to what is clearly the greatest challenge facing humanity. Each and every one of us can contribute to the global challenge of climate change, and we want to explore how AI can help us do that.
[1] Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M. & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11, (233), 1–10, https://doi.org/10.1038/s41467-019-14108-y
[2] Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S., Milojevic-Dupont, N., Jaques, N. & Waldman-Brown, A. (2022). Tackling climate change with machine learning. ACM Computing Surveys (CSUR), 55, (2), 1–96, https://doi.org/10.1145/3485128
[3] This page is: https://human-centered.ai/2023/09/21/human-ai-5-minutes-survey
Thank you very much:
Andreas HOLZINGER, Heimo MUELLER, Jianlong ZHOU, Fang CHEN
Human-Centered AI Lab Austria and Human-Centered AI Lab Australia
Your Views on ChatGPT in Applications (3-Minute Survey)
in experiments, General / by Andreas Holzinger
The current development in Large Language Models is good for the machine learning community because it demonstrates the state of the art in statistical learning in an easy-to-understand way. For example, ChatGPT can fluently answer questions from users. It produces human-like texts with a seemingly logical connection between different sections. According to recent reports, individuals have already used ChatGPT extensively to formulate university essays, write scientific articles with references, debug program code, compose music, write poetry, submit restaurant reviews, create advertising copy, solve exams, co-author magazine articles, and much, much more.
Despite the apparent benefits of ChatGPT, many human users have various ethical concerns about misinformation, transparency, privacy and security, bias, abuse, lack of originality, over-dependence, and even massive job loss.
In our survey, we want to know your views and concerns about ChatGPT so that we can summarise recommendations for users of ChatGPT in applications.
Your responses are anonymous and your personal data will not be recorded.
Please take part in our 3-minute survey:
https://forms.gle/cYMzDyTT7UUP9wRi7
Thank you very much:
Jianlong ZHOU, Heimo MUELLER, Andreas HOLZINGER, Fang CHEN
Human-Centered AI Lab Australia and Human-Centered AI Lab Austria
Usability Evaluation of an Interactive XAI Platform for Graph Neural Networks
in General / by Andreas Holzinger
Lack of trust in artificial intelligence (AI) models in medicine is still the key obstacle to the use of AI in clinical decision support systems (CDSS). Although AI models already perform excellently in medicine, their black-box nature means that it is often impossible to understand why a particular decision was made. In the field of explainable AI (XAI), many good tools have already been developed to “explain” to a human expert, for example, which input features influenced a prediction. However, in the clinical domain, it is essential that these explanations lead to some degree of causal understanding by a clinician in the context of a specific application. For this reason, we have developed CLARUS, an interactive XAI platform that allows the domain expert to ask manual counterfactual (“what-if”) questions and observe how the resulting changes affect the AI decision and the corresponding XAI explanation [1].
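CLARUS itself is an interactive web platform, so the following is only a conceptual sketch of the manual what-if loop it supports, not the platform's code; the predict and explain callables and all names are illustrative assumptions.

```python
# Conceptual sketch only (not CLARUS code): a manual counterfactual ("what-if") loop
# in which a clinician edits input features and observes prediction and explanation.
def what_if_session(predict, explain, sample, edits):
    """
    predict: callable mapping a feature dict -> (label, confidence)    (assumed)
    explain: callable mapping a feature dict -> {feature: attribution} (assumed)
    sample:  dict of feature values for one patient or graph node
    edits:   list of (feature, new_value) pairs posed by the clinician
    """
    history = [(dict(sample), predict(sample), explain(sample))]
    for feature, new_value in edits:
        sample = dict(sample)            # keep earlier states untouched
        sample[feature] = new_value      # the clinician's manual counterfactual change
        history.append((dict(sample), predict(sample), explain(sample)))
    return history                       # each step: (input, prediction, explanation)
```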
Please help us with a usability evaluation: it takes about 10 minutes. Please go through this:
https://survey.medunigraz.at/index.php/368984?lang=en
and please fill out all fields (include any feedback you think is necessary in the boxes).
Please note that TWO windows will be open: one is the application and one is the questionnaire,
so it may be more convenient to open them in two separate browser windows.
Thank you very much.
[1] https://doi.org/10.1101/2022.11.21.517358
Human-Centered AI for Smart Farming, BOKU Tulln, March 9, 2023
in Conferences, General, Lectures / by Andreas Holzinger
On March 9, 2023, at BOKU Tulln, we were guest speakers at the traditional Schlumberger lectures. For us it was a wonderful opportunity to show what Human-Centered AI can do for smart farming. Thanks to the organizers Michaela Griesser and Astrid Forneck from the Department of Crop Sciences (DNW), led by Hans-Peter Kaul. We look forward to helping to discover the causality of berry shrivel (Traubenwelke) with methods from deep geometric learning for knowledge discovery from point cloud data.
Inaugural Lecture of Andreas Holzinger: Human-Centered AI
in HCAI-event, Lectures / by Andreas Holzinger
The inaugural lecture of Andreas Holzinger on Human-Centered AI on Monday, Nov 7, 2022, at 18:00 is open to the public; you are cordially welcome.
Open Postdoc Position “Artificial intelligence for smart forest operations”
in General / by Andreas Holzinger
We continue to build up our HCAI-Lab in an absolutely cool environment with exciting Artificial Intelligence topics.
Cyber-physical systems, robotics, sensor technology, data management in general, and methods of artificial intelligence (AI) and machine learning (ML) with applications to smart farm and forest operations are of increasing interest.
We seek a postdoctoral research associate in AI/machine learning for Forest Operations (Reference Code 184).
Please note the required qualifications listed in the job posting, and apply here:
https://euraxess.ec.europa.eu/jobs/838860
Note: We regret that we cannot reimburse applicants’ travel and lodging expenses incurred as part of the selection and hiring process.
We constantly seek to increase the number of female faculty members. Therefore qualified women are strongly encouraged to apply. In case of equal qualification, female candidates will be given preference unless reasons specific to an individual male candidate tilt the balance in his favour. People with disabilities and appropriate qualifications are specifically encouraged to apply.
Cross Domain Machine Learning and Knowledge Extraction Conference
in HCAI-event / by Andreas Holzinger
Great Cross Domain Machine Learning and Knowledge Extraction Conference in Vienna.