Welcome to our XXAI ICML 2020 workshop: Extending Explainable AI Beyond Deep Models and Classifiers
Accepted papers will be published in the Springer/Nature Lecture Notes in Computer Science volume “Cross Domain Conference for Machine Learning and Knowledge Extraction” (CD-MAKE 2020).
In the following we present “10 Commandments for Human-AI Interaction” and ask our colleagues from the international AI/ML community to comment on them. We will collect the results and present them openly to the international research community.
This basic research project will contribute novel results, algorithms and tools to the international AI and machine learning community.
The Human-Centered AI Lab (HCAI) invites the international machine learning community to a challenge on explainable AI and towards IQ tests for machines.
Andreas Holzinger gave a talk on 26.07.2019 in Edmonton for the faculty members of Computing Science at the University of Alberta.
First Austrian IFIP Forum “AI and Future Society: The Third Wave of AI”, which takes place on Wednesday, May 8 – Thursday, May 9, 2019, at Radetzkystraße 2, 1030 Vienna, in the Festsaal of the BMVIT.
Recent advances in automatic machine learning (aML) allow problems to be solved without any human intervention, which is excellent in certain domains, e.g. in autonomous cars, where we want to exclude the human from the loop and want fully automatic learning. Sometimes, however, a human-in-the-loop can be beneficial, particularly for solving computationally hard problems. We provide new experimental insights into how computational intelligence can be improved by complementing it with human intelligence in an interactive machine learning (iML) approach. For this purpose we used an Ant Colony Optimization (ACO) framework, because it fosters multi-agent approaches with human agents in the loop; ACO is also among the best-performing algorithms in many applied intelligence problems. We propose a unification of human intelligence and interaction skills with the computational power of an artificial system. The ACO framework is applied in a case study on the Traveling Salesman Problem, chosen for its many practical implications, e.g. in the medical domain. For the evaluation we used gamification: we implemented a snake-like game called Traveling Snakesman with the MAX–MIN Ant System (MMAS) running in the background. We extended the MMAS algorithm in such a way that the human can directly interact with and influence the ants. This is done by “traveling” with the snake across the graph: each time the human travels over an ant, the current pheromone value of that edge is multiplied by 5. This manipulation affects the ants’ behavior, increasing the probability that the edge is chosen by an ant. The results show that humans performing a single tour through the graph have a significant impact on the shortest path found by the MMAS. Consequently, our experiment demonstrates that, in our case, human intelligence can positively influence machine intelligence. To the best of our knowledge this is the first study of this kind, and it provides a valuable experimental platform for explainable AI.
Holzinger, A., et al. (2018). Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, Springer/Nature. doi:10.1007/s10489-018-1361-5
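For readers who want to experiment with the idea, below is a minimal Python sketch of how such a human-triggered pheromone boost could plug into the standard ACO transition rule. The toy TSP instance, the parameter values, and all names (boost_edge, next_city, ant_tour) are illustrative assumptions for this post, not the implementation used in the study:

```python
import random

FACTOR = 5.0                   # multiplier from the study: crossing an edge boosts its pheromone x5
TAU_MIN, TAU_MAX = 0.01, 10.0  # MMAS pheromone bounds (values assumed here)
ALPHA, BETA = 1.0, 2.0         # pheromone vs. heuristic weighting (values assumed here)

# Symmetric distance matrix for a toy 4-city TSP instance (assumed data).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
n = len(dist)

# For simplicity all trails start at 1.0; a full MMAS would initialize at
# TAU_MAX and evaporate trails each iteration.
tau = [[1.0] * n for _ in range(n)]

def boost_edge(i, j):
    """Human interaction: crossing edge (i, j) multiplies its pheromone by 5,
    clamped to the MMAS upper bound so the trail limits still hold."""
    tau[i][j] = min(tau[i][j] * FACTOR, TAU_MAX)
    tau[j][i] = tau[i][j]  # keep the pheromone matrix symmetric

def next_city(current, unvisited):
    """Standard ACO transition rule: choose the next city with probability
    proportional to tau^alpha * (1/distance)^beta."""
    weights = [(tau[current][j] ** ALPHA) * ((1.0 / dist[current][j]) ** BETA)
               for j in unvisited]
    return random.choices(unvisited, weights=weights)[0]

def ant_tour(start=0):
    """One ant constructs a complete tour over all cities."""
    tour, unvisited = [start], [c for c in range(n) if c != start]
    while unvisited:
        nxt = next_city(tour[-1], unvisited)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

boost_edge(1, 3)   # the human "travels" over edge (1, 3) ...
print(ant_tour())  # ... so ants now pick that edge far more often
```

Because the boost is clamped to TAU_MAX, the MMAS trail limits are preserved: the human can bias the ants’ search, but never fully dictate it.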
“It’s time for AI to move out of its adolescent, game-playing phase and take seriously the notions of quality and reliability.”
There is an interesting commentary with interviews by Don Monroe in the recent Communications of the ACM, November 2018, Volume 61, Number 11, Pages 11-13, doi:10.1145/3276742, which emphasizes the importance of explainability and the need for effective human-computer interaction:
Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended solely on human expertise and judgment. Often, however, the “reasoning” behind their actions is unclear and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI “explainable” to humans: for example, to designers who can improve it, or to users who can better judge when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.
Some explainable AI, or XAI, has long been familiar as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having similar attributes, or as having been chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
“Considering the internal complexity of modern AI, it may seem unreasonable to hope for a human-scale explanation of its decision-making rationale”.