On Wednesday, 9 December 2020, at 13:00 CET, we have scheduled two talks in our MiniConf series:

“Towards Games in Explainable AI” by Lukas GRASSAUER

Abstract: Explainable AI is a highly relevant field for many aspects of human-centered computing. Historically, many fundamental questions in computer science and machine learning can be traced back to the concepts of games and puzzles. Alan Turing called his now famous Turing Test "The Imitation Game", and he also developed one of the first chess programs. Puzzles (e.g., CAPTCHAs) are likewise used to determine whether a user is human. The whole world watched as world champion Lee Sedol lost a game to a machine in Go, a domain previously dominated by humans. Clearly, games are an excellent catalyst for research in the field of explainable AI, and moreover an excellent benchmark for the evaluation of explanations. Might gamification and games in the context of explainable AI lead to approaches for solving other open questions? This talk outlines some plans for generating explanation data using KandinskyPatterns.

Short Bio: Lukas was born in 1997 in Vienna, Austria, and has been studying computer science since 2015. A lifelong enthusiast of logic puzzles and games, he discovered the potential games hold not only for us humans but also for research while writing his game engine 'sge' (Strategy Game Engine), which provides an easy interface for agents to play against each other.

***

“Simultaneous Neural Nets and Synthesized Literate-Logic-Programs” by Johannes LINDNER

Abstract: The central question for us is: how can we query a database, infer additional knowledge, simulate processes, and reason about all these stages as lightweight as possible? A first step would be to bundle these different notions into a common representation while retaining their essential computational significance. In this talk I therefore introduce Answer-Set Logic Programs as a transfer medium that can combine the rich ontological knowledge environment of Description Logics, including their abduction methods, with modifiable programs, in order to refine predicates automatically or to enable interpretative logic-learning methods while simultaneously training neural nets. This might help control data quality and the reliability of the overall system. Additionally, we would like to show the possibility of integrating linguistic annotation to establish autoepistemic computation that turns computation into explanation, and thereby closes a processing cycle of feature extraction, evaluation, and causal interpretation across different abstraction layers. The initial focus will be on natural language processing and reasoning, due to the tools already available, but the analysis of higher-order predicate patterns and characteristic constellations identified in the Herbrand base of a logic-program skeleton might be adaptable to other data domains, such as those typical of medical applications.

Short Bio: Johannes, born in 1985, has been studying for a Master's in Logic and Computer Science at TU Vienna as a second degree since 2019. He holds a Bachelor's degree in Business & Communication from the Berlin University of the Arts, with a thesis on ontological media interfaces. After the Physikum exam in medicine, while preparing for a doctorate in neurogenetics, he found his way from biochemistry and neurohistology to logic, categories, and types. Since his first year at the Vienna Logic Centre, he has focused on the dynamic synthesis of knowledge and verification structures.

***