Our Kandinsky Challenge just went online. We invite the international AI/machine learning research community to experiment with our Kandinsky Patterns. This shall foster progress in the field of explainable AI and contribute to the emerging field of explainability and causability [1].

Kandinsky Figures and Kandinsky Patterns are mathematically describable, simple, self-contained, and hence controllable test data sets for the development, validation, and training of explainability in artificial intelligence. While Kandinsky Patterns have these computationally manageable properties, they are at the same time easily recognizable by human observers. Consequently, controlled patterns can be described by both humans and computers.
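To make the idea of "mathematically describable" concrete, a Kandinsky Figure can be sketched as a set of geometric objects, each with a shape, color, size, and position, and a Kandinsky Pattern as a membership rule over such figures. The representation and the example rule below are our own illustrative assumptions, not the official challenge format:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical minimal representation: each object in a Kandinsky
# Figure has a shape, a color, a relative size, and an (x, y)
# position inside the unit square.
@dataclass
class KandinskyObject:
    shape: str   # e.g. "circle", "square", "triangle"
    color: str   # e.g. "red", "blue", "yellow"
    size: float  # relative to the unit square
    x: float
    y: float

# A figure is simply a set (here: list) of such objects.
KandinskyFigure = List[KandinskyObject]

def belongs_to_pattern(figure: KandinskyFigure) -> bool:
    """Ground truth for an illustrative pattern:
    'the figure contains at least one red circle'."""
    return any(o.shape == "circle" and o.color == "red" for o in figure)

figure = [
    KandinskyObject("circle", "red", 0.1, 0.3, 0.4),
    KandinskyObject("square", "blue", 0.2, 0.7, 0.6),
]
print(belongs_to_pattern(figure))  # True
```

Because the membership rule is an explicit, checkable function, both the ground truth and any machine-generated explanation of it can be verified exactly, which is what makes such patterns controllable test beds.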

Please proceed to the Kandinsky Challenge page

Some background on our ideas: AI follows the notion of human intelligence. The most common definition comes from psychology and cognitive science, which has viewed intelligence from the beginning as a mental capability that includes abstract thinking, reasoning, sensemaking, and solving real-world problems. A hot topic in current AI/machine learning research is whether, and to what extent, algorithms can learn such abstract (human-level) thinking and reasoning. To gain deeper insight, we propose our Kandinsky Patterns as an IQ test for machines [2].

[1] The notion of causability is differentiated from explainability in that causability is a property of a human (natural intelligence), while explainability is a property of a technological system (artificial intelligence).

For more information refer to: https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312