Kandinsky Figures and Kandinsky Patterns are mathematically describable, simple, self-contained, and hence controllable test data sets for the development, validation, and training of explainability in artificial intelligence (AI). While Kandinsky Patterns have these computationally manageable properties, they are at the same time easily recognizable by human observers. Consequently, controlled patterns can be described by both humans and algorithms.
We define a Kandinsky Pattern as a set of Kandinsky Figures, where for each figure an “infallible authority” (ground truth) defines that the figure belongs to the Kandinsky Pattern. With this simple principle we build training and validation data sets for automatic interpretability and context learning.
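The definition above can be sketched in code: a figure is a set of geometric objects, and a pattern is induced by a ground-truth predicate (the "infallible authority") that decides membership. This is a minimal illustration, not the authors' implementation; all class, field, and function names (e.g. `KandinskyObject`, `all_circles`) are hypothetical, and the example predicate is invented for demonstration.

```python
from dataclasses import dataclass
from typing import Callable, List

# One object in a Kandinsky Figure: shape, color, position, size.
# (Field names are illustrative, not from the original papers.)
@dataclass
class KandinskyObject:
    shape: str   # e.g. "circle", "square", "triangle"
    color: str   # e.g. "red", "blue", "yellow"
    x: float
    y: float
    size: float

# A Kandinsky Figure is a set of such objects.
Figure = List[KandinskyObject]

# The "infallible authority": a ground-truth predicate over figures.
GroundTruth = Callable[[Figure], bool]

# Hypothetical example pattern: "every object in the figure is a circle".
def all_circles(figure: Figure) -> bool:
    return all(obj.shape == "circle" for obj in figure)

# Turn figures plus a ground truth into a labeled data set,
# usable for training or validating an explainability method.
def label_figures(figures: List[Figure], truth: GroundTruth):
    return [(fig, truth(fig)) for fig in figures]

fig_a = [KandinskyObject("circle", "red", 0.2, 0.3, 0.10),
         KandinskyObject("circle", "blue", 0.7, 0.6, 0.15)]
fig_b = [KandinskyObject("square", "yellow", 0.5, 0.5, 0.20)]

labeled = label_figures([fig_a, fig_b], all_circles)
print([label for _, label in labeled])  # [True, False]
```

Because the ground truth is an explicit, inspectable function rather than an opaque labeling process, both a human and an algorithm can state *why* a figure belongs to the pattern, which is exactly what makes such data sets useful for studying explainability.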
In a recent paper we describe the underlying idea of Kandinsky Patterns and provide a GitHub repository, inviting the international machine learning research community to a challenge: to experiment with our Kandinsky Patterns, to advance the field of explainable AI, and to contribute to the emerging field of causability.
In a second paper we propose to use our Kandinsky Patterns as an IQ test for machines, similar to tests of human intelligence. Intelligence tests were originally developed as tools for cognitive performance diagnostics, providing a quantitative measure of a person's "intelligence" called the intelligence quotient (IQ); they are therefore colloquially known as IQ tests.