ABSTRACT
Artificial Intelligence (AI) and Machine Learning (ML) are among the fastest growing fields in computer science, and the grand goal of AI/ML (see definitions) is to develop algorithms which can learn and improve over time fully automatically, without any human intervention. Consequently, most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example in speech recognition, recommender systems, or autonomous vehicles ("Google Car"). Automatic approaches benefit greatly from big data with many training sets. However, in the health domain we are sometimes confronted (e.g. in our Digital Pathology project) with a small number of data sets or rare events, and with complex problems where aML approaches suffer from insufficient training samples and miss the underlying explanatory factors, i.e. the context. Here interactive machine learning (iML) may be of help, defined as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" (or a crowd of intelligent humans, e.g. doctors-in-the-loop) can be beneficial in solving computationally hard problems, e.g. subspace clustering, protein folding, or k-anonymization, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise remain an NP-hard problem is greatly reduced in complexity through the input and assistance of human agents involved in the learning phase. Most of all, the human in the loop can bring in expertise, "intuition", explicit knowledge and conceptual understanding, all of which AI to date lacks completely. Ultimately, such glass-box approaches foster explainable AI, promoting transparency and trust in machine learning by making AI interpretable and explainable, which is mandatory in view of rising legal requirements on privacy, data protection, safety and security (cf. GDPR).
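To make the iML idea concrete, the following is a minimal, illustrative Python sketch (not the paper's algorithm): a learner proposes candidate solutions, an agent's heuristic choice prunes the search space each round, and the loop converges far faster than exhaustive search would. All function names and the stand-in "expert" heuristic are hypothetical assumptions for illustration only.

```python
# Illustrative human-in-the-loop search loop (hypothetical sketch):
# the learner proposes candidates, a human agent's heuristic choice
# prunes the search space each round.
import random

def propose_candidates(search_space, k=5):
    # Learner step: sample up to k candidates from the remaining space.
    return random.sample(search_space, min(k, len(search_space)))

def cost(candidate, optimum=42):
    # Toy objective standing in for an expensive fitness evaluation.
    return abs(candidate - optimum)

def human_select(candidates):
    # Human step: in practice an expert picks the most promising
    # candidate by intuition; here a greedy heuristic simulates this.
    return min(candidates, key=cost)

search_space = list(range(1000))
best = None
for round_no in range(1, 21):
    candidates = propose_candidates(search_space)
    choice = human_select(candidates)  # the interaction with the agent
    if best is None or cost(choice) < cost(best):
        best = choice
    # Prune around the expert's choice: the radius shrinks each round,
    # so the remaining search space collapses quickly.
    radius = max(1, 100 // round_no)
    search_space = [s for s in search_space if abs(s - choice) <= radius]
    if len(search_space) <= 1:
        break

print("best candidate found:", best)
```

The same interaction pattern carries over, in spirit, to the hard problems named above (e.g. subspace clustering or k-anonymization), where the pruning step is the expert's domain knowledge rather than a numeric heuristic.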