Artificial Intelligence (AI) and Machine Learning (ML) are among the fastest-growing fields in computer science, and the grand goal of AI/ML (see definitions) is to develop algorithms that can learn and improve over time fully automatically, without any human intervention. Consequently, most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example in speech recognition, recommender systems, or autonomous vehicles ("Google Car"). Automatic approaches benefit greatly from big data with many training sets. In the health domain, however (e.g. in our Digital Pathology project), we are sometimes confronted with small data sets or rare events, and with complex problems where aML approaches suffer from insufficient training samples and miss the underlying explanatory factors, i.e. the context.

Here interactive machine learning (iML) may help, defined as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" (or a crowd of intelligent humans, e.g. doctors-in-the-loop) can be beneficial for solving computationally hard problems, e.g. subspace clustering, protein folding, or k-anonymization, where human expertise can help to reduce an exponential search space through heuristic selection of samples. What would otherwise remain an NP-hard problem is therefore greatly reduced in complexity through the input and assistance of human agents involved in the learning phase. Ultimately, such glass-box approaches foster explainable AI, promoting transparency and trust in machine learning by making AI interpretable and explainable, which is mandatory given rising legal requirements on privacy, data protection, safety and security (cf. GDPR).
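The sample-selection idea above can be illustrated with a toy interactive loop. This is a minimal sketch, not code from the cited papers: it assumes a one-dimensional threshold-learning task in which a simulated "human oracle" answers only the queries the learner is most uncertain about, so each answer roughly halves the remaining search interval.

```python
# Minimal human-in-the-loop sketch (illustrative assumption, not the
# authors' method): the learner searches for an unknown decision
# threshold and queries a "human oracle" only on the sample nearest its
# current uncertainty midpoint; ten questions replace 200 labels.
import random

random.seed(0)

HIDDEN_THRESHOLD = 0.6                        # ground truth, unknown to the learner
pool = [random.random() for _ in range(200)]  # unlabeled 1-D samples

def human_oracle(x):
    """Stands in for the human expert: returns the true class of x."""
    return int(x >= HIDDEN_THRESHOLD)

# Interactive loop: ask about the pool point closest to the midpoint of
# the current uncertainty interval, then shrink the interval accordingly.
lo, hi = 0.0, 1.0
for _ in range(10):                 # ten human interactions in total
    mid = (lo + hi) / 2
    query = min(pool, key=lambda x: abs(x - mid))
    if human_oracle(query):         # expert says "class 1": boundary lies lower
        hi = query
    else:                           # expert says "class 0": boundary lies higher
        lo = query

threshold = (lo + hi) / 2           # learned decision boundary
accuracy = sum(int(x >= threshold) == human_oracle(x) for x in pool) / len(pool)
print(f"learned threshold ~ {threshold:.3f}, accuracy = {accuracy:.2%}")
```

Each expert answer prunes roughly half of the remaining candidate space, which is the essence of how heuristic sample selection by a human can collapse an otherwise exponential search.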
Holzinger, A. et al. 2018. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, doi:10.1007/s10489-018-1361-5.
Holzinger, A., Kieseberg, P., Weippl, E. & Tjoa, A.M. 2018. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 1-8, doi:10.1007/978-3-319-99740-7_1.
Holzinger, A. 2018. From Machine Learning to Explainable AI. 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), 23-25 Aug. 2018, pp. 55-66, doi:10.1109/DISA.2018.8490530.
Holzinger, A. 2018. Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5.
Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., Kieseberg, P. & Holzinger, A. 2018. Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21.
Holzinger, A., Plass, M., Holzinger, K., Crisan, G.C., Pintea, C.-M. & Palade, V. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.