People and Artificial Intelligence Research (PAIR) Initiative

We are experiencing enormous advances in AI and ML (see here for the difference), with impressive improvements in technical performance visible daily, particularly in speech recognition, deep learning on image data, autonomous driving, etc.

It is really great that the Google Brain team led by Jeff Dean and Google's People and Artificial Intelligence Research (PAIR) initiative support people-centric AI systems. They are interested in augmenting human interaction with machine intelligence and in fostering a humanistic approach to artificial intelligence, with the goal of making partnerships between people and AI productive, enjoyable and fair.

See: https://ai.google/pair

This perfectly supports our HCI-KDD approach [1] in general, and specifically our interactive Machine Learning (iML) approach with a human in the loop [2]. The basic idea of augmenting human intelligence with artificial intelligence can foster trust [6], causal reasoning, explainability and re-traceability [5], which are of utmost importance in the medical domain [4], [3].
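To make the human-in-the-loop idea a bit more concrete, here is a minimal, purely illustrative sketch (not the implementation described in [2] or [5]): a classifier defers its low-confidence predictions to a human expert, and the expert's corrections are fed back into the training data. The helper ask_human, the constant CONFIDENCE_THRESHOLD and the use of scikit-learn are assumptions made here only for illustration.

    # Illustrative human-in-the-loop (iML) sketch: the model defers uncertain
    # cases to a human expert and retrains on the expert's corrections.
    # ask_human and CONFIDENCE_THRESHOLD are hypothetical names, not from [2] or [5].

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    CONFIDENCE_THRESHOLD = 0.75  # below this, ask the human expert


    def ask_human(sample):
        """Placeholder for a domain expert's judgement (e.g. a pathologist)."""
        return int(input(f"Please label sample {sample} (0/1): "))


    def iml_loop(X_labeled, y_labeled, X_stream):
        """Classify a stream of samples, deferring uncertain ones to a human."""
        model = LogisticRegression()
        model.fit(X_labeled, y_labeled)

        for x in X_stream:
            proba = model.predict_proba(x.reshape(1, -1))[0]
            if proba.max() < CONFIDENCE_THRESHOLD:
                # Human in the loop: the expert resolves the uncertain case,
                # and the correction is added to the training data.
                label = ask_human(x)
                X_labeled = np.vstack([X_labeled, x])
                y_labeled = np.append(y_labeled, label)
                model.fit(X_labeled, y_labeled)  # retrain with the new knowledge

        return model

In such a setting the expert's knowledge steers the learner on exactly those cases where the model is uncertain, which is one simple way of keeping decisions traceable in safety-critical domains such as medicine.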

[1] Andreas Holzinger 2013. Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together? In: Cuzzocrea, Alfredo, Kittl, Christian, Simos, Dimitris E., Weippl, Edgar & Xu, Lida (eds.) Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127. Heidelberg, Berlin, New York: Springer, pp. 319-328, doi:10.1007/978-3-642-40511-2_22.

[2] Andreas Holzinger 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.

[3] Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell 2017. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.

[4] Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs & Kurt Zatloukal 2017. Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv:1712.06657.

[5] Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea & Vasile Palade 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.

[6] Katharina Holzinger, Klaus Mak, Peter Kieseberg & Andreas Holzinger 2018. Can we trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support. ERCIM News, 112, (1), 42-43.