UCLIC Research Seminar Series
In the medical domain, expectations of automatic AI/machine learning systems are extremely high, particularly in disciplines requiring prognostic models (oncology) and/or decision support (radiology, pathology). Due to the rising ethical, social, and legal issues addressed by the European Union, the field of explainable AI is becoming extremely important. The problem of explainability is as old as AI itself: classic rule-based approaches were human-understandable, but their weakness lay in dealing with non-linearities and the intrinsic uncertainties of medical data. Progress in probabilistic machine learning, the availability of large amounts of training data, and increasing computational power have made AI successful today, and in certain medical tasks it even exceeds human performance. However, such approaches are considered "black-box" models, and even if we understand the underlying mathematical principles of such models, they still lack explicit declarative knowledge. Consequently, in the future we need context-adaptive procedures, i.e. systems that construct contextual explanatory models for classes of real-world phenomena. One possible step is to link probabilistic learning methods with large knowledge representations (ontologies), thus allowing us to understand how a machine decision has been reached. Our aim is not only to make machine decisions re-traceable, interpretable, and comprehensible, but to interpret why a certain machine decision has been reached. In medicine the "why" is often more important than the classification result. Re-traceability and interpretability on demand shall foster reliability and trust, ensuring that the human remains in control, so as to augment human intelligence with artificial intelligence and vice versa.
Andreas Holzinger leads the Human-Centered AI Group at the Medical University of Graz and has been Visiting Professor for machine learning in health informatics at Vienna University of Technology since 2016. He is currently Visiting Professor for explainable AI at the University of Alberta, Edmonton, Canada. Andreas serves as a consultant for the Canadian, US, UK, Swiss, French, Italian, and Dutch governments, for the German Excellence Initiative, and as a national expert for the European Commission. He is on the advisory board of the Artificial Intelligence Strategy "AI Made in Germany 2030" of the German Federal Government and of the "Artificial Intelligence Mission Austria 2030". Andreas Holzinger promotes a synergistic approach to Human-Centred Artificial Intelligence (HCAI) and has pioneered interactive machine learning (iML) with the human-in-the-loop. Andreas' goal is to augment human intelligence with artificial intelligence to help solve problems in health informatics. He obtained a Ph.D. in Cognitive Science from Graz University in 1998 and a second Ph.D. in Computer Science from TU Graz in 2003. He serves as Austrian Representative for AI in IFIP TC 12, is organizer of the IFIP Cross-Domain Conference "Machine Learning & Knowledge Extraction (CD-MAKE)", and is a member of IFIP WG 12.9 Computational Intelligence, the ACM, IEEE, GI, the Austrian Computer Society, and the Association for the Advancement of Artificial Intelligence (AAAI). More information: https://www.aholzinger.at