UCLIC Research Seminar Series

Title
Abstract
We are currently on the cusp of a revolution in smart technologies based on complex algorithmic decision-making, data science and machine learning. These systems are starting to be integrated into everyday life, yet it has been shown that users need to understand how they work in order to trust them enough to adopt them, and to continue to use them effectively and appropriately. So far, however, many of these systems remain black boxes and are not at all intelligible. In this talk, I will cover what makes a smart system intelligible, my work on making these systems intelligible through explanations, and the impact that explanations can have on trust and understanding.
Biography
Dr. Simone Stumpf is a Senior Lecturer at City, University of London, UK, in the Centre for HCI Design. She has a long-standing research focus on user interactions with machine learning systems and has authored over 60 publications in this area. Her current research projects include sensor-based health self-care systems for people with dementia and Parkinson's disease, personal health information management for people living with HIV, collecting data from blind users to personalise object recognition, and fair AI. Her work has helped shape the field of Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, which provides design principles for crafting explanations. The prime aim of her work is to empower all users to use intelligent machines effectively.