Modern society is characterized by increasing interaction between humans and intelligent machines. Against this background, human trust in automation plays a central role in synergistic interactions, so it is essential to find a way to assess the confidence humans place in the intelligent machines they collaborate with. To that end, Purdue University researchers have developed a “classification model” that draws on two sources of psychophysiological data useful for gauging trust: electroencephalography (EEG) and galvanic skin response (GSR).
The use of psychophysiological measurements is motivated by their ability to capture a human’s response in real time. The research was conducted by assistant professor Neera Jain and associate professor Tahira Reid, both of Purdue University’s School of Mechanical Engineering.
Improving human-machine interaction
Intelligent machines are becoming increasingly common in the everyday lives of humans. For instance, aircraft pilots and industrial workers routinely interact with automated systems. However, a successful collaboration between humans and machines depends on the ability to create systems capable of building and maintaining trust with humans.
Researchers have developed a method that allows machines to assess, in real time, the level of trust a human places in them, enabling the system to respond to changes in that trust as they occur. The method relies on data obtained through electroencephalography and the galvanic response of the skin: the first records the cortical activity of the brain, while the second monitors changes in the electrical characteristics of the skin. Together, these measurements provide a psychophysiological “set of characteristics” related to trust.
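As an illustration only, since the article does not publish the researchers’ code, combining EEG and GSR readings into a single feature vector might be sketched as follows. The window summaries and statistics here are hypothetical assumptions, not the study’s actual features.

```python
# Hypothetical sketch: summarizing one time window of raw EEG and GSR
# samples as a psychophysiological feature vector. The chosen statistics
# are illustrative assumptions, not the features used in the study.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def extract_features(eeg_window, gsr_window):
    """Summarize one window of raw signals as a feature vector."""
    return [
        mean(eeg_window),                   # average cortical activity
        variance(eeg_window),               # variability of the EEG signal
        mean(gsr_window),                   # average skin conductance
        max(gsr_window) - min(gsr_window),  # GSR peak-to-trough swing
    ]

eeg = [0.1, 0.4, 0.3, 0.2]  # toy EEG samples (arbitrary units)
gsr = [2.0, 2.5, 2.2, 2.1]  # toy GSR samples (microsiemens)
print(extract_features(eeg, gsr))
```

A real pipeline would compute such windows continuously, which is what makes a real-time response to changing trust possible.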
“The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting.” – Tahira Reid
A first step
To validate the method, 581 subjects took part in a driving simulation in which a computer signaled the presence of road obstacles. In some scenarios the computer identified obstacles correctly 100 percent of the time, while in others it identified them incorrectly 50 percent of the time.
Participants evaluated each piece of feedback they received, choosing between two responses: “trust” or “distrust”. The tests allowed the researchers to identify psychophysiological features associated with human trust in intelligent systems. The resulting model used the same set of psychophysiological features for all participants and achieved an accuracy of 71.22 percent.
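To make the idea of a single, population-wide classifier concrete, here is a minimal sketch: one decision rule, fitted once on pooled data and shared by all participants. The feature values, labels, and simple threshold rule are invented for illustration; they are not the study’s model or its 71.22 percent result.

```python
# Minimal sketch of a population-wide ("general") trust classifier:
# one decision rule shared by all participants. All data are synthetic.

# Each sample: (arousal feature, label), where 1 = "distrust".
# Toy assumption: higher physiological arousal accompanies distrust.
samples = [(0.2, 1), (0.3, 0), (0.4, 0), (0.5, 1),
           (0.6, 0), (0.7, 1), (0.8, 1)]

def train_threshold(data):
    """Pick the threshold that maximizes accuracy on the training data."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(x for x, _ in data):
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

threshold, accuracy = train_threshold(samples)
print(f"threshold={threshold}, training accuracy={accuracy:.2%}")
```

Because the toy data are deliberately noisy, no single shared threshold classifies every sample correctly, which mirrors why a one-size-fits-all model tops out well below perfect accuracy.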
A personalized approach
Subsequently, Jain and Reid investigated a feature set customized for each individual, one that accounted for gender and cultural differences. This approach improved mean accuracy, but at the cost of increased training time.
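The trade-off described above can be sketched with toy data: a single rule fitted on everyone’s pooled data versus one rule fitted per participant. Fitting per participant means running training once per person (more work), but each rule can adapt to that person’s baseline. Everything below is synthetic and hypothetical; the study’s actual features and models differ.

```python
# Hypothetical sketch of the general-vs-personalized trade-off.
# All data are synthetic; 1 = "distrust", 0 = "trust".

def train_threshold(data):
    """Best-accuracy decision threshold on (feature, label) pairs."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(x for x, _ in data):
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Two toy participants whose "distrust" arousal baselines differ.
subjects = {
    "A": [(0.2, 0), (0.3, 0), (0.6, 1), (0.7, 1)],
    "B": [(0.5, 0), (0.6, 0), (0.9, 1), (1.0, 1)],
}

# General model: one threshold fitted on everyone's pooled data.
pooled = [s for data in subjects.values() for s in data]
_, general_acc = train_threshold(pooled)

# Personalized models: one threshold per subject (more training work).
personal_accs = [train_threshold(data)[1] for data in subjects.values()]
personal_acc = sum(personal_accs) / len(personal_accs)

print(f"general: {general_acc:.2%}, personalized: {personal_acc:.2%}")
```

Here subject B’s “trust” readings overlap subject A’s “distrust” readings, so the shared threshold misclassifies some samples, while the per-subject thresholds do not; the price is training one model per participant.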
Details of the research can be found in ACM Transactions on Interactive Intelligent Systems (TiiS).