TY - GEN
T1 - Trustworthiness assessment in multimodal human-robot interaction based on cognitive load
AU - Kirtay, Murat
AU - Oztop, Erhan
AU - Kuhlen, Anna K.
AU - Asada, Minoru
AU - Hafner, Verena V.
N1 - Funding Information:
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2002/1 “Science of Intelligence” - project number 390523135. Additional support was provided by the International Joint Research Promotion Program, Osaka University, under the project “Developmentally and biologically realistic modeling of perspective invariant action understanding”.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In this study, we extend our robot trust model to a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern-recall task while interacting with a human partner who follows one of three guiding strategies: reliable, unreliable, or random. Here, the humanoid robot is equipped with a multimodal auto-associative memory module that processes audio-visual patterns to extract cognitive load (i.e., computational cost) and with an internal reward module that performs cost-guided reinforcement learning. Through the interactive experiments, the robot learns to associate the low cognitive load (i.e., high cumulative reward) incurred during the interaction with the high trustworthiness of the partner’s guiding strategy. At the end of the experiment, the robot is given a free choice to select a trustworthy instructor, and we show that it forms trust in the reliable partner. In a second setting of the same experiment, we endow the robot with an additional simple theory of mind module that assesses the efficacy of the instructor in helping the robot perform the task. Our results show that the robot’s performance improves when it factors this instructor assessment into its action decisions.
AB - In this study, we extend our robot trust model to a multimodal setting in which the Nao robot leverages audio-visual data to perform a sequential multimodal pattern-recall task while interacting with a human partner who follows one of three guiding strategies: reliable, unreliable, or random. Here, the humanoid robot is equipped with a multimodal auto-associative memory module that processes audio-visual patterns to extract cognitive load (i.e., computational cost) and with an internal reward module that performs cost-guided reinforcement learning. Through the interactive experiments, the robot learns to associate the low cognitive load (i.e., high cumulative reward) incurred during the interaction with the high trustworthiness of the partner’s guiding strategy. At the end of the experiment, the robot is given a free choice to select a trustworthy instructor, and we show that it forms trust in the reliable partner. In a second setting of the same experiment, we endow the robot with an additional simple theory of mind module that assesses the efficacy of the instructor in helping the robot perform the task. Our results show that the robot’s performance improves when it factors this instructor assessment into its action decisions.
UR - http://www.scopus.com/inward/record.url?scp=85138710483&partnerID=8YFLogxK
U2 - 10.1109/RO-MAN53752.2022.9900730
DO - 10.1109/RO-MAN53752.2022.9900730
M3 - Conference contribution
AN - SCOPUS:85138710483
T3 - RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication: Social, Asocial, and Antisocial Robots
SP - 469
EP - 476
BT - RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 31st IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2022
Y2 - 29 August 2022 through 2 September 2022
ER -