Abstract
There is increasing interest in the use of humanoid robots as a platform for presenting (educational) content. A robot's ability to communicate non-verbally can improve understanding between humans and robots, and can help to maintain an engaging interaction. For example, in the context of the L2TOR project, we have seen that a robot performing iconic gestures when teaching children a second language helps long-term memorization of new words.
To gather and make publicly available a dataset of Kinect recordings of a diverse group of participants performing iconic gestures, and to learn more about the comprehensibility of these recorded gestures when translated to a humanoid robot, we propose an exploratory study in which participants play ten rounds of a gesture guessing game with a NAO robot. First, the participant performs an iconic gesture depicting an object from a predetermined set. Then, the robot performs a gesture (one it has "learned" from the Kinect recording of a previous participant) and the participant has to guess which object it depicts. The set-up of the experiment is shown in Figure 1. The system consists of several components, which are outlined in Figure 2. For the clustering and recognition steps, we attempt to extract the gist (essence) of a gesture, inspired by existing work. Because participants effectively rate the robot's gestures by guessing, we expect to discover which of the recorded gestures remain comprehensible when performed by the robot, taking into account its physical limitations.
The proposed study will take place at the NEMO science museum in Amsterdam.
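To illustrate the idea of extracting the "gist" of a gesture from several Kinect recordings, here is a minimal sketch. The `resample` and `gist` helpers, the fixed frame count, and the simple per-joint averaging are our own illustrative assumptions, not the method used in the study.

```python
import numpy as np

def resample(gesture, n_frames=20):
    """Resample a (frames, joints) joint-angle sequence to a fixed length.

    Recordings of the same gesture differ in duration, so we linearly
    interpolate each joint's trajectory onto a common time grid.
    """
    t_old = np.linspace(0.0, 1.0, len(gesture))
    t_new = np.linspace(0.0, 1.0, n_frames)
    return np.stack(
        [np.interp(t_new, t_old, gesture[:, j]) for j in range(gesture.shape[1])],
        axis=1,
    )

def gist(recordings, n_frames=20):
    """Average several resampled recordings of one gesture into a prototype."""
    return np.mean([resample(r, n_frames) for r in recordings], axis=0)

# Two toy "recordings" of the same one-joint arm-raise, at different speeds.
a = np.linspace(0.0, 1.0, 30).reshape(-1, 1)
b = np.linspace(0.0, 1.0, 45).reshape(-1, 1)
proto = gist([a, b])
print(proto.shape)  # (20, 1)
```

In a real pipeline the prototype trajectory would still need to be mapped onto the NAO's joint limits and degrees of freedom, which is exactly where comprehensibility may be lost.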
| Original language | English |
| --- | --- |
| Title of host publication | Workshop on Gesture & Technology, Warwick 2018 |
| Publication status | Published - May 2018 |
| Event | Workshop on Gesture & Technology, University of Warwick, Warwick, United Kingdom. Duration: 3 Jun 2018 → 3 Jun 2018. https://warwick.ac.uk/fac/sci/psych/research/language/gesture2018/ |
Workshop

| Workshop | Workshop on Gesture & Technology |
| --- | --- |
| Country/Territory | United Kingdom |
| City | Warwick |
| Period | 3/06/18 → 3/06/18 |
| Internet address | https://warwick.ac.uk/fac/sci/psych/research/language/gesture2018/ |