iCub! Do you recognize what I am doing? Multimodal human action recognition on the multisensory-enabled iCub robot

Kas Kniesmeijer, Murat Kirtay

Research output: Other contribution



This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction. We employed the iCub robot to observe a human partner performing predefined actions with four different tools on 20 objects. We show that the proposed multimodal ensemble learning leverages the complementary characteristics of three color cameras and one depth sensor, improving recognition accuracy, in most cases, over models trained on a single modality. The results indicate that the proposed models can be deployed on the iCub robot for tasks that require multimodal action recognition, including social tasks such as partner-specific adaptation and contextual behavior understanding, to name a few.
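The abstract describes combining predictions from three color cameras and one depth sensor via ensemble learning. The paper's exact fusion rule is not given here, so the sketch below illustrates one common late-fusion scheme under assumed details: each modality-specific model emits a class-probability vector, and the ensemble averages them before picking the most likely action. The modality names and probability values are purely illustrative.

```python
# Hypothetical late-fusion ensemble sketch (not the paper's exact method):
# average per-modality class probabilities, then take the argmax.

def fuse_predictions(per_modality_probs):
    """Average class-probability vectors from modality-specific models
    and return (predicted_class_index, fused_probability_vector)."""
    n_models = len(per_modality_probs)
    n_classes = len(per_modality_probs[0])
    fused = [
        sum(probs[c] for probs in per_modality_probs) / n_models
        for c in range(n_classes)
    ]
    return fused.index(max(fused)), fused

# Illustrative input: three color cameras and one depth sensor, each
# producing a probability distribution over two made-up action classes.
per_modality_probs = [
    [0.60, 0.40],  # color camera 1 (assumed values)
    [0.55, 0.45],  # color camera 2
    [0.30, 0.70],  # color camera 3
    [0.70, 0.30],  # depth sensor
]
label, fused = fuse_predictions(per_modality_probs)
```

Averaging probabilities (soft voting) lets a confident modality outweigh an uncertain one; majority voting over hard labels would be an equally plausible alternative fusion rule.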
Original language: Undefined/Unknown
Publication status: Published - 17 Dec 2022


  • cs.RO
  • cs.CV
  • cs.LG
