TY - GEN
T1 - NAA: A multimodal database of negative affect and aggression
T2 - 7th International Conference on Affective Computing and Intelligent Interaction, ACII 2017
AU - Lefter, Iulia
AU - Jonker, Catholijn M.
AU - Klein Tuente, Stephanie
AU - Veling, Wim
AU - Bogaerts, Stefan
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2018
Y1 - 2018
N2 - We present the collection and annotation of a multi-modal database of negative human-human interactions. The work is part of an effort to support behavior recognition in the context of a virtual reality aggression prevention training system. The data consist of dyadic interactions between professional aggression training actors (actors) and naive participants (students). In addition to audio and video, we have recorded motion capture data with Kinect, head tracking, and physiological data: heart rate (ECG), galvanic skin response (GSR) and electromyography (EMG) of the biceps, triceps and trapezius muscles. Aggression levels, fear, valence, arousal and dominance have been rated separately for actors and students. We observe higher inter-rater agreement for rating the actors than for rating the students, consistently for each annotated dimension, and higher inter-rater agreement for speaking behavior than for listening behavior. The data can be used, among other purposes, for research on affect recognition, multimodal fusion and the relation between different bodily manifestations.
AB - We present the collection and annotation of a multi-modal database of negative human-human interactions. The work is part of an effort to support behavior recognition in the context of a virtual reality aggression prevention training system. The data consist of dyadic interactions between professional aggression training actors (actors) and naive participants (students). In addition to audio and video, we have recorded motion capture data with Kinect, head tracking, and physiological data: heart rate (ECG), galvanic skin response (GSR) and electromyography (EMG) of the biceps, triceps and trapezius muscles. Aggression levels, fear, valence, arousal and dominance have been rated separately for actors and students. We observe higher inter-rater agreement for rating the actors than for rating the students, consistently for each annotated dimension, and higher inter-rater agreement for speaking behavior than for listening behavior. The data can be used, among other purposes, for research on affect recognition, multimodal fusion and the relation between different bodily manifestations.
UR - http://www.scopus.com/inward/record.url?scp=85047336202&partnerID=8YFLogxK
U2 - 10.1109/ACII.2017.8273574
DO - 10.1109/ACII.2017.8273574
M3 - Conference contribution
AN - SCOPUS:85047336202
VL - 2018
T3 - 2017 7th International Conference on Affective Computing and Intelligent Interaction, ACII 2017
SP - 21
EP - 27
BT - 2017 7th International Conference on Affective Computing and Intelligent Interaction, ACII 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 October 2017 through 26 October 2017
ER -