Abstract
The practical constraints currently limiting facial expression recognition in Virtual Reality (VR) led to the development of a novel wearable interface called Faceteq. Our team designed a pilot feasibility study to explore the effect of spontaneous facial expressions on eight EMG sensors incorporated into the Faceteq interface. Thirty-four participants took part in the study, in which they watched a sequence of video stimuli while self-rating their emotional state. After a specifically designed signal pre-processing stage, we aimed to classify the responses into three classes (negative, neutral, positive). A C-SVM classifier was cross-validated for each participant, reaching an average out-of-sample accuracy of 82.5%. These preliminary results have encouraged us to enlarge our dataset and to incorporate data from additional physiological signals, with the goal of automatically detecting combined arousal and valence states for VR applications.
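The abstract does not report the feature extraction, kernel, regularization parameter, or cross-validation scheme. The sketch below is only a minimal illustration of the described per-participant evaluation, using scikit-learn's `SVC` (a C-SVM) on placeholder features standing in for the eight EMG channels; every name and parameter here is an assumption, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: one feature matrix per participant, with features
# assumed to be derived from the eight EMG channels (random here).
n_participants, n_trials, n_features = 34, 60, 8
participants = {
    p: (
        rng.normal(size=(n_trials, n_features)),  # X: per-trial EMG features
        rng.integers(0, 3, size=n_trials),        # y: 0=negative, 1=neutral, 2=positive
    )
    for p in range(n_participants)
}

accuracies = []
for p, (X, y) in participants.items():
    # C-SVM on standardized features; kernel and C are assumed values.
    clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
    # Stratified 5-fold CV within each participant; held-out folds give
    # the out-of-sample estimate.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accuracies.append(cross_val_score(clf, X, y, cv=cv).mean())

print(f"Average out-of-sample accuracy: {np.mean(accuracies):.1%}")
```

Cross-validating within each participant, then averaging across participants, matches the per-participant evaluation described in the abstract; with real EMG features in place of the random placeholders, the printed average would be comparable to the reported 82.5%.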
| Original language | English |
| --- | --- |
| Title of host publication | 12th International Conference on Disability, Virtual Reality and Associated Technologies (ICDVRAT 2018) |
| Publication status | Published - 2018 |