Towards valence detection from EMG for Virtual Reality applications

Ifigeneia Mavridou, Ellen Seiss, M. Hamedi, Emili Balaguer-Ballester, C. Nduka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Current practical constraints on facial expression recognition in Virtual Reality (VR) led to the development of a novel wearable interface called Faceteq. Our team designed a pilot feasibility study to explore the effect of spontaneous facial expressions on eight EMG sensors embedded in the Faceteq interface. Thirty-four participants took part in the study, watching a sequence of video stimuli while self-rating their emotional state. After purpose-designed signal pre-processing, we classified the responses into three classes (negative, neutral, positive). A C-SVM classifier was cross-validated for each participant, reaching an average out-of-sample accuracy of 82.5%. These preliminary results have encouraged us to enlarge our dataset and to incorporate further physiological signals towards automatic detection of combined arousal and valence states for VR applications.
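The paper does not publish its implementation; purely as an illustrative sketch, a per-participant cross-validated C-SVM over pre-processed EMG features could be set up as below. The use of scikit-learn, the RBF kernel, the fold count, and the `features`/`labels` arrays are all assumptions standing in for the study's actual pipeline.

```python
# Illustrative sketch only -- not the authors' code.
# Assumptions: scikit-learn's SVC as the C-SVM, stratified 5-fold
# cross-validation run separately per participant, and a placeholder
# feature matrix derived from the eight EMG channels.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data for one participant: 120 epochs x 8 EMG-derived
# features, labelled 0 = negative, 1 = neutral, 2 = positive.
features = rng.normal(size=(120, 8))
labels = rng.integers(0, 3, size=120)

# C-SVM with standardised inputs; the RBF kernel and C=1.0 are
# assumptions, not reported hyperparameters.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Out-of-sample accuracy estimated by stratified 5-fold CV, mirroring
# the per-participant cross-validation described in the abstract.
scores = cross_val_score(
    clf, features, labels,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

In this kind of setup, the per-participant accuracies would then be averaged across all thirty-four participants to obtain a figure comparable to the 82.5% reported above.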
Original language: English
Title of host publication: 12th International Conference on Disability, Virtual Reality and Associated Technologies (ICDVRAT 2018)
Publication status: Published - 2018
