Decoding affect in videos employing the MEG brain signal

  • Mojtaba Khomami Abadi
  • Seyed Mostafa Kia
  • Ramanathan Subramanian
  • Paolo Avesani
  • Nicu Sebe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

This paper presents a characterization of affect (valence and arousal) using the magnetoencephalogram (MEG) brain signal. We attempt single-trial classification of movie and music videos from the MEG responses of seven participants. The main findings of this study are that: (i) the MEG signal effectively encodes affective viewer responses, (ii) clip arousal is better predicted than valence from MEG, and (iii) prediction performance is better for movie clips than for music videos.
Original language: English
Title of host publication: Proceedings of IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013
Place of Publication: USA
Publisher: IEEE Computer Society
Pages: 1-6
Number of pages: 6
ISBN (Print): 978-1-4673-5545-2
DOIs
Publication status: Published - 2013
Externally published: Yes
Event: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG) - Shanghai, China
Duration: 22 Apr 2013 - 26 Apr 2013
Conference number: 10
https://www.computer.org/csdl/proceedings/fg/2013/12OmNzFMFow

Conference

Conference: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
Country/Territory: China
City: Shanghai
Period: 22/04/13 - 26/04/13

Keywords

  • EWI-24277
  • HMI-HF: Human Factors
  • particle filtering (numerical methods)
  • pose estimation
  • target tracking
