Inner Speech Classification using EEG Signals: A Deep Learning Approach

Bram van den Berg*, Sander van Donkelaar, Maryam Alimardani

*Corresponding author for this work

Research output: Contribution to conference › Paper › Scientific › peer-review

Abstract

Brain-computer interfaces (BCIs) provide a direct communication pathway between humans and computers. Three major BCI paradigms are commonly employed: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). In our study, we sought to expand this set by focusing on the “Inner Speech” paradigm using EEG signals. Inner speech refers to the internalized process of imagining one’s own “voice”. Using a 2D Convolutional Neural Network (CNN) based on the EEGNet architecture, we classified the EEG signals of eight subjects as they internally thought of four different words. Our results showed an average accuracy of 29.7% for word recognition, which is slightly above chance level (25% for four classes). We discuss the limitations of this approach and provide suggestions for future research.
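The paper itself does not include code; below is a minimal sketch of an EEGNet-style 2D CNN in PyTorch for four-class inner-speech classification. The hyperparameters (F1, D, F2, kernel lengths, dropout rate) are chosen for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    """Minimal EEGNet-style 2D CNN (illustrative hyperparameters)."""

    def __init__(self, n_channels=64, n_samples=256, n_classes=4,
                 F1=8, D=2, F2=16):
        super().__init__()
        # Block 1: temporal convolution learns frequency filters,
        # followed by a depthwise spatial convolution over EEG channels.
        self.block1 = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(F1),
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        # Block 2: separable convolution (depthwise + pointwise).
        self.block2 = nn.Sequential(
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                      groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, (1, 1), bias=False),
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        # Infer the flattened feature size, then classify into the words.
        with torch.no_grad():
            n_feat = self._features(
                torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_feat, n_classes)

    def _features(self, x):
        return torch.flatten(self.block2(self.block1(x)), 1)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) EEG epochs
        return self.classifier(self._features(x))
```

A model like this would be trained per subject (or across subjects) on epoched, band-pass-filtered EEG segments, one epoch per imagined word.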
Original language: English
Publication status: Published - 2021
Event: 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS) - Magdeburg, Germany
Duration: 8 Sept 2021 – 10 Sept 2021
https://www.ichms2021.de/

Conference

Conference: 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS)
Country/Territory: Germany
City: Magdeburg
Period: 8/09/21 – 10/09/21
Internet address: https://www.ichms2021.de/

