Construction and Supervised Learning of Long-Term Grey Cognitive Networks

Gonzalo Nápoles*, Jose L. Salmeron, Koen Vanhoof

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Modeling a real-world system by means of a neural model involves numerous challenges, ranging from formulating transparent knowledge representations to obtaining reliable simulation errors. However, such knowledge is often difficult to formalize precisely using crisp numbers. In this paper, we present long-term grey cognitive networks, which extend the recently proposed long-term cognitive networks (LTCNs) with grey numbers. One advantage of our neural system is that it allows embedding knowledge into the network through weights and constricted neurons. In addition, we propose two procedures to construct the network in situations where only historical data are available, and a regularization method coupled with a nonsynaptic backpropagation algorithm. The results show that our proposal outperforms the LTCN model and other state-of-the-art methods in terms of accuracy.
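To make the notions of grey (interval) weights and constricted neurons concrete, the following is a minimal Python sketch of one recurrent reasoning step in an LTCN-like network. All names (GreyNumber, grey_sigmoid, ltcn_step) and design choices such as the endpoint-wise sigmoid are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class GreyNumber:
    """Grey number represented by its lower and upper bounds (assumed interval form)."""
    def __init__(self, lower, upper):
        self.lower, self.upper = min(lower, upper), max(lower, upper)

    def __add__(self, other):
        return GreyNumber(self.lower + other.lower, self.upper + other.upper)

    def __mul__(self, other):
        # Interval multiplication: take the min/max over all endpoint products.
        p = [self.lower * other.lower, self.lower * other.upper,
             self.upper * other.lower, self.upper * other.upper]
        return GreyNumber(min(p), max(p))

    def __repr__(self):
        return f"[{self.lower:.3f}, {self.upper:.3f}]"


def grey_sigmoid(g, slope=1.0, offset=0.0):
    """Sigmoid applied endpoint-wise; valid because the sigmoid is monotone."""
    f = lambda x: 1.0 / (1.0 + np.exp(-slope * (x - offset)))
    return GreyNumber(f(g.lower), f(g.upper))


def ltcn_step(activations, weights, constricted):
    """One recurrent reasoning step.

    activations : list of GreyNumber, current neuron states
    weights     : 2-D list of GreyNumber, weights[i][j] = influence of neuron i on j
    constricted : list of bool, True for neurons whose state is kept fixed
                  (a simple way to embed prior knowledge into the network)
    """
    n = len(activations)
    new_states = []
    for j in range(n):
        if constricted[j]:                 # constricted neuron: do not update
            new_states.append(activations[j])
            continue
        raw = GreyNumber(0.0, 0.0)
        for i in range(n):
            raw = raw + activations[i] * weights[i][j]
        new_states.append(grey_sigmoid(raw))
    return new_states


# Toy usage: three neurons, the last one constricted to its initial state.
acts = [GreyNumber(0.2, 0.3), GreyNumber(0.5, 0.6), GreyNumber(0.9, 0.9)]
W = [[GreyNumber(0.0, 0.0),  GreyNumber(0.4, 0.6), GreyNumber(-0.2, 0.1)],
     [GreyNumber(0.3, 0.5),  GreyNumber(0.0, 0.0), GreyNumber(0.7, 0.8)],
     [GreyNumber(-0.1, 0.2), GreyNumber(0.1, 0.3), GreyNumber(0.0, 0.0)]]
print(ltcn_step(acts, W, constricted=[False, False, True]))
```

In this sketch only the nonsynaptic parameters of the sigmoid (slope and offset) would be tuned by a backpropagation-style learning rule, while the grey weights and constricted states carry the embedded knowledge; this mirrors, at a high level, the nonsynaptic learning idea described in the abstract.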

Original language: English
Article number: 8718506
Pages (from-to): 686-695
Number of pages: 10
Journal: IEEE Transactions on Cybernetics
Volume: 51
Issue number: 2
DOIs
Publication status: Published - Feb 2021
Externally published: Yes

Keywords

  • Error backpropagation
  • grey systems
  • neural cognitive modeling
  • recurrent systems
