Long short-term cognitive networks

Gonzalo Nápoles*, Isel Grau, Agnieszka Jastrzębska, Yamisleydi Salgueiro

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

3 Citations (Scopus)

Abstract

In this paper, we present a recurrent neural system named long short-term cognitive networks (LSTCNs) as a generalization of the short-term cognitive network (STCN) model. Such a generalization is motivated by the difficulty of forecasting very long time series efficiently. The LSTCN model can be defined as a collection of STCN blocks, each processing a specific time patch of the (multivariate) time series being modeled. In this neural ensemble, each block passes information to the subsequent one in the form of weight matrices representing the prior knowledge. As a second contribution, we propose a deterministic learning algorithm to compute the learnable weights while preserving the prior knowledge resulting from previous learning processes. As a third contribution, we introduce a feature influence score as a proxy to explain the forecasting process in multivariate time series. The simulations using three case studies show that our neural system reports small forecasting errors while being significantly faster than state-of-the-art recurrent models.
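The block-chaining idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the block internals (sigmoid activation, a single weight matrix per block, the `fit_block` helper, and the ridge penalty) are assumptions made for the sake of a runnable example. What it does capture is the core mechanism the abstract describes: each block is fitted deterministically in closed form, and the weights it learns are handed to the next block as prior knowledge.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_block(X, Y, W_prior, ridge=1e-2):
    """One hypothetical STCN-style block (illustrative, not the paper's exact model).

    The frozen prior weights W_prior produce a hidden state; the output
    weights are then solved in closed form via ridge regression, i.e. a
    deterministic, gradient-free fit."""
    H = sigmoid(X @ W_prior)                   # hidden state shaped by prior knowledge
    A = H.T @ H + ridge * np.eye(H.shape[1])   # regularized normal equations
    return np.linalg.solve(A, H.T @ Y)         # closed-form least-squares solution

rng = np.random.default_rng(0)
series = rng.normal(size=(300, 4))   # toy multivariate series with 4 features
patches = np.split(series, 3)        # three consecutive "time patches"

W = rng.normal(size=(4, 4))          # initial prior knowledge (random here)
for patch in patches:
    X, Y = patch[:-1], patch[1:]     # one-step-ahead input/target pairs
    W = fit_block(X, Y, W)           # learned weights become the next block's prior
```

Because each block is solved in closed form rather than by backpropagation through the whole sequence, very long series can be processed patch by patch at low cost, which is the efficiency argument the abstract makes.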

Original language: English
Pages (from-to): 16959-16971
Number of pages: 13
Journal: Neural Computing and Applications
Volume: 34
Issue number: 19
DOIs
Publication status: Published - Oct 2022

Keywords

  • Interpretability
  • Multivariate Time Series
  • Recurrent Neural Networks
  • Short-term Cognitive Networks

