TY - JOUR
T1 - Pattern classification with Evolving Long-term Cognitive Networks
AU - Nápoles, Gonzalo
AU - Jastrzebska, Agnieszka
AU - Salgueiro, Yamisleydi
N1 - Funding Information:
The authors would like to thank the reviewers who provided constructive feedback. Part of this research was supported by the Special Research Fund (BOF) of Hasselt University, Belgium, through the project BOF20KV01. Part of the contribution was supported by the National Science Centre, grant No. 2019/35/D/HS4/01594, decision no. DEC-2019/35/D/HS4/01594.
Publisher Copyright:
© 2020 Elsevier Inc.
PY - 2021/2/16
Y1 - 2021/2/16
N2 - This paper presents an interpretable neural system, termed Evolving Long-term Cognitive Network, for pattern classification. The proposed model was inspired by Fuzzy Cognitive Maps, which are interpretable recurrent neural networks for modeling and simulation. The network architecture comprises two neural blocks: a recurrent input layer and an output layer. The input layer is a Long-term Cognitive Network that is unfolded in the same way as other recurrent neural networks, thus producing a sequence of abstract hidden layers. In our model, meaningful linguistic labels can be attached to each neuron, since the input neurons correspond to features of a given classification problem and the output neurons correspond to class labels. Moreover, we propose a variant of the backpropagation learning algorithm to compute the required parameters. This algorithm includes two new regularization components aimed at obtaining more interpretable knowledge representations. Numerical simulations using 58 datasets show that our model achieves higher prediction rates than traditional white-box models while remaining competitive with black-box ones. Finally, we elaborate on the interpretability of our neural system through a proof of concept.
AB - This paper presents an interpretable neural system, termed Evolving Long-term Cognitive Network, for pattern classification. The proposed model was inspired by Fuzzy Cognitive Maps, which are interpretable recurrent neural networks for modeling and simulation. The network architecture comprises two neural blocks: a recurrent input layer and an output layer. The input layer is a Long-term Cognitive Network that is unfolded in the same way as other recurrent neural networks, thus producing a sequence of abstract hidden layers. In our model, meaningful linguistic labels can be attached to each neuron, since the input neurons correspond to features of a given classification problem and the output neurons correspond to class labels. Moreover, we propose a variant of the backpropagation learning algorithm to compute the required parameters. This algorithm includes two new regularization components aimed at obtaining more interpretable knowledge representations. Numerical simulations using 58 datasets show that our model achieves higher prediction rates than traditional white-box models while remaining competitive with black-box ones. Finally, we elaborate on the interpretability of our neural system through a proof of concept.
KW - Backpropagation
KW - Interpretability
KW - Long-term Cognitive Networks
KW - Recurrent neural networks
U2 - 10.1016/j.ins.2020.08.058
DO - 10.1016/j.ins.2020.08.058
M3 - Article
VL - 548
SP - 461
EP - 478
JO - Information Sciences
JF - Information Sciences
SN - 0020-0255
ER -