This paper presents an interpretable neural system, termed Evolving Long-term Cognitive Network, for pattern classification. The proposed model is inspired by Fuzzy Cognitive Maps, which are interpretable recurrent neural networks for modeling and simulation. The network architecture comprises two neural blocks: a recurrent input layer and an output layer. The input layer is a Long-term Cognitive Network that is unfolded in the same way as other recurrent neural networks, thus producing a set of abstract hidden layers. In our model, meaningful linguistic labels can be attached to each neuron, since the input neurons correspond to the features of a given classification problem and the output neurons correspond to the class labels. Moreover, we propose a variant of the backpropagation learning algorithm to compute the required parameters. This algorithm includes two new regularization components aimed at obtaining more interpretable knowledge representations. Numerical simulations on 58 datasets show that our model achieves higher prediction rates than traditional white-box models while remaining competitive with black-box ones. Finally, we elaborate on the interpretability of our neural system through a proof of concept.

(c) 2020 The Authors. Published by Elsevier Inc.
- Long-term Cognitive Networks
- Recurrent neural networks
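The two-block architecture described in the abstract (a recurrent input layer unfolded for several steps, followed by an output layer over class neurons) can be sketched as follows. This is a minimal illustration only: the update rule, the tanh/softmax activations, the unfolding depth `T`, and all variable names are assumptions, since the abstract does not specify the authors' exact formulation or learning procedure.

```python
import numpy as np

# Hedged sketch of the architecture summarized in the abstract:
# a recurrent input layer unfolded for T steps (each step acting as an
# abstract hidden layer), followed by an output layer mapping to classes.
# Activations, sizes, and weight initialization are illustrative assumptions.

rng = np.random.default_rng(0)

n_features, n_classes, T = 4, 3, 5                            # assumed sizes and unfolding depth
W_in = rng.normal(scale=0.5, size=(n_features, n_features))   # recurrent input-layer weights
W_out = rng.normal(scale=0.5, size=(n_features, n_classes))   # output-layer weights

def forward(x):
    """Unfold the recurrent input layer T times, then classify."""
    a = x
    for _ in range(T):                   # each unfolding step is an abstract hidden layer
        a = np.tanh(a @ W_in)
    logits = a @ W_out
    exp = np.exp(logits - logits.max())  # numerically stable softmax over class neurons
    return exp / exp.sum()

probs = forward(rng.random(n_features))
print(probs)
```

Because the input neurons stand for problem features and the output neurons for class labels, each row of `W_in` and `W_out` in such a sketch can be read as the influence of one named feature, which is what makes this family of models amenable to linguistic interpretation.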