Abstract
In recent years, pattern classification has begun to shift from models that maximize prediction rates toward models that strike a suitable trade-off between accuracy and interpretability. Fuzzy Cognitive Maps (FCMs) and their extensions are recurrent neural networks that have been partially exploited toward this goal. However, the interpretability of these neural systems has been confined to the fact that both neural concepts and weights have a well-defined meaning for the problem being modeled. This rather naive assumption oversimplifies the complexity behind an FCM-based classifier. In this paper, we propose a symbolic explanation module that extracts useful insights and patterns from a trained FCM-based classifier. The module is implemented in Prolog and can be seen as a reverse symbolic reasoning rule that infers which inputs must be provided to the model to obtain a desired output.
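The reverse reasoning the abstract describes, inferring which inputs yield a desired output, can be sketched with a toy FCM in Python. Everything below (the 3-concept weight matrix, the sigmoid activation, the exhaustive search over binary inputs) is an illustrative assumption, not the paper's Prolog implementation or its actual inference rule.

```python
import itertools
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def fcm_step(act, W):
    """One synchronous FCM update: a_i <- f(sum_j a_j * W[j][i])."""
    n = len(act)
    return [sigmoid(sum(act[j] * W[j][i] for j in range(n))) for i in range(n)]


def fcm_infer(act, W, steps=1):
    """Run the recurrent map forward for a fixed number of steps."""
    for _ in range(steps):
        act = fcm_step(act, W)
    return act


def explain(W, target, threshold=0.7, steps=1):
    """Toy 'reverse reasoning': exhaustively search for binary input
    activations that push the target concept above the threshold."""
    n = len(W)
    return [bits for bits in itertools.product([0.0, 1.0], repeat=n)
            if fcm_infer(list(bits), W, steps)[target] >= threshold]


# Hypothetical 3-concept map: concepts 0 and 1 both excite concept 2.
W = [[0.0, 0.0, 1.5],
     [0.0, 0.0, 1.5],
     [0.0, 0.0, 0.0]]

# Which binary inputs make concept 2 fire above the threshold?
print(explain(W, target=2))
```

A Prolog version of this idea would instead pose the query against the rule base and let unification enumerate the satisfying inputs; the brute-force search here only mimics that behavior for a map small enough to enumerate.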
Original language | English |
---|---|
Title of host publication | Artificial Intelligence XXXVII |
Pages | 21-34 |
Number of pages | 14 |
Publication status | Published - 2020 |
Event | 40th SGAI International Conference on Artificial Intelligence, Cambridge, United Kingdom, 15 Dec 2020 → 17 Dec 2020 (http://bcs-sgai.org/ai2020/) |
Conference
Conference | 40th SGAI International Conference on Artificial Intelligence |
---|---|
Country/Territory | United Kingdom |
City | Cambridge |
Period | 15/12/20 → 17/12/20 |
Internet address | http://bcs-sgai.org/ai2020/ |