Symbolic Explanation Module for Fuzzy Cognitive Map-Based Reasoning Models

Fabian Hoitsma, Andreas Knoben, Maikel Leon Espinosa, Gonzalo Nápoles*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › Peer-reviewed


In recent years, pattern classification has started to move from computing models with outstanding prediction rates to models able to reach a suitable trade-off between accuracy and interpretability. Fuzzy Cognitive Maps (FCMs) and their extensions are recurrent neural networks that have been partially exploited toward fulfilling this goal. However, the interpretability of these neural systems has been confined to the fact that both neural concepts and weights have a well-defined meaning for the problem being modeled. This rather naive assumption oversimplifies the complexity behind an FCM-based classifier. In this paper, we propose a symbolic explanation module that enables the extraction of useful insights and patterns from a trained FCM-based classifier. The proposed explanation module is implemented in Prolog and can be seen as a reverse symbolic reasoning rule that infers the inputs to be provided to the model in order to obtain the desired output.
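To make the idea of "reverse reasoning" concrete, the following is a minimal illustrative sketch, not the authors' Prolog module: it assumes a toy single-step FCM whose decision concept is a sigmoid of the weighted input activations, and it approximates the reverse rule by enumerating candidate inputs on a coarse grid and keeping those that (approximately) produce the desired output. All names (`fcm_output`, `reverse_search`) and the weights are hypothetical.

```python
import itertools
import math

def fcm_output(inputs, weights):
    """Single-step FCM inference (toy model): the decision concept's
    activation is a sigmoid of the weighted sum of input activations."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-z))

def reverse_search(weights, target, grid=(0.0, 0.5, 1.0), tol=0.1):
    """Naive stand-in for reverse reasoning: enumerate candidate input
    activations on a grid and keep those whose inferred output lands
    within `tol` of the desired target activation."""
    matches = []
    for candidate in itertools.product(grid, repeat=len(weights)):
        if abs(fcm_output(candidate, weights) - target) < tol:
            matches.append(candidate)
    return matches

# Toy two-concept classifier (weights are illustrative, not from the paper)
weights = (2.0, -1.0)
explanations = reverse_search(weights, target=0.9)
# Each tuple is an input assignment that approximately yields the target
print(explanations)
```

The actual module works deductively in Prolog rather than by brute-force enumeration, but the input/output contract is the same: given a desired class activation, return the input patterns that would produce it.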
Original language: English
Title of host publication: Artificial Intelligence XXXVII
Number of pages: 14
Publication status: Published - 2020
Event: 40th SGAI International Conference on Artificial Intelligence - Cambridge, United Kingdom
Duration: 15 Dec 2020 – 17 Dec 2020


Conference: 40th SGAI International Conference on Artificial Intelligence
Country/Territory: United Kingdom

