Abstract
The growing demand for eXplainable AI (XAI) has renewed interest in fuzzy cognitive maps (FCMs) due to their interpretability, causal transparency, and hybrid intelligence capabilities. However, current FCM explanation methods either overlook the models' dynamic behavior or limit themselves to feature attribution. Counterfactual explanations, which describe the minimal input changes required to alter outcomes, address this gap but remain largely unexplored in these models. The only existing FCM-specific approach relies on fuzzy discretization and predefined rules, producing causally invalid and overly conservative explanations. Generic model-agnostic methods, on the other hand, assume feature independence and suffer from instability, restrictive assumptions, and high computational costs. To overcome these limitations, this article presents the counterfactuals via backpropagation (CF-BP) algorithm, the first backpropagation-based counterfactual explanation method for quasi-nonlinear FCMs (q-FCMs), a generalization of traditional FCMs that resolves their convergence issues. CF-BP exploits the similarity between q-FCMs' recurrent reasoning and neural network forward propagation, using exact analytical gradients to generate precise, causally consistent, and robust counterfactual explanations within the continuous state space of the model. Extensive evaluations, including hyperparameter sensitivity analysis and benchmarking against eight state-of-the-art (SOTA) model-agnostic methods, confirm the superior performance of the proposed method across key counterfactual quality metrics.
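The backpropagation-through-recurrence idea the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's CF-BP implementation: it assumes the commonly used quasi-nonlinear update rule a(t+1) = φ·f(W·a(t)) + (1−φ)·a(0) with a sigmoid transfer function, a squared target error plus an L2 proximity penalty as the counterfactual loss, and all names (`qfcm_forward`, `cf_search`, `W_demo`) and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def qfcm_forward(W, x, phi, T):
    """Iterate the assumed q-FCM rule a(t+1) = phi*f(W a(t)) + (1-phi)*x
    for T steps; also return the pre-activations needed for backprop."""
    a = x.copy()
    pre = []
    for _ in range(T):
        z = W @ a
        pre.append(z)
        a = phi * sigmoid(z) + (1.0 - phi) * x
    return a, pre

def qfcm_grad_x(W, phi, pre, g_T):
    """Exact reverse-mode gradient dL/dx, given dL/da(T) = g_T."""
    g = g_T.copy()
    grad_x = np.zeros_like(g_T)
    for z in reversed(pre):
        grad_x += (1.0 - phi) * g              # direct dependence of each a(t+1) on x
        s = sigmoid(z)
        g = phi * (W.T @ (g * s * (1.0 - s)))  # chain rule through f(W a(t))
    return grad_x + g                          # a(0) = x contributes the final term

def cf_search(W, x0, phi, T, k, y_star, lam=0.05, lr=0.3, steps=400):
    """Gradient-descent counterfactual search: push concept k's final
    activation toward y_star while keeping x close (L2) to x0."""
    x = x0.copy()
    for _ in range(steps):
        aT, pre = qfcm_forward(W, x, phi, T)
        g_T = np.zeros_like(x)
        g_T[k] = 2.0 * (aT[k] - y_star)        # gradient of the squared error
        grad = qfcm_grad_x(W, phi, pre, g_T) + 2.0 * lam * (x - x0)
        x = np.clip(x - lr * grad, 0.0, 1.0)   # stay in the valid state space
    return x

# Hypothetical 3-concept weight matrix and initial activation vector.
W_demo = np.array([[0.0, -0.4, 0.3],
                   [0.5,  0.0, 0.2],
                   [0.6,  0.5, 0.0]])
x0_demo = np.array([0.2, 0.4, 0.3])
```

Because the optimization runs directly in the model's continuous state space (the clip keeps activations in [0,1]), the resulting counterfactual respects the map's causal dynamics rather than perturbing features independently.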
| Original language | English |
|---|---|
| Number of pages | 15 |
| Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
| Early online date | Jan 2026 |
| DOIs | |
| Publication status | Published - 7 Jan 2026 |
Keywords
- Analytical models
- Cognition
- Computational modeling
- Convergence
- Counterfactual explanations
- Fuzzy cognitive maps
- Mathematical models
- Predictive models
- Sensitivity analysis
- Stability analysis
- Vectors
- eXplainable artificial intelligence (XAI)
- fuzzy cognitive maps (FCMs)
- Recurrent neural networks
Fingerprint
Research topics of 'Backpropagation-Based Counterfactual Explanations for Quasi-Nonlinear Fuzzy Cognitive Maps'.