Sparseness-Optimized Feature Importance

Isel Grau, Gonzalo Nápoles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

In this paper, we propose a model-agnostic post-hoc explanation method for computing feature attributions. The proposed method, termed Sparseness-Optimized Feature Importance (SOFI), solves an optimization problem concerning the sparseness of feature importance explanations. The intuition behind this property is that the model’s performance degrades severely after marginalizing the most important features while remaining largely unaffected after marginalizing the least important ones. Existing post-hoc feature attribution methods do not optimize this property directly but instead rely on proxies to obtain this behavior. Numerical simulations on both structured (tabular) and unstructured (image) classification datasets show the superiority of our proposal over state-of-the-art feature attribution methods. The implementation of the method is available at https://github.com/igraugar/sofi.
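To make the marginalization intuition concrete, the following is a minimal sketch (not the authors’ SOFI implementation) of a deletion-style check: features are replaced by their training-set mean, first the top-ranked and then the bottom-ranked ones, and the resulting accuracy drops are compared. The dataset, model, and the use of random-forest impurity importance as the stand-in attribution are all illustrative assumptions, not choices taken from the paper.

```python
# Sketch: marginalizing the most important features should hurt accuracy
# sharply; marginalizing the least important ones should barely affect it.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# Stand-in attribution: impurity importance, sorted most important first.
ranking = np.argsort(model.feature_importances_)[::-1]

def accuracy_after_marginalizing(features):
    """Replace the given feature columns with their training mean, then score."""
    X_marg = X_te.copy()
    X_marg[:, features] = X_tr[:, features].mean(axis=0)
    return model.score(X_marg, y_te)

k = 5
print("marginalize top-k:   ", accuracy_after_marginalizing(ranking[:k]))
print("marginalize bottom-k:", accuracy_after_marginalizing(ranking[-k:]))
# A sparse, faithful attribution yields a much lower score in the first case.
```

Under a faithful, sparse attribution, the first score should be markedly lower than the second; SOFI optimizes this kind of behavior directly rather than through a proxy.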

Original language: English
Title of host publication: Explainable Artificial Intelligence. xAI 2024
Editors: Luca Longo, Sebastian Lapuschkin, Christin Seifert
Publisher: Springer Cham
Pages: 393-415
Number of pages: 23
ISBN (Print): 9783031637964
Publication status: Published - 2024

Publication series

Name: Communications in Computer and Information Science
Volume: 2154 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Keywords

  • feature importance
  • model-agnostic explainability
  • sparse explanations
