Measuring Implicit Bias Using SHAP Feature Importance and Fuzzy Cognitive Maps

Isel Grau, Gonzalo Nápoles, Fabian Hoitsma, Lisa Koutsoviti Koumeri, Koen Vanhoof

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


    Abstract

    In this paper, we integrate the concepts of feature importance and implicit bias in the context of pattern classification. This is done by means of a three-step methodology that involves (i) building a classifier and tuning its hyperparameters, (ii) building a Fuzzy Cognitive Map model able to quantify implicit bias, and (iii) using the SHAP feature importance to activate the neural concepts when performing simulations. The results on a real case study concerning fairness research support our two-fold hypothesis. On the one hand, we illustrate the risks of using a feature importance method as an absolute tool to measure implicit bias. On the other hand, we conclude that the amount of bias towards protected features might differ depending on whether the features are numerically or categorically encoded.
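
    To make the three-step pipeline concrete, below is a minimal, illustrative sketch in Python. It assumes scikit-learn and the shap library; the synthetic data, the hand-written FCM weight matrix W, and the fcm_simulate helper are placeholders for illustration only and do not reproduce the paper's actual model or bias measure.

    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Step (i): build a classifier (hyperparameter tuning omitted for brevity).
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Step (iii): SHAP feature importance. Mean absolute SHAP values per
    # feature become the activation vector for the FCM concepts.
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(X)
    sv = sv[1] if isinstance(sv, list) else sv[..., 1]  # positive-class attributions
    activation = np.abs(sv).mean(axis=0)
    activation = activation / activation.max()          # scale to [0, 1]

    # Step (ii): a toy FCM over the four feature concepts. W[i, j] is an
    # illustrative (made-up) causal weight from concept i to concept j; the
    # paper derives such weights to quantify implicit bias.
    W = np.array([[0.0, 0.4, 0.0, 0.2],
                  [0.1, 0.0, 0.3, 0.0],
                  [0.0, 0.5, 0.0, 0.1],
                  [0.2, 0.0, 0.4, 0.0]])

    def fcm_simulate(a0, W, steps=20):
        """Iterate the common FCM rule a(t+1) = sigmoid(a(t) @ W) for `steps` steps."""
        a = np.asarray(a0, dtype=float).copy()
        for _ in range(steps):
            a = 1.0 / (1.0 + np.exp(-(a @ W)))
        return a

    # Activate the neural concepts with the SHAP importances and simulate.
    print(fcm_simulate(activation, W))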
    Original language: Undefined/Unknown
    Title of host publication: Intelligent Systems Conference
    Pages: 745-764
    Number of pages: 20
    DOIs
    Publication status: Published - 2024

    Keywords

    • fairness
    • implicit bias
    • explainable artificial intelligence
    • feature importance
    • fuzzy cognitive maps
