Bias Quantification for Protected Features in Pattern Classification Problems

Lisa Koutsoviti Koumeri*, Gonzalo Nápoles

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


The need to measure and mitigate bias in machine learning data sets has gained wide recognition in the field of Artificial Intelligence (AI) during the past decade. The academic and business communities call for new general-purpose measures to quantify bias. In this paper, we propose a new measure that relies on fuzzy-rough set theory. The intuition behind our measure is that protected features should not change the fuzzy-rough set boundary regions significantly. The extent to which they do can be understood as a proxy for bias quantification. Our measure can be categorized as an individual fairness measure since the fuzzy-rough regions are computed from instance-level information. The main advantage of our measure is that it does not depend on any prediction model, only on a distance function. At the same time, it offers an intuitive rationale for the concept of bias. Proof-of-concept results show that our measure captures bias issues better than other state-of-the-art measures.
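The core idea — that removing a protected feature should barely move instances in or out of the fuzzy-rough boundary region — can be sketched in code. The following is a minimal, hypothetical illustration of that idea, not the authors' exact formulation: it assumes a similarity relation derived from normalized Euclidean distance, the min t-norm for the upper approximation, and the Kleene-Dienes implicator for the lower approximation; the function names (`boundary_membership`, `bias_score`) are invented for this sketch.

```python
import numpy as np

def similarity(X):
    """Pairwise similarity from normalized Euclidean distance (an assumed choice)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return 1.0 - d / (d.max() + 1e-12)

def boundary_membership(X, y):
    """Per-instance membership in the fuzzy-rough boundary region of its own class."""
    R = similarity(X)
    same = (y[None, :] == y[:, None]).astype(float)  # 1 if same class, else 0
    # Lower approximation via the Kleene-Dienes implicator, upper via the min t-norm.
    lower = np.min(np.maximum(1.0 - R, same), axis=1)
    upper = np.max(np.minimum(R, same), axis=1)
    return upper - lower  # boundary region membership, in [0, 1]

def bias_score(X, y, protected_col):
    """Mean shift in boundary membership when the protected feature is dropped."""
    b_full = boundary_membership(X, y)
    b_without = boundary_membership(np.delete(X, protected_col, axis=1), y)
    return float(np.mean(np.abs(b_full - b_without)))
```

A larger `bias_score` would then indicate that the protected column materially reshapes the boundary regions — the proxy for bias described in the abstract. Because everything is computed from pairwise distances between instances, no trained prediction model is involved, matching the model-independence claim.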
Original language: English
Title of host publication: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications
Editors: João Manuel R. S. Tavares, João Paulo Papa, Manuel González Hidalgo
Place of publication: Cham
Publisher: Springer International Publishing
Number of pages: 10
ISBN (Print): 9783030934200
Publication status: Published - 2021

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


