Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks

Marlotte Pannekoek, Giacomo Spigler

Research output: Contribution to journal › Article › Scientific


Abstract

To enable an ethical and legal use of machine learning algorithms, they must both be fair and protect the privacy of those whose data are being used. However, implementing privacy and fairness constraints might come at the cost of utility (Jayaraman & Evans, 2019; Gong et al., 2020). This paper investigates the privacy-utility-fairness trade-off in neural networks by comparing a Simple Neural Network (S-NN), a Fair Neural Network (F-NN), a Differentially Private Neural Network (DP-NN), and a Differentially Private and Fair Neural Network (DPF-NN), evaluating differences in performance on metrics for privacy (epsilon, delta), fairness (risk difference), and utility (accuracy). In the scenario with the strongest considered privacy guarantees (epsilon = 0.1, delta = 0.00001), the DPF-NN achieved a better risk difference than all the other neural networks, with only marginally lower accuracy than the S-NN and DP-NN. This model is considered fair, as it achieved a risk difference below both the strict (0.05) and lenient (0.1) thresholds. However, while the accuracy of the proposed model improved on previous work by Xu, Yuan and Wu (2019), the risk difference was found to be worse.
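For readers unfamiliar with the fairness metric, the sketch below illustrates how a risk difference (the absolute gap in positive-prediction rates between two groups of a binary protected attribute) can be computed and compared against the strict (0.05) and lenient (0.1) thresholds mentioned in the abstract. This is a minimal illustration, not the paper's code; the function name, sample data, and threshold check are assumptions for demonstration only.

    import numpy as np

    def risk_difference(y_pred, group):
        """Absolute difference in positive-prediction rates between
        the two groups of a binary protected attribute."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_0 = y_pred[group == 0].mean()  # positive rate, group 0
        rate_1 = y_pred[group == 1].mean()  # positive rate, group 1
        return abs(rate_0 - rate_1)

    # Hypothetical predictions and group labels for six individuals.
    rd = risk_difference([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1])
    print(f"risk difference = {rd:.3f}")
    print("fair (strict, < 0.05):", rd < 0.05)
    print("fair (lenient, < 0.1):", rd < 0.1)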
Original language: English
Journal: arXiv
Publication status: Unpublished - 2021

Keywords

  • cs.LG
  • cs.AI
