Denoising Autoencoders for Overgeneralization in Neural Networks

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Despite the recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs belonging to categories unknown during training, or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, even with a high degree of confidence. Solving this problem may help improve the security of such systems in critical applications, and may further lead to applications in the context of open set recognition and 1-class recognition. This paper presents a novel way to compute a confidence score using denoising autoencoders and shows that such a confidence score can correctly identify the regions of the input space close to the training distribution by approximately identifying its local maxima.
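To make the idea concrete, the following is a minimal NumPy sketch of the general approach the abstract describes, not the paper's actual architecture or scoring function: a small denoising autoencoder is trained on toy in-distribution data, and a confidence score derived from its reconstruction error is high near the training distribution and low for a novel, far-away input. All names, sizes, and the exponential form of the score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training distribution": two tight 2-D clusters (stand-ins for known classes).
X = np.vstack([
    rng.normal([1.0, 1.0], 0.1, size=(200, 2)),
    rng.normal([-1.0, -1.0], 0.1, size=(200, 2)),
])

# Single-hidden-layer denoising autoencoder: tanh encoder, linear decoder.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

lr, noise = 0.05, 0.3
for epoch in range(2000):
    Xn = X + rng.normal(0, noise, X.shape)   # corrupt the input
    h = np.tanh(Xn @ W1 + b1)
    R = h @ W2 + b2                          # reconstruct the *clean* input
    err = R - X                              # gradient of the MSE loss (up to a factor)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # backprop through tanh
    gW1 = Xn.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def confidence(x):
    """Illustrative confidence score: high where the DAE reconstructs x well,
    i.e. near the training distribution; low far from it."""
    h = np.tanh(x @ W1 + b1)
    r = h @ W2 + b2
    return float(np.exp(-np.sum((r - x) ** 2)))

in_dist = np.array([1.0, 1.0])    # close to a training cluster
novel = np.array([4.0, -4.0])     # far from everything seen in training
print(confidence(in_dist) > confidence(novel))
```

A softmax classifier trained on the two clusters would still assign the novel point to one of the known classes with high probability; the reconstruction-based score, by contrast, drops sharply away from the training data, which is the overgeneralization gap the paper targets.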
Original language: English
Pages (from-to): 1-1
Number of pages: 1
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
DOI: 10.1109/tpami.2019.2909876
Publication status: Published - 21 May 2019

Cite this

@article{d4a353a004cb4ff78b4711551f15c524,
title = "Denoising Autoencoders for Overgeneralization in Neural Networks",
abstract = "Despite the recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs belonging to categories unknown during training, or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, even with a high degree of confidence. Solving this problem may help improve the security of such systems in critical applications, and may further lead to applications in the context of open set recognition and 1-class recognition. This paper presents a novel way to compute a confidence score using denoising autoencoders and shows that such a confidence score can correctly identify the regions of the input space close to the training distribution by approximately identifying its local maxima.",
author = "Giacomo Spigler",
year = "2019",
month = may,
day = "21",
doi = "10.1109/tpami.2019.2909876",
language = "English",
pages = "1--1",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
}

TY - JOUR
T1 - Denoising Autoencoders for Overgeneralization in Neural Networks
AU - Spigler, Giacomo
PY - 2019/5/21
Y1 - 2019/5/21
N2 - Despite the recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs belonging to categories unknown during training, or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, even with a high degree of confidence. Solving this problem may help improve the security of such systems in critical applications, and may further lead to applications in the context of open set recognition and 1-class recognition. This paper presents a novel way to compute a confidence score using denoising autoencoders and shows that such a confidence score can correctly identify the regions of the input space close to the training distribution by approximately identifying its local maxima.
AB - Despite the recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs belonging to categories unknown during training, or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, even with a high degree of confidence. Solving this problem may help improve the security of such systems in critical applications, and may further lead to applications in the context of open set recognition and 1-class recognition. This paper presents a novel way to compute a confidence score using denoising autoencoders and shows that such a confidence score can correctly identify the regions of the input space close to the training distribution by approximately identifying its local maxima.
UR - http://www.mendeley.com/research/denoising-autoencoders-overgeneralization-neural-networks
U2 - 10.1109/tpami.2019.2909876
DO - 10.1109/tpami.2019.2909876
M3 - Article
SP - 1
EP - 1
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
ER -