Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations

Chris Emmery, Akos Kádár, Grzegorz Chrupała, W.M.P. Daelemans

Research output: Contribution to conference › Paper › Scientific › peer-review

Abstract

A limited number of studies investigate the role of model-agnostic adversarial behavior in toxic content classification. As toxicity classifiers predominantly rely on lexical cues, (deliberately) creative and evolving language use can be detrimental to the utility of current corpora and state-of-the-art models when they are deployed for content moderation. The less training data is available, the more vulnerable models may become. This study is, to our knowledge, the first to investigate the effect of adversarial behavior and augmentation for cyberbullying detection. We demonstrate that model-agnostic lexical substitutions significantly hurt classifier performance. Moreover, when these perturbed samples are used for augmentation, we show that models become robust against word-level perturbations at a slight trade-off in overall task performance. Augmentations proposed in prior work on toxicity prove to be less effective. Our results underline the need for such evaluations in online harm areas with small corpora. The perturbed data, models, and code are available for reproduction at https://github.com/cmry/augtox.
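As a rough, hypothetical illustration of the kind of word-level lexical substitution the abstract refers to (not the authors' actual substitution procedure, which is documented in the repository above), a minimal Python sketch might look as follows; the lexicon and function names here are invented purely for illustration:

    import random

    # Hypothetical toy substitution lexicon; the paper derives substitutes
    # model-agnostically, so these entries are illustrative only.
    SUBSTITUTES = {
        "stupid": ["dense", "slow"],
        "loser": ["failure", "reject"],
        "ugly": ["hideous", "unsightly"],
    }

    def perturb(tokens, rate=0.5, seed=42):
        """Replace a fraction of matched tokens with a lexical substitute."""
        rng = random.Random(seed)
        out = []
        for tok in tokens:
            subs = SUBSTITUTES.get(tok.lower())
            if subs and rng.random() < rate:
                out.append(rng.choice(subs))  # word-level perturbation
            else:
                out.append(tok)
        return out

    original = "you are a stupid loser".split()
    perturbed = perturb(original)
    # Evaluate a trained classifier on perturbed inputs to probe robustness,
    # or add (perturbed text, original label) pairs to the training data
    # as augmentation.
    print(" ".join(perturbed))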
Original language: English
Publication status: Published - 20 Jun 2022
Event: Language Resources and Evaluation Conference - Palais du Pharo, Marseille, France
Duration: 21 Jun 2021 - 25 Jul 2021
Conference number: 13
https://lrec2022.lrec-conf.org/en/

Conference

Conference: Language Resources and Evaluation Conference
Abbreviated title: LREC 2022
Country/Territory: France
City: Marseille
Period: 21/06/21 - 25/07/21
Internet address: https://lrec2022.lrec-conf.org/en/

Keywords

  • Cyberbullying detection
  • Lexical substitution
  • Data augmentation
