Abstract
A limited number of studies have investigated the role of model-agnostic adversarial behavior in toxic content classification. As toxicity classifiers predominantly rely on lexical cues, (deliberately) creative and evolving language use can be detrimental to the utility of current corpora and state-of-the-art models when they are deployed for content moderation. The less training data is available, the more vulnerable models may become. This study is, to our knowledge, the first to investigate the effect of adversarial behavior and augmentation for cyberbullying detection. We demonstrate that model-agnostic lexical substitutions significantly hurt classifier performance. Moreover, when these perturbed samples are used for augmentation, we show that models become robust against word-level perturbations, at a slight trade-off in overall task performance. Augmentations proposed in prior work on toxicity prove to be less effective. Our results underline the need for such evaluations in online harm areas with small corpora. The perturbed data, models, and code are available for reproduction at https://github.com/cmry/augtox .
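The following is an illustrative sketch only, not the authors' method: it shows the general idea of model-agnostic word-level substitution described in the abstract, using WordNet synonyms as a stand-in substitution source and a hypothetical `perturb` helper. The actual substitution models and augmentation pipeline are in the paper's repository (https://github.com/cmry/augtox).

```python
# Sketch of model-agnostic lexical substitution for perturbation/augmentation.
# Assumption: WordNet synonyms as the substitution source (requires
# nltk.download('wordnet')); the paper may use different substitution models.
import random
from nltk.corpus import wordnet as wn


def synonym_candidates(word: str) -> list[str]:
    """Collect single-token WordNet synonyms for `word`, excluding the word itself."""
    candidates = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower() and " " not in name:
                candidates.add(name)
    return sorted(candidates)


def perturb(tokens: list[str], rate: float = 0.3, seed: int = 42) -> list[str]:
    """Replace a fraction of tokens with lexical substitutes, without querying
    any target classifier (hence 'model-agnostic')."""
    rng = random.Random(seed)
    perturbed = list(tokens)
    for i, token in enumerate(tokens):
        if rng.random() < rate:
            candidates = synonym_candidates(token)
            if candidates:
                perturbed[i] = rng.choice(candidates)
    return perturbed


# Perturbed copies of training messages can then be added back as augmentation data:
message = "you are such a stupid loser".split()
augmented = [message, perturb(message)]
print(augmented)
```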
Original language | English |
---|---|
Publication status | Published - 20 Jun 2022 |
Event | Language Resources and Evaluation Conference (LREC 2022), Palais du Pharo, Marseille, France. Duration: 21 Jun 2022 → 25 Jun 2022. Conference number: 13. https://lrec2022.lrec-conf.org/en/ |
Conference
Conference | Language Resources and Evaluation Conference |
---|---|
Abbreviated title | LREC 2022 |
Country/Territory | France |
City | Marseille |
Period | 21/06/22 → 25/06/22 |
Internet address | https://lrec2022.lrec-conf.org/en/ |
Keywords
- Cyberbullying detection
- Lexical substitution
- Data augmentation