Abstract
Adversarial examples in NLP are receiving increasing research attention. One line of investigation is the generation of word-level adversarial examples against fine-tuned Transformer models that preserve naturalness and grammaticality. Previous work found that human- and machine-generated adversarial examples are comparable in their naturalness and grammatical correctness. Most notably, humans were able to generate adversarial examples with far less effort than automated attacks require. In this paper, we provide a detailed analysis of exactly how humans create these adversarial examples. By exploring the behavioural patterns of human workers during the generation process, we identify statistically significant tendencies in which words humans prefer to select for adversarial replacement (e.g., word frequencies, word saliencies, sentiment), as well as where and when in an input sequence words are replaced. With our findings, we seek to inspire efforts that harness human strategies for more robust NLP models.
Original language | English
---|---
Title of host publication | Findings of the Association for Computational Linguistics: EMNLP 2022
Place of Publication | Abu Dhabi, United Arab Emirates
Publisher | Association for Computational Linguistics
Pages | 6118-6126
Number of pages | 9
DOIs |
Publication status | Published - 2022