Implicit causality in GPT-2: a case study

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

This case study investigates the extent to which a language model (GPT-2) captures native speakers' intuitions about implicit causality in a sentence completion task. Study 1 reproduces earlier results showing that the model's surprisal values correlate with the implicit causality bias of the verb (Davis and van Schijndel 2021), and then examines the effects of gender and verb frequency on model performance. Study 2 examines the reasoning ability of GPT-2: is the model able to produce more sensible motivations for why the subject VERBed the object when the verbs have stronger causality biases? For this study we took care to prevent human raters from being biased by obscenities and disfluencies generated by the model.
Original language: English
Title of host publication: Proceedings of the 15th International Conference on Computational Semantics
Editors: Maxime Amblard, Ellen Breitholtz
Place of publication: Nancy, France
Publisher: Association for Computational Linguistics
Pages: 67-77
Number of pages: 11
Publication status: Published - 1 Jun 2023
