Assessing the usefulness of Google Books' word frequencies for psycholinguistic research on word processing

Marc Brysbaert*, Emmanuel Keuleers, Boris New

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-reviewed

40 Citations (Scopus)

Abstract

In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of only 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, as are word frequencies derived from fiction books. Even the most predictive word frequencies from Google, however, do not explain more of the variance in the word recognition times of undergraduate students and older adults than the subtitle-based word frequencies.
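The comparison described in the abstract rests on a standard measure: how much variance in lexical decision times a (log-transformed) frequency count explains, i.e. the R² of a simple regression. The sketch below illustrates that comparison with synthetic data; the variable names, effect sizes, and noise levels are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical illustration with synthetic data: compare how much variance
# in simulated lexical decision times two frequency measures explain (R^2).
rng = np.random.default_rng(0)
n = 1000

# Simulated log10 word frequencies from two corpora (values are made up).
log_freq_subtitles = rng.normal(3.0, 1.0, n)
# A second, noisier proxy measure of the same underlying frequencies.
log_freq_books = log_freq_subtitles + rng.normal(0.0, 0.8, n)

# Simulated reaction times (ms): more frequent words are responded to faster.
rt = 900 - 60 * log_freq_subtitles + rng.normal(0.0, 80.0, n)

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x (squared Pearson r)."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

r2_subtitles = r_squared(log_freq_subtitles, rt)
r2_books = r_squared(log_freq_books, rt)
print(f"R^2 subtitles: {r2_subtitles:.3f}, books: {r2_books:.3f}")
```

In this toy setup the noisier measure necessarily explains less variance; the paper's point is the empirically analogous finding that subtitle frequencies out-predict the much larger Google Books counts.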

Original language: English
Article number: 27
Number of pages: 8
Journal: Frontiers in Psychology
Volume: 2
DOIs
Publication status: Published - 2011
Externally published: Yes

Keywords

  • word frequency
  • lexical decision
  • Google Books ngrams
  • SUBTLEX
