Abstract
We present a prototype model, based on a combination of count-based distributional semantics and prediction-based neural word embeddings, which learns about syntactic categories as a function of (1) writing contextual, phonological, and lexical-stress-related information to memory and (2) predicting upcoming context words based on the memorized information. The system is a first step towards applying recently popular methods from Natural Language Processing to explore the role of prediction in children's acquisition of syntactic categories.
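The abstract describes two components: a memory of contextual, phonological, and stress-related features, and a predictive learner trained to anticipate upcoming words from that memory. The following is a minimal sketch of that idea, not the paper's actual implementation: the toy corpus, the hand-coded phonological features, and the softmax predictor are all illustrative assumptions.

```python
# Hedged sketch (not the paper's architecture): (1) store count-based context
# features plus crude phonological/stress features in a memory matrix, then
# (2) train a softmax layer to predict the upcoming word from that memory.
import numpy as np

rng = np.random.default_rng(0)

# Tiny invented corpus; the paper would use child-directed-speech data.
corpus = "the cat sees the dog the dog sees the cat a cat runs a dog runs".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# (1) Memory: left-context counts concatenated with dummy phonological
# features (word length and a stand-in "stress" bit).
context_counts = np.zeros((V, V))
for prev, nxt in zip(corpus, corpus[1:]):
    context_counts[idx[nxt], idx[prev]] += 1.0
phon = np.array([[len(w), 1.0 if len(w) > 3 else 0.0] for w in vocab])
memory = np.hstack([context_counts, phon])
memory /= memory.sum(axis=1, keepdims=True) + 1e-9  # row-normalize

# (2) Prediction: softmax weights trained by cross-entropy to predict the
# next word from the memorized representation of the current word.
D = memory.shape[1]
W = 0.01 * rng.standard_normal((D, V))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for epoch in range(200):
    for cur, nxt in zip(corpus, corpus[1:]):
        x = memory[idx[cur]]
        p = softmax(x @ W)
        p[idx[nxt]] -= 1.0            # gradient of cross-entropy w.r.t. logits
        W -= lr * np.outer(x, p)

# Words with similar predictive behaviour get similar score vectors; clustering
# these rows is one way proto-categories could be read off such a model.
scores = memory @ W
for w in vocab:
    print(f"{w:>5} -> predicts '{vocab[int(np.argmax(scores[idx[w]]))]}' next")
```

Clustering the resulting representations (e.g., the rows of `scores`) would be one plausible way to group words into emergent syntactic categories, but how the original model derives categories is not specified in this abstract.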
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 6th Workshop on Cognitive Aspects of Computational Language Learning |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 28-32 |
| Number of pages | 5 |
| Publication status | Published - Sept 2015 |
| Externally published | Yes |