Abstract
We address the problem of creating and evaluating high-quality Neo-Latin word embeddings for philosophical research, adapting the Nonce2Vec tool to learn embeddings from Neo-Latin sentences. This distributional semantic modeling tool learns incrementally from very small data, using a larger background corpus for initialization. We conduct two evaluation tasks: definitional learning of Latin Wikipedia terms, and learning consistent embeddings from 18th-century Neo-Latin sentences pertaining to the concept of mathematical method. Our results show that consistent Neo-Latin word embeddings can be learned from this type of data. While these evaluation results are promising, they do not reveal to what extent the learned models match domain experts' knowledge of our Neo-Latin texts. We therefore propose an additional evaluation method, grounded in expert-annotated data, that would assess whether the learned representations are conceptually sound with respect to the domain of study.
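The incremental pattern described here, a background model initialized on a larger corpus and then updated on a handful of new sentences, can be sketched with gensim's standard Word2Vec API. This is not the Nonce2Vec implementation itself, and the toy Latin sentences, the target term `methodus`, and all hyperparameters below are invented for illustration:

```python
# Minimal sketch of background-initialization plus incremental updating,
# using plain gensim Word2Vec (not the authors' Nonce2Vec code).
from gensim.models import Word2Vec

# 1. Initialize a background model on a larger Latin corpus
#    (here, a few hypothetical tokenized sentences stand in for it).
background_sentences = [
    ["methodus", "est", "via", "ad", "veritatem"],
    ["mathematica", "disciplina", "est", "de", "quantitate"],
    # ... many more tokenized sentences ...
]
model = Word2Vec(background_sentences, vector_size=100, min_count=1, epochs=5)

# 2. Update the model incrementally on a handful of Neo-Latin
#    sentences containing the term of interest ("tiny data").
neo_latin_sentences = [
    ["methodus", "mathematica", "certitudinem", "praebet"],
    ["de", "methodo", "demonstrandi", "more", "geometrico"],
]
model.build_vocab(neo_latin_sentences, update=True)  # add new word types
model.train(neo_latin_sentences,
            total_examples=len(neo_latin_sentences),
            epochs=5)

# 3. Inspect the learned embedding and its nearest neighbours.
print(model.wv["methodus"])
print(model.wv.most_similar("methodus", topn=5))
```

Nonce2Vec itself refines this generic pattern, for instance with an aggressive, decaying learning rate for the novel word, so that a single occurrence moves its vector much further than standard Word2Vec training would.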
Original language | English
---|---
Title of host publication | Proceedings of LT4HALA 2020 |
Subtitle of host publication | 1st Workshop on Language Technologies for Historical and Ancient Languages |
Publisher | European Language Resources Association (ELRA) |
Pages | 84-93 |
Number of pages | 10 |
ISBN (Print) | 979-10-95546-53-5
Publication status | Published - 2020 |
Keywords
- distributional semantics
- evaluation
- small data
- philosophy
- digital humanities
- Neo-Latin