Abstract
As computer-assisted research of voluminous datasets becomes more pervasive, so does criticism of its epistemological, methodological, and ethical/normative inadequacies. This article proposes a hybrid approach that combines the scale of computational methods with the depth of qualitative analysis. It uses simple natural language processing algorithms to extract purposive samples from large textual corpora, which can then be analyzed using interpretive techniques. This approach helps make research more theoretically grounded and contextually sensitive—two major failings of typical “Big Data” studies. Simultaneously, it allows qualitative scholars to examine datasets that are otherwise too large to study manually and to bring more rigor to the process of sampling. The method is illustrated with two case studies, one looking at the inaugural addresses of U.S. presidents and the other investigating the news coverage of two shootings at an army camp in Texas.
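To make the abstract's core idea concrete, below is a minimal sketch of one way "simple natural language processing algorithms" could extract a purposive sample from a large corpus: rank documents by their saturation with seed keywords, then hand the top-ranked texts to a human analyst for close, interpretive reading. The function names (`purposive_sample`, `keyword_score`), the seed keywords, and the sample size are illustrative assumptions, not the article's actual pipeline.

```python
# Illustrative sketch (not the authors' code): keyword-based purposive
# sampling from a large text corpus, prior to qualitative analysis.
import re
from collections import Counter


def tokenize(text):
    """Lowercase a document and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def keyword_score(tokens, keywords):
    """Fraction of a document's tokens that match any seed keyword."""
    counts = Counter(tokens)
    hits = sum(counts[k] for k in keywords)
    return hits / max(len(tokens), 1)


def purposive_sample(corpus, keywords, n=50):
    """Return the n documents most saturated with the seed keywords,
    for subsequent close, interpretive reading."""
    scored = ((keyword_score(tokenize(doc), keywords), doc) for doc in corpus)
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in ranked[:n] if score > 0]


# Hypothetical usage, loosely echoing the second case study:
# surface news items about the Texas army-camp shootings from a
# much larger collection of articles.
articles = ["..."]  # a large collection of news texts
sample = purposive_sample(articles, {"shooting", "soldier", "base"}, n=25)
```

Under these assumptions, the computational step only narrows the corpus; the depth the article argues for comes from the manual, qualitative reading of the sampled texts that follows.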
Original language | English |
---|---|
Pages (from-to) | 28-50 |
Journal | Communication Methods and Measures |
Volume | 10 |
Issue number | 1 |
Publication status | Published - 2016 |