Abstract
Voting Advice Applications (VAAs) are online tools that present attitude questions about political issues, such as "All coffee shops should be closed down" (agree/disagree/no opinion). Users visit these survey tools spontaneously to obtain voting advice based on a comparison between their answers and the issue positions of political parties. While VAAs have become a central source of political information in many European countries, little is known about how users comprehend and answer political opinion questions in VAAs. We employed cognitive interviewing as well as statistical analyses of large numbers of VAA answers to evaluate comprehension difficulties in VAAs.
We collected data during three elections in the Netherlands: the national elections (2012), the municipal elections (2014), and the provincial elections (2015). In each election, we asked between 40 and 80 people to fill out a VAA while thinking aloud (Willis 2005), and their verbalizations were recorded for subsequent interpretation. Moreover, we gained access to all answers provided in 34 VAAs in the 2014 Dutch municipal elections, as well as to answers provided in 11 provincial elections and 4 national elections. This allows us to combine the qualitative think-aloud data with analyses of these large datasets.
Analyses of the qualitative data show that users encounter comprehension difficulties for about one in every five VAA questions. About two-thirds of the comprehension problems relate to the semantic meaning of the question; these often involve a lack of understanding of technical political terms (e.g., dog tax). Problems with the pragmatic meaning of the question (about one-third of the problems) include having too little information about the current situation or lacking background information about the reason for a proposal. When facing comprehension problems, VAA users often make assumptions about the meaning of the question ("welfare work…do they mean health care with that? They probably do."). Rather than looking for additional information, VAA users nevertheless proceed by supplying an answer, which is disproportionately often a neutral or a no-opinion answer.
A drawback of these qualitative analyses is that they rely on relatively few respondents. We therefore conducted quantitative analyses for each election to investigate whether the question characteristics associated with difficulties in the qualitative study indeed led to more neutral and no-opinion responding in a large, real-life dataset of people's actual responses to VAA questions. For the municipal elections, for example, we analyzed all answers provided to VAAs during the 2014 Dutch municipal elections (34 VAAs × 30 questions, from over 300,000 respondents). This confirmed that the mention of political terms and locations was correlated with larger proportions of no-opinion and neutral responses. We also found that while semantic comprehension problems are more often correlated with no-opinion answers, pragmatic problems more frequently translate into a neutral answer. This refines existing findings concerning the use of nonsubstantive answering options (e.g., Sturgis, Roberts & Smith, 2014).
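To illustrate the kind of question-level analysis described above, the sketch below aggregates individual VAA answers to per-question proportions of neutral and no-opinion responses and relates them to coded question characteristics. It is a minimal sketch, not the authors' actual pipeline; the file names, column names (e.g., `has_political_term`, `mentions_location`), and the simple OLS model are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data layout, not the study's actual code):
# relate question-level characteristics to the share of neutral / no-opinion answers.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed inputs:
# - vaa_answers.csv: one row per respondent-question pair
#   (columns: vaa_id, question_id, response)
# - vaa_questions.csv: manually coded question characteristics
#   (columns: vaa_id, question_id, has_political_term, mentions_location)
answers = pd.read_csv("vaa_answers.csv")
questions = pd.read_csv("vaa_questions.csv")

# Flag nonsubstantive responses and aggregate to question-level proportions.
answers["neutral"] = (answers["response"] == "neutral").astype(int)
answers["no_opinion"] = (answers["response"] == "no_opinion").astype(int)
rates = (answers
         .groupby(["vaa_id", "question_id"])[["neutral", "no_opinion"]]
         .mean()
         .reset_index())

data = rates.merge(questions, on=["vaa_id", "question_id"])

# Simple OLS of the no-opinion rate on question characteristics; the study
# itself may well have used a different specification (e.g., multilevel models).
model = smf.ols("no_opinion ~ has_political_term + mentions_location",
                data=data).fit()
print(model.summary())
```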
| Original language | English |
| --- | --- |
| Publication status | Published - Oct 2018 |
| Event | BigSurv, Barcelona, Spain |
| Period | 24 Oct 2018 → 27 Oct 2018 |