I Get It! Using Qualitative and Quantitative Data to Investigate Comprehension Difficulties in Political Attitude Questions

Naomi Kamoen, Bregje Holleman

Research output: Contribution to conference › Abstract › Other research output

Abstract

Voting Advice Applications (VAAs) are online tools with attitude questions about political issues, such as “All coffee shops should be closed down. Agree/Disagree/No opinion”. Users visit these survey tools spontaneously to obtain a voting advice based on a comparison between the user’s answers and the issue positions of the political parties. While VAAs have become a central source of political information in many European countries, not much is known about how political opinion questions in VAAs are understood and used. We employed cognitive interviewing as well as statistical analyses of large numbers of VAA answers to evaluate comprehension difficulties in VAAs. We collected data during three elections in the Netherlands: the national elections (2012), the municipal elections (2014), and the provincial elections (2015). In each election, we asked between 40 and 80 people to fill out a VAA while thinking aloud (Willis, 2005). Their verbalizations were recorded so that we could interpret them afterwards. Moreover, we gained access to all the VAA answers provided in 34 VAAs in the 2014 Dutch municipal elections, as well as to answers provided in 11 provincial elections and 4 national elections. This allowed us to combine the qualitative think-aloud data with analyses of these big datasets. Analyses of the qualitative data show that users encounter comprehension difficulties for about 1 in every 5 VAA questions. About two-thirds of the comprehension problems are related to the semantic meaning of the question; these often involve a lack of understanding of technical political terms (e.g., dog tax). Problems with the pragmatic meaning of the question (about one-third of the problems) included having too little information about the current situation or lacking background information about the reason for a proposal.
In case of comprehension problems, VAA users often make assumptions about the meaning of the question (“welfare work… do they mean health care with that? They probably do.”). Without looking for additional information, VAA users nevertheless proceed to supply an answer, which is disproportionately often a neutral or a no-opinion answer. A drawback of these qualitative analyses is that they rely on relatively few respondents. We therefore conducted quantitative analyses for each election to investigate whether the question characteristics associated with difficulties in the qualitative study indeed led to more neutral and no-opinion responding in a large, real-life dataset of people’s actual responses to VAA questions. For the municipal elections, for example, we analyzed all answers provided to VAAs during the Dutch municipal elections (34 VAAs × 30 questions each, from over 300,000 respondents). This confirmed that mentioning political terms and locations was correlated with larger proportions of no-opinion and neutral responses. We also found that while semantic comprehension problems are more often correlated with no-opinion answers, pragmatic problems more frequently translate into a neutral answer. This refines existing findings on the use of non-substantive answer options (e.g., Sturgis, Roberts, & Smith, 2014).
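The quantitative step described above can be sketched as a simple comparison of non-substantive answer rates between questions that do and do not contain a technical political term. The data and column meanings below are hypothetical illustrations, not the authors' actual pipeline or dataset.

```python
from collections import Counter

# Hypothetical toy records: (question_contains_political_term, answer),
# where answer is "agree", "disagree", "neutral", or "no_opinion".
responses = [
    (True, "no_opinion"), (True, "neutral"), (True, "agree"),
    (True, "no_opinion"), (False, "agree"), (False, "disagree"),
    (False, "agree"), (False, "neutral"),
]

def nonsubstantive_rate(records, with_term):
    """Share of neutral/no-opinion answers among questions that do
    (with_term=True) or do not (with_term=False) contain a term."""
    answers = [a for term, a in records if term == with_term]
    counts = Counter(answers)
    return (counts["neutral"] + counts["no_opinion"]) / len(answers)

rate_term = nonsubstantive_rate(responses, True)      # 3/4 = 0.75
rate_no_term = nonsubstantive_rate(responses, False)  # 1/4 = 0.25
```

In a real analysis of this kind one would model the answer category per respondent per question (e.g., with a multilevel logistic regression) rather than compare raw proportions, but the contrast computed here captures the basic correlation the abstract reports.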
Original language: English
Publication status: Published - Oct 2018
Event: BigSurv - Barcelona, Spain
Duration: 24 Oct 2018 – 27 Oct 2018

Conference

Conference: BigSurv
Country: Spain
City: Barcelona
Period: 24/10/18 – 27/10/18

