Design of Web Questionnaires: A Test for Number of Items per Screen

Research output: Working paper › Discussion paper



This paper presents results from an experimental manipulation of a one- versus multiple-items-per-screen format in a Web survey. The purpose of the experiment was to determine whether a questionnaire's format influences how respondents provide answers in online questionnaires, and whether any such influence depends on personal characteristics. Four formats were used, varying the number of items per screen (1, 4, 10, and 40 items). To test the robustness of the results, and to find out whether a specific format shows more deviation in answer scores, the experiment was repeated. We found that mean scores, variances, and correlations differ little across formats. In addition, the formats show the same deviation of item scores between repeated experiments. With regard to non-response error, we found that the more items appear on a single screen, the higher the number of respondents with one or more missing values. Placing more items on a single screen (a) shortens the duration of the interview, (b) negatively influences the respondent's evaluation of that duration, (c) negatively influences the respondent's evaluation of the layout, and (d) increases the difficulty of completing the interview. We also found that scrolling negatively influences the evaluation of a questionnaire's layout. Furthermore, the results show that differences between formats are influenced by personal characteristics.
Original language: English
Place of publication: Tilburg
Number of pages: 43
Publication status: Published - 2005

Publication series: CentER Discussion Paper


Keywords

  • questionnaires
  • error analysis
  • web surveys
  • questionnaire design
  • measurement errors
  • non-response errors


