
Abstract

In recent decades, research about survey questions has emphasized decision-based approaches. Current research focuses on identifying and systematizing characteristics of questions that are key in researchers’ decisions. We describe important classes of decisions and question characteristics: topic, type of question (e.g., event or behavior, evaluation or judgment), response dimension (e.g., occurrence, frequency, intensity), conceptualization and operationalization of the target object (e.g., how to label the object being asked about and the response dimension), question structure (e.g., use of a filter question, placement in a battery), question form or response format (e.g., yes–no, selection from ordered categories, choice from a list, discrete value), response categories, question wording, and question implementation. We use the framework of question characteristics to summarize key results in active research areas and provide practical recommendations. Progress depends on recognizing how question characteristics co-occur, using a variety of methods and appropriate models, and implementing study designs with strong criteria.

DOI: 10.1146/annurev-soc-121919-054544
Published: 2020-07-30

  • Article Type: Review Article