
Abstract

Technology has changed the way organizational researchers obtain participants for their studies. Although online platforms have made it easier to collect large quantities of data, they have also highlighted potential data quality problems in many of our samples. In this article, we review different sampling techniques, including convenience, purposive, probability-based, and snowball sampling, and we highlight the strengths and weaknesses of each approach to help organizational researchers choose the technique most appropriate for their research questions. We identify best practices that researchers can use to improve the quality of their samples, including screening techniques that strengthen online sampling. Finally, as part of our review we examined the sampling procedures of all empirical research articles published in the past 5 years, and we use these observations to draw conclusions about the lack of methodological and sample diversity in organizational research, the overreliance on a few sampling techniques, the need to report key aspects of sampling, and concerns about participant quality.
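
The abstract refers to screening techniques for improving the quality of online samples. As a purely illustrative sketch, and not the authors' procedure, the Python snippet below flags three warning signs commonly discussed in this literature: implausibly fast completion, failed attention checks, and straight-lining. All column names, expected answers, and thresholds are hypothetical assumptions.

```python
# Minimal, illustrative data-quality screen for online survey responses.
# Assumes a pandas DataFrame with hypothetical columns "duration_sec",
# "attn_check_1", "attn_check_2", and Likert items "q1".."q20".
# Column names, expected answers, and cutoffs are assumptions, not the
# authors' actual screening procedure.
import pandas as pd

LIKERT_ITEMS = [f"q{i}" for i in range(1, 21)]


def longstring(row: pd.Series) -> int:
    """Length of the longest run of identical consecutive responses."""
    values = row.tolist()
    longest = run = 1
    for prev, curr in zip(values, values[1:]):
        run = run + 1 if curr == prev else 1
        longest = max(longest, run)
    return longest


def screen_responses(df: pd.DataFrame,
                     min_seconds: int = 120,
                     max_longstring: int = 10) -> pd.DataFrame:
    """Return per-respondent flags so cases can be reviewed, not silently dropped."""
    flags = pd.DataFrame(index=df.index)
    # Implausibly fast completion relative to an assumed minimum plausible time.
    flags["too_fast"] = df["duration_sec"] < min_seconds
    # Instructed-response (attention-check) items; expected answers 3 and 5
    # are placeholders for whatever the instructions asked respondents to select.
    flags["failed_attention"] = (df["attn_check_1"] != 3) | (df["attn_check_2"] != 5)
    # Straight-lining: long runs of identical answers across the Likert items.
    flags["straight_lining"] = df[LIKERT_ITEMS].apply(longstring, axis=1) > max_longstring
    flags["any_flag"] = flags[["too_fast", "failed_attention", "straight_lining"]].any(axis=1)
    return flags
```

Whatever rules a study adopts, the abstract's point about reporting key aspects of sampling suggests documenting how many respondents each rule flagged and how flagged cases were handled.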

