
Abstract

Self-report measures are susceptible to threats associated with deliberate dissimulation or response distortion (i.e., socially desirable responding) and with careless responding. Careless responding typically arises in low-stakes settings (e.g., participating in a study for course credit) in which some respondents are not motivated to respond conscientiously to the items. In contrast, in high-stakes assessments (e.g., prehire assessments), the outcomes tied to respondents' scores motivate them to present themselves in as favorable a light as possible, and they may respond dishonestly in an effort to accomplish this objective. In this article, we draw a distinction between the lazy respondent, whom we associate with careless responding, and the dishonest respondent, whom we associate with response distortion. We then seek to answer the following questions for both careless responding and response distortion: (a) What is it? (b) Why is it a problem or concern? (c) Why do people engage in it? (d) How pervasive is it? (e) Can it be prevented or mitigated, and if so, how? (f) How is it detected? (g) What does one do upon detecting it? We conclude with a discussion of suggested future research directions and some practical guidelines for practitioners and researchers.
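To make the detection question concrete, the screening indices discussed in this literature are straightforward to compute. The following Python sketch is illustrative only (it is not drawn from the article): it implements two commonly used indices, the longstring index (the length of the longest run of identical consecutive responses, a marker of straight-lining) and the squared Mahalanobis distance for flagging multivariate outliers. The function names, the respondents-by-items data layout, and the chi-square cutoff at alpha = .001 are our assumptions.

    import numpy as np
    from scipy.stats import chi2

    def longstring(responses):
        """Longest run of identical consecutive answers for each respondent.

        Straight-lining respondents produce runs approaching the number
        of items; extreme values are candidates for careless responding.
        """
        runs = []
        for row in responses:
            best, cur = 1, 1
            for prev, nxt in zip(row, row[1:]):
                cur = cur + 1 if nxt == prev else 1
                best = max(best, cur)
            runs.append(best)
        return np.array(runs)

    def mahalanobis_flags(responses, alpha=0.001):
        """Flag respondents whose squared Mahalanobis distance exceeds
        the chi-square cutoff at the given alpha (df = number of items).

        Uses a pseudo-inverse so a near-singular covariance matrix does
        not raise an error; in practice the sample size should comfortably
        exceed the number of items.
        """
        X = np.asarray(responses, dtype=float)
        diff = X - X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        cutoff = chi2.ppf(1 - alpha, df=X.shape[1])
        return d2 > cutoff

    # Toy data: five respondents, six 1-5 Likert items.
    # Respondent 0 straight-lines and receives longstring = 6.
    data = np.array([
        [3, 3, 3, 3, 3, 3],
        [2, 4, 1, 5, 3, 2],
        [4, 4, 3, 5, 4, 4],
        [1, 2, 2, 1, 3, 2],
        [5, 1, 4, 2, 5, 1],
    ])
    print(longstring(data))
    print(mahalanobis_flags(data))

Consistent with best-practice recommendations in this literature, flagged cases are better treated as candidates for follow-up (e.g., sensitivity analyses with and without them) than as automatic deletions.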

