
Abstract

Meta-analysis provides a powerful tool for integrating findings from the research literature and building statistical models to explore trends and inconsistencies in the research base. Meta-analysis starts with a process for translating results from each study into an effect size that represents all findings in a common metric. Statistical models are then applied to estimate the mean, variance, and moderators of effect size. This article explores several key decision points in conducting a meta-analysis, including issues in obtaining a common metric, accounting for psychometric artifacts, and choosing an appropriate statistical model. It provides recommendations for choosing among alternate approaches and reporting results to ensure transparency.
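The workflow the abstract describes (convert each study's result to a common metric, then estimate the mean and between-study variance of effect sizes) can be sketched in a few lines. The following is a minimal illustration, not the authors' method: it pools correlations via the Fisher z-transform and uses the DerSimonian–Laird estimator for the between-study variance; the function name and the example data are invented for illustration.

```python
# Minimal sketch of a random-effects meta-analysis of correlations:
# Fisher-z common metric, DerSimonian-Laird tau^2. Illustrative only.
import math

def random_effects_meta(rs, ns):
    """Pool correlations rs from k studies with sample sizes ns."""
    # 1. Common metric: Fisher z-transform each correlation.
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]           # sampling variance of z
    ws = [1.0 / v for v in vs]                 # fixed-effect weights

    # 2. Fixed-effect mean and Q statistic (heterogeneity test).
    z_fe = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fe) ** 2 for w, z in zip(ws, zs))

    # 3. DerSimonian-Laird between-study variance, truncated at 0.
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)

    # 4. Random-effects mean: add tau^2 to each study's variance
    #    before reweighting, then back-transform to the r metric.
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re), tau2

# Hypothetical data: three studies' correlations and sample sizes.
r_mean, tau2 = random_effects_meta([0.30, 0.45, 0.20], [60, 120, 80])
```

The moderator analyses mentioned in the abstract would extend step 4 by regressing the effect sizes on study-level covariates (meta-regression) with these same inverse-variance weights.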


doi: 10.1146/annurev-orgpsych-031921-021922
2023-01-23
2024-04-24

Literature Cited

  1. Adams RJ, Smart P, Huff AS. 2017. Shades of grey: guidelines for working with the grey literature in systematic reviews for management and organizational studies. Int. J. Manag. Rev. 19:443254
    [Google Scholar]
  2. Aguinis H. 2001. Estimation of sampling variance of correlations in meta-analysis. Pers. Psychol. 54:356990
    [Google Scholar]
  3. Aguinis H, Dalton DR, Bosco FA, Pierce CA, Dalton CM. 2011. Meta-analytic choices and judgment calls: implications for theory building and testing, obtained effect sizes, and scholarly impact. J. Manag. 37:1538
    [Google Scholar]
  4. Aguinis H, Pierce CA, Culpepper SA. 2009. Scale coarseness as a methodological artifact: correcting correlation coefficients attenuated from using coarse scales. Organ. Res. Methods 12:462352
    [Google Scholar]
  5. APA (Am. Psychol. Assoc.). 2018. JARS Quant table 9: quantitative meta-analysis article reporting standards J. Art. Report. Stand., APA Washington, DC: https://apastyle.apa.org/jars/quant-table-9.pdf
  6. Aytug ZG, Rothstein HR, Zhou W, Kern MC. 2012. Revealed or concealed? Transparency of procedures, decisions, and judgment calls in meta-analyses. Organ. Res. Methods 15:110333
    [Google Scholar]
  7. Baguley T. 2009. Standardized or simple effect size: What should be reported?. Br. J. Psychol. 100:360317
    [Google Scholar]
  8. Banks GC, Rogelberg SG, Woznyj HM, Landis RS, Rupp DE. 2016. Evidence on questionable research practices: the good, the bad, and the ugly. J. Bus. Psychol. 31:332338
    [Google Scholar]
  9. Barrett GV, Phillips JS, Alexander RA. 1981. Concurrent and predictive validity designs: a critical reanalysis. J. Appl. Psychol. 66:116
    [Google Scholar]
  10. Beatty AS, Barratt CL, Berry CM, Sackett PR. 2014. Testing the generalizability of indirect range restriction corrections. J. Appl. Psychol. 99:458798
    [Google Scholar]
  11. Becker BJ. 1988. Synthesizing standardized mean-change measures. Br. J. Math. Stat. Psychol. 41:225778
    [Google Scholar]
  12. Becker BJ 2000. Multivariate meta-analysis. Handbook of Applied Multivariate Statistics and Mathematical Modeling HEA Tinsley, SD Brown 499525. San Diego, CA: Academic
    [Google Scholar]
  13. Becker BJ, Wu M-J. 2007. The synthesis of regression slopes in meta-analysis. Stat. Sci. 22:341429
    [Google Scholar]
  14. Berry CM, Sackett PR, Landers RN. 2007. Revisiting interview–cognitive ability relationships: attending to specific range restriction mechanisms in meta-analysis. Pers. Psychol. 60:483774
    [Google Scholar]
  15. Biggerstaff BJ, Tweedie RL. 1997. Incorporating variability in estimates of heterogeneity in the random effects model in meta-analysis. Stat. Med. 16:775368
    [Google Scholar]
  16. Bobko P. 1983. An analysis of correlations corrected for attenuation and range restriction. J. Appl. Psychol. 68:458489
    [Google Scholar]
  17. Bobko P, Roth PL, Bobko C. 2001. Correcting the effect size of d for range restriction and unreliability. Organ. Res. Methods 4:14661
    [Google Scholar]
  18. Bond CF, Wiitala WL, Richard FD. 2003. Meta-analysis of raw mean differences. Psychol. Methods 8:440618
    [Google Scholar]
  19. Bonett D. 2008. Meta-analytic interval estimation for bivariate correlations. Psychol. Methods 13:17381
    [Google Scholar]
  20. Bonett D. 2009. Meta-analytic interval estimation for standardized and unstandardized mean differences. Psychol. Methods 14:22538
    [Google Scholar]
  21. Borenstein M. 2019. Common Mistakes in Meta-Analysis and How to Avoid Them. Englewood, NJ: Biostat
  22. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. 2010. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res. Synth. Methods 1:297111
    [Google Scholar]
  23. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. 2021. Introduction to Meta-Analysis New York: Wiley
  24. Borenstein M, Higgins JPT, Hedges LV, Rothstein HR. 2017. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res. Synth. Methods 8:1518
    [Google Scholar]
  25. Brannick MT, French KA, Rothstein HR, Kiselica AM, Apostoloski N. 2021. Capturing the underlying distribution in meta-analysis: credibility and tolerance intervals. Res. Synth. Methods 12:326490
    [Google Scholar]
  26. Brannick MT, Potter SM, Benitez B, Morris SB. 2019a. Bias and precision of alternate estimators in meta-analysis: benefits of blending Schmidt–Hunter and Hedges approaches. Organ. Res. Methods 22:2490514
    [Google Scholar]
  27. Brannick MT, Potter S, Teng Y. 2019b. Quantifying uncertainty in the meta-analytic lower bound estimate. Psychol. Methods 24:675473
    [Google Scholar]
  28. Brown RD, Oswald FL, Converse PD. 2017. Estimating operational validity under incidental range restriction: some important but neglected issues. Pract. Assess. Res. Eval. 22:616
    [Google Scholar]
  29. Brown SH, Stout JD, Dalessio AT, Crosby MM. 1988. Stability of validity indices through test score ranges. J. Appl. Psychol. 73:473642
    [Google Scholar]
  30. Burke MJ, Landis RS, Burke MI. 2014. .80 and beyond: recommendations for disattenuating correlations. Ind. Organ. Psychol. 7:453135
    [Google Scholar]
  31. Burr D, Doss H. 2005. A Bayesian semiparametric model for random-effects meta-analysis. J. Am. Stat. Assoc. 100:46924251
    [Google Scholar]
  32. Carlson KD, Ji FX. 2011. Citing and building on meta-analytic findings: a review and recommendations. Organ. Res. Methods 14:4696717
    [Google Scholar]
  33. Carretta TR, Ree MJ. 2022. Correction for range restriction: lessons from 20 research scenarios. Mil. Psychol. 34:555169
    [Google Scholar]
  34. Carter EC, Schönbrodt FD, Gervais WM, Hilgard J. 2019. Correcting for bias in psychology: a comparison of meta-analytic methods. Adv. Methods Pract. Psychol. Sci. 2:211544
    [Google Scholar]
  35. Cheung MW-L. 2008. A model for integrating fixed-, random-, and mixed-effects meta-analyses into structural equation modeling. Psychol. Methods 13:3182202
    [Google Scholar]
  36. Cheung MW-L. 2013. Multivariate meta-analysis as structural equation models. Struct. Equ. Model. Multidiscip. J. 20:342954
    [Google Scholar]
  37. Cheung MW-L. 2014. Modeling dependent effect sizes with three-level meta-analyses: a structural equation modeling approach. Psychol. Methods 19:221129
    [Google Scholar]
  38. Cheung MW-L. 2019. A guide to conducting a meta-analysis with non-independent effect sizes. Neuropsychol. Rev. 29:438796
    [Google Scholar]
  39. Conway JM, Huffcutt AI. 1997. Psychometric properties of multisource performance ratings: a meta-analysis of subordinate, supervisor, peer, and self-ratings. Hum. Perform. 10:433160
    [Google Scholar]
  40. Cornwell JM, Ladd RT. 1993. Power and accuracy of the Schmidt and Hunter meta-analytic procedures. Educ. Psychol. Meas. 53:487795
    [Google Scholar]
  41. Cortina JM. 1993. What is coefficient alpha? An examination of theory and applications. J. Appl. Psychol. 78:198104
    [Google Scholar]
  42. Cortina JM, Nouri H. 2000. Effect Size for ANOVA Designs Thousand Oaks, CA: SAGE
  43. Cortina JM, Sheng Z, Keener SK, Keeler KR, Grubb LK et al. 2020. From alpha to omega and beyond! A look at the past, present, and (possible) future of psychometric soundness in the. Journal of Applied Psychology. J. Appl. Psychol. 105:12135181
    [Google Scholar]
  44. Coward WM, Sackett PR. 1990. Linearity of ability–performance relationships: a reconfirmation. J. Appl. Psychol. 75:3297300
    [Google Scholar]
  45. Culpepper SA. 2016. An improved correction for range restricted correlations under extreme, monotonic quadratic nonlinearity and heteroscedasticity. Psychometrika 81:255064
    [Google Scholar]
  46. Cumming G. 2012. Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis New York: Routledge/Taylor & Francis
  47. Dahlke JA, Sackett PR. 2018. Refinements to effect sizes for tests of categorical moderation and differential prediction. Organ. Res. Methods 21:122634
    [Google Scholar]
  48. Dahlke JA, Wiernik BM. 2020. Not restricted to selection research: accounting for indirect range restriction in organizational research. Organ. Res. Methods 23:471749
    [Google Scholar]
  49. DeShon RP 2003. A generalizability theory perspective on measurement error corrections in validity generalization. Validity Generalization: A Critical Review KR Murphy 365402. Mahwah, NJ: Erlbaum
    [Google Scholar]
  50. DeSimone JA, Brannick MT, O'Boyle EH, Ryu JW 2021. Recommendations for reviewing meta-analyses in organizational research. Organ. Res. Methods 24:4694717
    [Google Scholar]
  51. DeSimone JA, Köhler T, Schoen JL. 2019. If it were only that easy: the use of meta-analytic research by organizational scholars. Organ. Res. Methods 22:486791
    [Google Scholar]
  52. Edwards JR. 2011. The fallacy of formative measurement. Organ. Res. Methods 14:237088
    [Google Scholar]
  53. Efthimiou O, Debray TPA, van Valkenhoef G, Trelle S, Panayidou K et al. 2016. GetReal in network meta-analysis: a review of the methodology. Res. Synth. Methods 7:323663
    [Google Scholar]
  54. Ellington JK, McAbee ST, Landis RS, Mead AD. 2021. I only have one rater per ratee, so what? The impact of clustered performance rating data on operational validity estimates. J. Bus. Psychol. 36:13354
    [Google Scholar]
  55. Erez A, Bloom MC, Wells MT. 1996. Using random rather than fixed effects models in meta-analysis: implications for situational specificity and validity generalization. Pers. Psychol. 49:2275306
    [Google Scholar]
  56. Feingold A. 2009. Effect sizes for growth-modeling analysis for controlled clinical trials in the same metric as for classical analysis. Psychol. Methods 14:14353
    [Google Scholar]
  57. Field AP. 2001. Meta-analysis of correlation coefficients: a Monte Carlo comparison of fixed- and random-effects methods. Psychol. Methods 6:216180
    [Google Scholar]
  58. Field AP. 2005. Is the meta-analysis of correlation coefficients accurate when population correlations vary?. Psychol. Methods 10:444467
    [Google Scholar]
  59. Fife DA, Hunter MD, Mendoza JL. 2016. Estimating unattenuated correlations with limited information about selection variables: alternatives to Case IV. Organ. Res. Methods 19:4593615
    [Google Scholar]
  60. Fife DA, Mendoza J, Day E, Terry R. 2020. Estimating subgroup differences in staffing research when the selection mechanism is unknown: a response to Li's Case IV correction. Organ. Res. Methods 23:236784
    [Google Scholar]
  61. Fife DA, Mendoza JL, Terry R. 2013. Revisiting Case IV: a reassessment of bias and standard errors of Case IV under range restriction. Br. J. Math. Stat. Psychol. 66:352142
    [Google Scholar]
  62. Finkelstein LM, Burke MJ, Raju MS. 1995. Age discrimination in simulated employment contexts: an integrative analysis. J. Appl. Psychol. 80:665263
    [Google Scholar]
  63. Glass GV, McGaw B, Smith ML. 1981. Meta-Analysis in Social Research Thousand Oaks, CA: SAGE
  64. Gonzalez-Mulé E, Aguinis H. 2018. Advancing theory by assessing boundary conditions with metaregression: a critical review and best-practice recommendations. J. Manag. 44:6224673
    [Google Scholar]
  65. Gooty J, Banks GC, Loignon AC, Tonidandel S, Williams CE. 2021. Meta-analyses as a multi-level model. Organ. Res. Methods 24:2389411
    [Google Scholar]
  66. Gross AL. 1982. Relaxing the assumptions underlying corrections for restriction of range. Educ. Psychol. Meas. 42:3795801
    [Google Scholar]
  67. Gross AL, Fleischman L. 1983. Restriction of range corrections when both distribution and selection assumptions are violated. Appl. Psychol. Meas. 7:222737
    [Google Scholar]
  68. Gusenbauer M, Haddaway NR. 2020. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 11:2181217
    [Google Scholar]
  69. Haddock CK, Rindskopf D, Shadish WR. 1998. Using odds ratios as effect sizes for meta-analysis of dichotomous data: a primer on methods and issues. Psychol. Methods 3:333953
    [Google Scholar]
  70. Hall SM, Brannick MT. 2002. Comparison of two random-effects methods of meta-analysis. J. Appl. Psychol. 87:237789
    [Google Scholar]
  71. Hartung J, Knapp G. 2001. A refined method for the meta-analysis of controlled clinical trials with binary outcome. Stat. Med. 20:24387589
    [Google Scholar]
  72. Hedges LV. 1982. Estimation of effect size from a series of independent experiments. Psychol. Bull. 92:249099
    [Google Scholar]
  73. Hedges LV. 1992. Meta-analysis. J. Educ. Stat. 17:427996
    [Google Scholar]
  74. Hedges LV 2009. Effect sizes in nested designs. The Handbook of Research Synthesis and Meta-Analysis H Cooper, LV Hedges, JC Valentine 33755. New York: Russell Sage Found. , 2nd ed..
    [Google Scholar]
  75. Hedges LV, Olkin I. 1985. Statistical Methods for Meta-Analysis San Diego, CA: Academic
  76. Hedges LV, Pigott TD. 2001. The power of statistical tests in meta-analysis. Psychol. Methods 6:320317
    [Google Scholar]
  77. Hedges LV, Pigott TD. 2004. The power of statistical tests for moderators in meta-analysis. Psychol. Methods 9:442645
    [Google Scholar]
  78. Hedges LV, Tipton E, Johnson MC 2010. Robust variance estimation in meta-regression with dependent effect size estimates. Res. Synth. Methods 1:13965
    [Google Scholar]
  79. Hedges LV, Vevea JL. 1998. Fixed- and random-effects models in meta-analysis. Psychol. Methods 3:4486504
    [Google Scholar]
  80. Higgins JPT, Jackson D, Barrett JK, Lu G, Ades AE, White IR. 2012. Consistency and inconsistency in network meta-analysis: concepts and models for multi-arm studies. Res. Synth. Methods 3:298110
    [Google Scholar]
  81. Higgins JPT, Thompson SG. 2002. Quantifying heterogeneity in a meta-analysis. Stat. Med. 21:11153958
    [Google Scholar]
  82. Higgins JPT, Thompson SG, Spiegelhalter DJ. 2009. A re-evaluation of random-effects meta-analysis. J. R. Stat. Soc. A 172:113759
    [Google Scholar]
  83. Highhouse S, Doverspike D, Guion RM. 2015. Essentials of Personnel Assessment and Selection New York: Routledge. , 2nd ed..
  84. Hoffman BJ, Woehr DJ. 2009. Disentangling the meaning of multisource performance rating source and dimension factors. Pers. Psychol. 62:473565
    [Google Scholar]
  85. Huffcutt AI, Culbertson SS, Weyhrauch WS. 2014. Moving forward indirectly: reanalyzing the validity of employment interviews with indirect range restriction methodology. Int. J. Sel. Assess. 22:3297309
    [Google Scholar]
  86. Hunter JE, Schmidt FL, Le H. 2006. Implications of direct and indirect range restriction for meta-analysis methods and findings. J. Appl. Psychol. 91:3594612
    [Google Scholar]
  87. Jackson D. 2006. The power of the standard test for the presence of heterogeneity in meta-analysis. Stat. Med. 25:15268899
    [Google Scholar]
  88. James LR, Demaree RG, Mulaik SA, Ladd RT. 1992. Validity generalization in the context of situational models. J. Appl. Psychol. 77:1314
    [Google Scholar]
  89. Kelley K, Preacher KJ. 2012. On effect size. Psychol. Methods 17:213752
    [Google Scholar]
  90. Kepes S, Banks GC, McDaniel M, Whetzel DL. 2012. Publication bias in the organizational sciences. Organ. Res. Methods 15:462462
    [Google Scholar]
  91. Kepes S, Keener SK, McDaniel MA, Hartman NS. 2022. Questionable research practices among researchers in the most research-productive management programs. J. Organ. Behav. 43:71190208
    [Google Scholar]
  92. Kepes S, McDaniel MA, Brannick MT, Banks GC. 2013. Meta-analytic reviews in the organizational sciences: two meta-analytic schools on the way to MARS (the Meta-Analytic Reporting Standards). J. Bus. Psychol. 28:212343
    [Google Scholar]
  93. Knapp G, Hartung J. 2003. Improved tests for a random effects meta-regression with a single covariate. Stat. Med. 22:172693710
    [Google Scholar]
  94. Köhler T, Cortina JM, Kurtessis JN, Gölz M. 2015. Are we correcting correctly? Interdependence of reliabilities in meta-analysis. Organ. Res. Methods 18:3355428
    [Google Scholar]
  95. Lance CE, Hoffman BJ, Gentry WA, Baranik LE. 2008. Rater source factors represent important subcomponents of the criterion construct space, not rater bias. Hum. Resour. Manag. Rev. 18:422332
    [Google Scholar]
  96. Langan D, Higgins JPT, Jackson D, Bowden J, Veroniki AA et al. 2019. A comparison of heterogeneity variance estimators in simulated random-effects meta-analyses. Res. Synth. Methods 10:18398
    [Google Scholar]
  97. Law KS, Schmidt FL, Hunter JE. 1994. A test of two refinements in procedures for meta-analysis. J. Appl. Psychol. 79:697886
    [Google Scholar]
  98. Law M, Jackson D, Turner R, Rhodes K, Viechtbauer W. 2016. Two new methods to fit models for network meta-analysis with random inconsistency effects. BMC Med. Res. Methodol. 16:87
    [Google Scholar]
  99. Le H, Oh I-S, Schmidt FL, Wooldridge CD. 2016. Correction for range restriction in meta-analysis revisited: improvements and implications for organizational research. Pers. Psychol. 69:49751008
    [Google Scholar]
  100. Le H, Schmidt FL, Putka DJ. 2009. The multifaceted nature of measurement artifacts and its implications for estimating construct-level relationships. Organ. Res. Methods 12:1165200
    [Google Scholar]
  101. LeBreton JM, Burgess JRD, Kaiser RB, Atchley EK, James LR. 2003. The restriction of variance hypothesis and interrater reliability and agreement: Are ratings from multiple sources really dissimilar?. Organ. Res. Methods 6:180128
    [Google Scholar]
  102. LeBreton JM, Scherer KT, James LR. 2014. Corrections for criterion reliability in validity generalization: a false prophet in a land of suspended judgment. Ind. Organ. Psychol. 7:4478500
    [Google Scholar]
  103. LeBreton JM, Schoen JL, James LR 2017. Situational specificity, validity generalization, and the future of psychometric meta-analysis. Handbook of Employee Selection JL Farr, NT Tippins 93114. London: Taylor & Francis. , 2nd ed..
    [Google Scholar]
  104. Lee KJ, Thompson SG. 2008. Flexible parametric models for random-effects distributions. Stat. Med. 27:341834
    [Google Scholar]
  105. Li JC-H. 2015. Cohen's d corrected for Case IV range restriction: a more accurate procedure for evaluating subgroup differences in organizational research. Pers. Psychol. 68:4899927
    [Google Scholar]
  106. Li JC-H, Cui Y, Chan W 2013. Bootstrap confidence intervals for the mean correlation corrected for Case IV range restriction: a more adequate procedure for meta-analysis. J. Appl. Psychol. 98:118393
    [Google Scholar]
  107. Li X, Dusseldorp E, Meulman JJ. 2017. Meta-CART: a tool to identify interactions between moderators in meta-analysis. Br. J. Math. Stat. Psychol. 70:111836
    [Google Scholar]
  108. Li X, Dusseldorp E, Su X, Meulman JJ. 2020. Multiple moderator meta-analysis using the R-package Meta-CART. Behav. Res. Methods 52:6265773
    [Google Scholar]
  109. Lipsey MW, Wilson DB. 1993. The efficacy of psychological, educational, and behavioral treatment: confirmation from meta-analysis. Am. Psychol. 48:121181209
    [Google Scholar]
  110. López-López JA, Marín-Martínez F, Sánchez-Meca J, Van Den Noortgate W, Viechtbauer W. 2014. Estimation of the predictive power of the model in mixed-effects meta-regression: a simulation study. Br. J. Math. Stat. Psychol. 67:13048
    [Google Scholar]
  111. López-López JA, Page MJ, Lipsey MW, Higgins JPT. 2018. Dealing with effect size multiplicity in systematic reviews and meta-analyses. Res. Synth. Methods 9:333651
    [Google Scholar]
  112. López-López JA, Van Den Noortgate W, Tanner-Smith EE, Wilson SJ, Lipsey MW. 2017. Assessing meta-regression methods for examining moderator relationships with dependent effect sizes: a Monte Carlo simulation. Res. Synth. Methods 8:443550
    [Google Scholar]
  113. Lu G, Ades AE. 2004. Combination of direct and indirect evidence in mixed treatment comparisons. Stat. Med. 23:20310524
    [Google Scholar]
  114. MacKenzie SB, Podsakoff PM, Jarvis CB. 2005. The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. J. Appl. Psychol. 90:471030
    [Google Scholar]
  115. Mavridis D, Giannatsi M, Cipriani A, Salanti G. 2015. A primer on network meta-analysis with emphasis on mental health. Evid. Based Ment. Health 18:24046
    [Google Scholar]
  116. Morris SB. 2008. Estimating effect sizes from pretest-posttest-control group designs. Organ. Res. Methods 11:236486
    [Google Scholar]
  117. Morris SB, Daisley RL, Wheeler M, Boyer P. 2015. A meta-analysis of the relationship between individual assessments and job performance. J. Appl. Psychol. 100:1520
    [Google Scholar]
  118. Morris SB, DeShon RP. 1997. Correcting effect sizes computed from factor analysis of variance for use in meta-analysis. Psychol. Methods 2:2192
    [Google Scholar]
  119. Morris SB, DeShon RP. 2002. Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychol. Methods 7:110525
    [Google Scholar]
  120. Morris SB, McAbee ST, Landis RS, Bauer KN. 2017. Don't get too confident: uncertainty in SDρ. Ind. Organ. Psychol. 10:346772
    [Google Scholar]
  121. Morris SB, Shokri A. 2021. Effect size and effect uncertainty in organizational research methods. Oxford Research Encyclopedia of Business and Management MA Hitt Oxford, UK: Oxford Univ. Press https://doi.org/10.1093/acrefore/9780190224851.013.238
    [Google Scholar]
  122. Murphy KR. 2000. Impact of assessments of validity generalization and situational specificity on the science and practice of personnel selection. Int. J. Sel. Assess. 8:4194206
    [Google Scholar]
  123. Murphy KR, DeShon RP. 2000. Interrater correlations do not estimate the reliability of job performance ratings. Pers. Psychol. 53:4873900
    [Google Scholar]
  124. Murphy KR, Myors B, Wolach A. 2009. Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests New York: Routledge/Taylor & Francis. , 3rd ed..
  125. Nimon K, Zientek L, Henson R. 2012. The assumption of a reliable instrument and other pitfalls to avoid when considering the reliability of data. Front. Psychol. 3:102
    [Google Scholar]
  126. Nye CD, Sackett PR. 2017. New effect sizes for tests of categorical moderation and differential prediction. Organ. Res. Methods 20:463964
    [Google Scholar]
  127. Oh I-S, Roth PL. 2017. On the mystery (or myth) of challenging principles and methods of validity generalization (VG) based on fragmentary knowledge and improper or outdated practices of VG. Ind. Organ. Psychol. 10:347985
    [Google Scholar]
  128. Olejnik S, Algina J. 2000. Measures of effect size for comparative studies: applications, interpretations, and limitations. Contemp. Educ. Psychol. 25:324186
    [Google Scholar]
  129. Olian JD, Schwab DP, Haberfeld Y. 1988. The impact of applicant gender compared to qualifications on hiring recommendations: a meta-analysis of experimental studies. Organ. Behav. Hum. Decis. Process. 41:218095
    [Google Scholar]
  130. O'Neill TA, McLarnon MJW, Carswell JJ. 2015. Variance components of job performance ratings. Hum. Perform. 28:16691
    [Google Scholar]
  131. Ones DS, Viswesvaran C. 2003. Job-specific applicant pools and national norms for personality scales: implications for range-restriction corrections in validation research. J. Appl. Psychol. 88:357077
    [Google Scholar]
  132. Oswald FL, Ercan S, McAbee ST, Ock J, Shaw A. 2015. Imperfect corrections or correct imperfections? Psychometric corrections in meta-analysis. Ind. Organ. Psychol. 8:2e14
    [Google Scholar]
  133. Oswald FL, Johnson JW. 1998. On the robustness, bias, and stability of statistics from meta-analysis of correlation coefficients: some initial Monte Carlo findings. J. Appl. Psychol. 83:216478
    [Google Scholar]
  134. Oswald FL, McCloy RA 2003. Meta-analysis and the art of the average. Validity Generalization: A Critical Review KR Murphy 31138. Mahwah, NJ: Erlbaum
    [Google Scholar]
  135. Park HSH, Wiernik BM, Oh I-S, Gonzalez-Mulé E, Ones DS, Lee Y. 2020. Meta-analytic five-factor model personality intercorrelations: eeny, meeny, miney, moe, how, which, why, and where to go. J. Appl. Psychol. 105:121490529
    [Google Scholar]
  136. Polanin JR, Pigott TD, Espelage DL, Grotpeter JK. 2019. Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses. Res. Synth. Methods 10:333042
    [Google Scholar]
  137. Putka DJ, Hoffman BJ, Carter NT. 2014. Correcting the correction: when individual raters offer distinct but valid perspectives. Ind. Organ. Psychol. 7:454348
    [Google Scholar]
  138. Raju NS, Anselmi TV, Goodman JS, Thomas A. 1998. The effect of correlated artifacts and true validity on the accuracy of parameter estimation in validity generalization. Pers. Psychol. 51:245365
    [Google Scholar]
  139. Raju NS, Brand PA. 2003. Determining the significance of correlations corrected for unreliability and range restriction. Appl. Psychol. Meas. 27:15271
    [Google Scholar]
  140. Raju NS, Burke MJ, Normand J, Langlois GM 1991. A new meta-analytic approach. J. Appl. Psychol. 76:343246
    [Google Scholar]
  141. Raju NS, Drasgow F 2003. Maximum likelihood estimation in validity generalization. Validity Generalization: A Critical Review KR Murphy 26385. Mahwah, NJ: Erlbaum
    [Google Scholar]
  142. Raju NS, Fralicx R, Steinhaus SD. 1986. Covariance and regression slope models for studying validity generalization. Appl. Psychol. Meas. 10:2195211
    [Google Scholar]
  143. Robie C, Ryan AM. 1999. Effects of nonlinearity and heteroscedasticity on the validity of conscientiousness in predicting overall job performance. Int. J. Sel. Assess. 7:315769
    [Google Scholar]
  144. Rodriguez JE, Williams DR, Bürkner P-C. 2021. Heterogeneous heterogeneity by default: testing categorical moderators in random-effects meta-analysis. PsyArXiv tqcka. https://doi.org/10.31234/osf.io/tqcka
  145. Rosenthal R, Rubin DB. 1986. Meta-analytic procedures for combining studies with multiple effect sizes. Psychol. Bull. 99:34006
    [Google Scholar]
  146. Roth PL, Le H, Oh I-S, Van Iddekinge CH, Buster MA et al. 2014. Differential validity for cognitive ability tests in employment and educational settings: not much more than range restriction?. J. Appl. Psychol. 99:1120
    [Google Scholar]
  147. Rothstein HR, Hopewell S 2009. Grey literature. The Handbook of Research Synthesis and Meta-Analysis H Cooper, LV Hedges, JC Valentine 10325. New York: Russell Sage Found. , 2nd ed..
    [Google Scholar]
  148. Rubio-Aparicio M, López-López JA, Sánchez-Meca J, Marín-Martínez F, Viechtbauer W, Van Den Noortgate W. 2018. Estimation of an overall standardized mean difference in random-effects meta-analysis if the distribution of random effects departs from normal. Res. Synth. Methods 9:3489503
    [Google Scholar]
  149. Rubio-Aparicio M, Sánchez-Meca J, López-López JA, Botella J, Marín-Martínez F. 2017. Analysis of categorical moderators in mixed-effects meta-analysis: consequences of using pooled versus separate estimates of the residual between-studies variances. Br. J. Math. Stat. Psychol. 70:343956
    [Google Scholar]
  150. Rudolph CW, Chang CK, Rauvola RS, Zacher H. 2020. Meta-analysis in vocational behavior: a systematic review and recommendations for best practices. J. Vocat. Behav. 118:103397
    [Google Scholar]
  151. Sackett PR. 2014. When and why correcting validity coefficients for interrater reliability makes sense. Ind. Organ. Psychol. 7:45016
    [Google Scholar]
  152. Sackett PR, Laczo RM, Arvey RD. 2002. The effects of range restriction on estimates of criterion interrater reliability: implications for validation research. Pers. Psychol. 55:480725
    [Google Scholar]
  153. Sackett PR, Lievens F, Berry CM, Landers RN. 2007. A cautionary note on the effects of range restriction on predictor intercorrelations. J. Appl. Psychol. 92:253844
    [Google Scholar]
  154. Sackett PR, Ostgaard DJ. 1994. Job-specific applicant pools and national norms for cognitive ability tests: implications for range restriction corrections in validation research. J. Appl. Psychol. 79:568084
    [Google Scholar]
  155. Sackett PR, Yang H. 2000. Correction for range restriction: an expanded typology. J. Appl. Psychol. 85:111218
    [Google Scholar]
  156. Sackett PR, Zhang C, Berry CM. 2021. Challenging conclusions about predictive bias against Hispanic test takers in personnel selection. J. Appl. Psychol. https://doi.org/10.1037/apl0000978
    [Google Scholar]
  157. Sackett PR, Zhang C, Berry CM, Lievens F. 2022. Revisiting meta-analytic estimates of validity in personnel selection: addressing systematic overcorrection for restriction of range. J. Appl. Psychol. 107(11):2040–68
  158. Salanti G. 2012. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res. Synth. Methods 3(2):80–97
  159. Salgado JF, Moscoso S. 2019. Meta-analysis of interrater reliability of supervisory performance ratings: effects of appraisal purpose, scale type, and range restriction. Front. Psychol. 10:2281
  160. Schmidt FL. 2017. Statistical and measurement pitfalls in the use of meta-regression in meta-analysis. Career Dev. Int. 22(5):469–76
  161. Schmidt FL, Hunter JE. 1996. Measurement error in psychological research: lessons from 26 research scenarios. Psychol. Methods 1(2):199–223
  162. Schmidt FL, Hunter JE. 2015. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Thousand Oaks, CA: SAGE
  163. Schmidt FL, Hunter JE, Pearlman K, Hirsh HR, Sackett PR, et al. 1985. Forty questions about validity generalization and meta-analysis. Pers. Psychol. 38(4):697–798
  164. Schmidt FL, Le H, Ilies R. 2003. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs. Psychol. Methods 8(2):206–24
  165. Schmidt FL, Oh I-S, Hayes TL. 2009. Fixed- versus random-effects models in meta-analysis: model properties and an empirical comparison of differences in results. Br. J. Math. Stat. Psychol. 62(1):97–128
  166. Schulze R. 2004. Meta-Analysis: A Comparison of Approaches. Göttingen, Ger.: Hogrefe
  167. Senior AM, Viechtbauer W, Nakagawa S. 2020. Revisiting and expanding the meta-analysis of variation: the log coefficient of variation ratio. Res. Synth. Methods 11(4):553–67
  168. Shen W, Cucina JM, Walmsley PT, Seltzer BK. 2014. When correcting for unreliability of job performance ratings, the best estimate is still .52. Ind. Organ. Psychol. 7(4):519–24
  169. Siddaway AP, Wood AM, Hedges LV. 2019. How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annu. Rev. Psychol. 70:747–70
  170. Sidik K, Jonkman JN. 2002. A simple confidence interval for meta-analysis. Stat. Med. 21(21):3153–59
  171. SIOP (Soc. Ind. Organ. Psychol.). 2018. Principles for the validation and use of personnel selection procedures. Ind. Organ. Psychol. Perspect. Sci. Pract. 11(Suppl. 1):2–97
  172. Steel PD, Beugelsdijk S, Aguinis H. 2021. The anatomy of an award-winning meta-analysis: recommendations for authors, reviewers, and readers of meta-analytic reviews. J. Int. Bus. Stud. 52(1):23–44
  173. Steel PD, Kammeyer-Mueller JD. 2002. Comparing meta-analytic moderator estimation techniques under realistic conditions. J. Appl. Psychol. 87(1):96–111
  174. Tanner-Smith EE, Tipton E, Polanin JR. 2016. Handling complex meta-analytic data structures using robust variance estimates: a tutorial in R. J. Dev. Life-Course Criminol. 2(1):85–112
  175. Tett RP, Hundley NA, Christiansen ND. 2017. Meta-analysis and the myth of generalizability. Ind. Organ. Psychol. 10(3):421–56
  176. Thomas A, Raju NS. 2004. An evaluation of James et al.'s 1992 VG estimation procedure when artifacts and true validity are correlated. Int. J. Sel. Assess. 12(4):299–311
  177. Thomas H. 1990. A likelihood-based model for validity generalization. J. Appl. Psychol. 75(1):13–20
  178. Tipton E. 2015. Small sample adjustments for robust variance estimation with meta-regression. Psychol. Methods 20(3):375–93
  179. Tipton E, Pustejovsky JE, Ahmadi H. 2019. A history of meta-regression: technical, conceptual, and practical developments between 1974 and 2018. Res. Synth. Methods 10(2):161–79
  180. Tziner A, Murphy KR, Cleveland JN. 2005. Contextual and rater factors affecting rating behavior. Group Organ. Manag. 30(1):89–98
  181. Van Den Noortgate W, López-López JA, Marín-Martínez F, Sánchez-Meca J. 2015. Meta-analysis of multiple outcomes: a multilevel approach. Behav. Res. Methods 47(4):1274–94
  182. Van Den Noortgate W, Onghena P. 2003. Estimating the mean effect size in meta-analysis: bias, precision, and mean squared error of different weighting methods. Behav. Res. Methods Instrum. Comput. 35(4):504–11
  183. van Zundert CHJ, Miočević M. 2020. A comparison of meta-methods for synthesizing indirect effects. Res. Synth. Methods 11(6):849–65
  184. Veroniki AA, Jackson D, Viechtbauer W, Bender R, Bowden J, et al. 2016. Methods to estimate the between-study variance and its uncertainty in meta-analysis. Res. Synth. Methods 7(1):55–79
  185. Viechtbauer W. 2005. Bias and efficiency of meta-analytic variance estimators in the random-effects model. J. Educ. Behav. Stat. 30(3):261–93
  186. Viechtbauer W. 2007a. Confidence intervals for the amount of heterogeneity in meta-analysis. Stat. Med. 26(1):37–52
  187. Viechtbauer W. 2007b. Hypothesis tests for population heterogeneity in meta-analysis. Br. J. Math. Stat. Psychol. 60(1):29–60
  188. Viechtbauer W. 2010. Conducting meta-analyses in R with the metafor package. J. Stat. Softw. 36(3):1–48
  189. Viechtbauer W, Cheung MW-L. 2010. Outlier and influence diagnostics for meta-analysis. Res. Synth. Methods 1(2):112–25
  190. Viechtbauer W, López-López JA, Sánchez-Meca J, Marín-Martínez F. 2015. A comparison of procedures to test for moderators in mixed-effects meta-regression models. Psychol. Methods 20(3):360–74
  191. Viswesvaran C, Ones DS, Schmidt FL. 1996. Comparative analysis of the reliability of job performance ratings. J. Appl. Psychol. 81(5):557–74
  192. Viswesvaran C, Ones DS, Schmidt FL, Le H, Oh I-S. 2014. Measurement error obfuscates scientific knowledge: path to cumulative knowledge requires corrections for unreliability and psychometric meta-analyses. Ind. Organ. Psychol. 7(4):507–18
  193. Viswesvaran C, Sanchez JI. 1998. Moderator search in meta-analysis: a review and cautionary note on existing approaches. Educ. Psychol. Meas. 58(1):77–87
  194. von Hippel PT. 2015. The heterogeneity statistic I² can be biased in small meta-analyses. BMC Med. Res. Methodol. 15:35
  195. Wanous JP, Sullivan SE, Malinak J. 1989. The role of judgment calls in meta-analysis. J. Appl. Psychol. 74(2):259–64
  196. Whitener EM. 1990. Confusion of confidence intervals and credibility intervals in meta-analysis. J. Appl. Psychol. 75(3):315–21
  197. Wiernik BM, Dahlke JA. 2020. Obtaining unbiased results in meta-analysis: the importance of correcting for statistical artifacts. Adv. Methods Pract. Psychol. Sci. 3(1):94–123
  198. Wiernik BM, Kostal JW, Wilmot MP, Dilchert S, Ones DS. 2017. Empirical benchmarks for interpreting effect size variability in meta-analysis. Ind. Organ. Psychol. 10(3):472–79
  199. Yuan Z, Morgeson FP, LeBreton JM. 2020. Maybe not so independent after all: the possibility, prevalence, and consequences of violating the independence assumptions in psychometric meta-analysis. Pers. Psychol. 73(3):491–516
  200. Zhang N, Wang M, Xu H. 2022. Disentangling effect size heterogeneity in meta-analysis: a latent mixture approach. Psychol. Methods 27(3):373–99
  201. Zimmerman DW. 2007. Correction for attenuation with biased reliability estimates and correlated errors in populations and samples. Educ. Psychol. Meas. 67(6):920–39