
Abstract

Despite psychological scientists’ increasing interest in replicability, open science, research transparency, and the improvement of methods and practices, the clinical psychology community has been slow to engage. This has begun to shift, and with this review, we hope to facilitate the emerging dialogue. We begin by examining potential areas of weakness in clinical psychology in terms of methods, practices, and evidentiary base. We then provide a selective overview of solutions, tools, and current concerns of the reform movement from a clinical psychological science perspective. We examine areas of clinical science expertise (e.g., implementation science) that should be leveraged to inform open science and reform efforts. Finally, we reiterate the call to clinical psychologists to increase their efforts toward reform, which can further improve the credibility of clinical psychological science.

Published 7 May 2019. doi:10.1146/annurev-clinpsy-050718-095710

  • Article Type: Review Article