Abstract

In 2010–2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call Psychology's Renaissance. We begin by describing how psychologists’ concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and preregistration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that meta-analytical thinking increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.
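The abstract's central claim, that undisclosed flexibility in data collection and analysis makes false positives easy to obtain, can be made concrete with a short simulation in the spirit of the false-positive-psychology argument. The sketch below is ours, not the authors': its parameter choices (two dependent variables, one early peek at the data before adding observations, the sample sizes) are illustrative assumptions. Under a true null effect, a researcher exploiting this modest flexibility rejects far more often than the nominal 5%.

```python
# Illustrative only: a simulated researcher with undisclosed flexibility.
# Sample sizes, number of DVs, and the single data peek are our assumptions
# for this sketch, not values taken from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_flexible_study(n_initial=20, n_added=10, n_dvs=2):
    """True if any test in a 'flexible' study reaches p < .05 under the null."""
    n_max = n_initial + n_added
    control = rng.normal(size=(n_max, n_dvs))
    treatment = rng.normal(size=(n_max, n_dvs))  # no true effect anywhere
    for n in (n_initial, n_max):       # peek early, then add observations
        for dv in range(n_dvs):        # try each dependent variable
            _, p = stats.ttest_ind(treatment[:n, dv], control[:n, dv])
            if p < 0.05:
                return True            # report whichever test "worked"
    return False

rate = np.mean([one_flexible_study() for _ in range(5_000)])
print(f"Nominal alpha: 0.05; simulated false-positive rate: {rate:.3f}")
```

With these settings the simulated rate typically lands well above 0.05 (roughly in the low teens); adding further researcher degrees of freedom, such as optional covariates or outlier rules, inflates it more. This is the dynamic that the disclosure and preregistration practices championed in the article are meant to shut down.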

