Abstract

Concern over social scientists’ inability to reproduce empirical research has spawned a vast and rapidly growing literature. The size and growth of this literature make it difficult for newly interested academics to come up to speed. Here, we apply a formal text-modeling approach to characterize the field as a whole, allowing us to summarize the breadth of this literature and identify its core themes. We construct and analyze text networks built from 1,947 articles to reveal differences across social science disciplines within the body of reproducibility publications and to survey the diversity of subtopics the literature addresses. This field-wide view suggests that reproducibility is a heterogeneous problem with multiple sources of error and multiple strategies for solutions, a finding somewhat at odds with calls for largely passive remedies reliant on open science. We propose an alternative rigor and reproducibility model that takes an active approach to rigor prior to publication, which may overcome some of the shortfalls of the postpublication model.
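
To make the text-network approach described above concrete, here is a minimal sketch of one common variant: a word co-occurrence network built from article abstracts, with community detection used to surface clusters of terms as candidate themes. The sample documents, stopword list, co-occurrence rule, and the networkx-based pipeline are illustrative assumptions, not the authors' actual corpus or code.

```python
# Minimal sketch: build a word co-occurrence network from document texts,
# then detect communities as a rough analogue of "core themes."
import itertools
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical stand-ins for a corpus of article abstracts.
docs = [
    "replication failure in psychology experiments",
    "open data sharing policies in journals",
    "publication bias and statistical significance",
    "data sharing and replication standards in sociology",
]

STOPWORDS = {"in", "and", "the", "of"}  # toy stopword list

def tokens(text):
    """Lowercase whitespace tokenization with stopword filtering."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

# Count how often each pair of distinct words co-occurs in the same document.
pair_counts = Counter()
for doc in docs:
    for a, b in itertools.combinations(sorted(set(tokens(doc))), 2):
        pair_counts[(a, b)] += 1

# Build the weighted co-occurrence network: nodes are words,
# edge weights are document-level co-occurrence counts.
G = nx.Graph()
for (a, b), w in pair_counts.items():
    G.add_edge(a, b, weight=w)

# Modularity-based community detection groups frequently co-occurring
# terms into clusters, which an analyst would then read as themes.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"Theme {i}: {sorted(community)}")
```

At the scale of the review's 1,947 articles, the same pipeline would add weighting (e.g., frequency thresholds or pointwise mutual information) before community detection, but the document-to-network-to-clusters structure is the same.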
