
Abstract

Meta-analyses contribute critically to cumulative science, but they can produce misleading conclusions if their constituent primary studies are biased, for example by unmeasured confounding in nonrandomized studies. We provide practical guidance on how meta-analysts can address confounding and other biases that affect studies’ internal validity, focusing primarily on sensitivity analyses that help quantify how biased the meta-analysis estimates might be. We review several such sensitivity analysis methods, especially recent developments that are straightforward to implement and interpret and that rest on less stringent statistical assumptions than earlier methods. We give recommendations for how these newer methods could be applied in practice and illustrate them using a previously published meta-analysis. Sensitivity analyses can provide informative quantitative summaries of evidence strength, and we suggest reporting them routinely in meta-analyses of potentially biased studies. This recommendation in no way diminishes the importance of defining study eligibility criteria that reduce bias and of characterizing studies’ risks of bias qualitatively.
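As one concrete instance of the class of methods discussed here, the E-value of VanderWeele & Ding quantifies the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need to have with both exposure and outcome to fully explain away an observed association. A minimal sketch in Python (the formula is the standard one for a risk ratio; the function name is illustrative, not from any particular package):

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio `rr`: the minimum confounding
    strength (risk-ratio scale, with both exposure and outcome) needed
    to fully explain away the observed association."""
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:
        # For apparently protective effects, invert the risk ratio first.
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed RR of 2.0 gives an E-value of 2 + sqrt(2) ~ 3.41: only a
# confounder associated with both exposure and outcome by RR >= 3.41
# could reduce the observed RR = 2.0 to the null.
print(round(e_value(2.0), 2))
```

Analogous computations for meta-analytic pooled estimates, with heterogeneity taken into account, are implemented in the authors' EValue R package cited in this review.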

Annual Review of Public Health, Vol. 43 (published 2022-04-05). DOI: 10.1146/annurev-publhealth-051920-114020

  • Article Type: Review Article