
Abstract

Precision medicine seeks to maximize the quality of health care by individualizing the health-care process to the uniquely evolving health status of each patient. This endeavor spans a broad range of scientific areas including drug discovery, genetics/genomics, health communication, and causal inference, all in support of evidence-based, i.e., data-driven, decision making. Precision medicine is formalized as a treatment regime that comprises a sequence of decision rules, one per decision point, which map up-to-date patient information to a recommended action. The potential actions could be the selection of which drug to use, the selection of dose, the timing of administration, the recommendation of a specific diet or exercise, or other aspects of treatment or care. Statistics research in precision medicine is broadly focused on methodological development for estimation of and inference for treatment regimes that maximize some cumulative clinical outcome. In this review, we provide an overview of this vibrant area of research and present important and emerging challenges.
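The formal object described above — a decision rule mapping up-to-date patient information to a recommended action, estimated to maximize a clinical outcome — can be illustrated with a minimal one-stage Q-learning sketch on synthetic data. Everything here (the variable names, the linear working model, the generative model) is an illustrative assumption for exposition, not a method from the review itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-stage data: covariate x, randomized binary action a, outcome y.
# By construction, the true optimal rule is: treat (a = 1) iff x > 0.
n = 2000
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)                         # randomized treatment
y = 0.5 * x + a * x + rng.normal(scale=0.5, size=n)    # a*x interaction drives the rule

# Q-learning with a linear working model:
#   Q(x, a) = b0 + b1*x + b2*a + b3*a*x, fit by least squares.
X = np.column_stack([np.ones(n), x, a, a * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimated decision rule: recommend treatment iff the estimated contrast
#   Q(x, 1) - Q(x, 0) = b2 + b3*x   is positive.
def rule(x_new):
    return (beta[2] + beta[3] * x_new > 0).astype(int)
```

Because the generative model makes the treatment contrast equal to x, the estimated rule recovers (approximately) the true rule 1{x > 0}; multi-stage regimes extend this idea by applying the same regression backward, one decision point at a time.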

DOI: 10.1146/annurev-statistics-030718-105251
Published: 2019-03-07


  • Article Type: Review Article