
Abstract

In the era of precision medicine, time-to-event outcomes such as time to death or disease progression are routinely collected, along with high-throughput covariates. These high-dimensional data defy classical survival regression models, which are either infeasible to fit or likely to yield poor predictive performance due to overfitting. To overcome these difficulties, recent emphasis has been placed on developing novel approaches for feature selection and survival prognostication. In this article, we review various cutting-edge methods that handle survival outcome data with high-dimensional predictors, highlighting recent innovations in machine learning approaches for survival prediction. We cover the statistical intuitions and principles behind these methods and conclude with extensions to more complex settings where competing events are observed. We exemplify these methods with applications to the Boston Lung Cancer Survival Cohort study, one of the largest cancer epidemiology cohorts investigating the complex mechanisms of lung cancer.
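The central theme of the review, regularized survival regression when predictors are high-dimensional, can be made concrete with a small sketch. The example below is not taken from the article; all names and settings are illustrative. It fits a lasso-penalized Cox proportional hazards model by proximal gradient descent on simulated data, soft-thresholding each coefficient so that noise covariates are shrunk toward zero:

```python
import math
import random

def neg_partial_loglik_grad(beta, X, time, event):
    """Average gradient of the negative Cox log partial likelihood (Breslow risk sets)."""
    n, p = len(X), len(beta)
    eta = [sum(b * x for b, x in zip(beta, row)) for row in X]
    w = [math.exp(e) for e in eta]
    grad = [0.0] * p
    d = max(1, sum(event))  # number of observed events
    for i in range(n):
        if not event[i]:
            continue
        risk = [j for j in range(n) if time[j] >= time[i]]
        denom = sum(w[j] for j in risk)
        for k in range(p):
            xbar = sum(w[j] * X[j][k] for j in risk) / denom
            grad[k] += (xbar - X[i][k]) / d
    return grad

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def lasso_cox(X, time, event, lam=0.05, step=0.5, iters=300):
    """Lasso-penalized Cox fit via proximal gradient descent."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        g = neg_partial_loglik_grad(beta, X, time, event)
        beta = [soft_threshold(b - step * gk, step * lam)
                for b, gk in zip(beta, g)]
    return beta

# Simulated data: only the first of five covariates affects the hazard.
random.seed(1)
n, p = 100, 5
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
raw = [random.expovariate(math.exp(row[0])) for row in X]
time = [min(t, 2.0) for t in raw]    # administrative censoring at t = 2
event = [t < 2.0 for t in raw]       # True if the event was observed
beta_hat = lasso_cox(X, time, event)
```

With the penalty active, the coefficient on the signal covariate stays large while the noise coefficients are shrunk toward zero. In practice one would use an optimized solver such as glmnet's coordinate descent rather than this didactic loop, but the proximal structure (gradient step on the partial likelihood, then soft-thresholding) is the same.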


Article Type: Review Article