Abstract

We review probabilistic principal components, principal fitted components, sufficient dimension reduction, and envelopes, arguing that at their core they are all based on variations of the conditional independence argument that Fisher used to develop his fundamental concept of sufficiency. We emphasize the foundations of the methods. Methodological details, derivations, and examples are included when they convey the flavor and implications of basic concepts. In addition to the main topics, this review covers extensions of probabilistic principal components, the central subspace and central mean subspace, sliced inverse regression, sliced average variance estimation, dimension reduction for covariance matrices, and response and predictor envelopes.
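As a concrete anchor for the parallel drawn above, the two conditional independence statements can be written side by side. The notation below (a statistic T, a reduction matrix β) is conventional shorthand introduced here for illustration, not taken from the article itself. Fisher's sufficiency asks that the data carry no further information about the parameter once the statistic is known, while sufficient dimension reduction asks that the predictors carry no further information about the response once the reduced predictor is known:

\[
  p_{\theta}\!\left(x \mid T(x) = t\right) \;=\; p\!\left(x \mid t\right)
  \quad \text{for all } \theta
  \qquad \text{(Fisher sufficiency)}
\]
\[
  Y \;\perp\!\!\!\perp\; X \,\mid\, \beta^{\mathsf{T}}X
  \quad \Longleftrightarrow \quad
  F(Y \mid X) \;=\; F(Y \mid \beta^{\mathsf{T}}X)
  \qquad \text{(sufficient dimension reduction)}
\]

Under mild conditions, the intersection of all subspaces span(β) satisfying the second condition is itself a dimension-reduction subspace; it is the central subspace referred to in the abstract.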

  • Article Type: Review Article