
Abstract

This article proposes a set of categories, each one representing a particular distillation of important statistical ideas. Each category is labeled a “sense” because we think of these as essential in helping every statistical mind connect in constructive and insightful ways with statistical theory, methodologies, and computation, toward the ultimate goal of building statistical phronesis. The illustration of each sense with statistical principles and methods provides a sensical tour of the conceptual landscape of statistics, as a leading discipline in the data science ecosystem.


  • Article Type: Review Article