
Abstract

Simulation-based methods for statistical inference have evolved dramatically over the past 50 years, keeping pace with technological advancements. The field is undergoing a new revolution as it embraces the representational capacity of neural networks, optimization libraries, and graphics processing units for learning complex mappings between data and inferential targets. The resulting tools are amortized, in the sense that, after an initial setup cost, they allow rapid inference through fast feed-forward operations. In this article we review recent progress in the context of point estimation, approximate Bayesian inference, summary-statistic construction, and likelihood approximation. We also cover software and include a simple illustration to showcase the wide array of tools available for amortized inference and the benefits they offer over Markov chain Monte Carlo methods. The article concludes with an overview of relevant topics and an outlook on future research directions.
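The amortized workflow sketched in the abstract — pay a one-off training cost, then obtain estimates via a fast feed-forward pass — can be illustrated with a toy example. The sketch below is illustrative only (the model, network, and all names are ours, not from the article): it simulates parameter–data pairs from a simple Normal model, trains a tiny neural point estimator on summary features once, and then applies it to new data without any MCMC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate training pairs (theta, data): theta from a uniform prior,
# then a dataset y_1..y_n ~ Normal(theta, 1) for each draw.
n, m = 10, 2000                          # sample size per dataset, number of datasets
theta = rng.uniform(-3, 3, size=m)       # prior draws
y = rng.normal(theta[:, None], 1.0, size=(m, n))

# Amortized point estimator: a one-hidden-layer network trained once to
# map a dataset's summary features to an estimate of theta.
X = np.column_stack([y.mean(axis=1), y.std(axis=1)])   # simple summaries
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.02
for _ in range(3000):                    # full-batch gradient descent on MSE
    H = np.tanh(X @ W1 + b1)             # forward pass
    pred = (H @ W2 + b2).ravel()
    gpred = (2 / m) * (pred - theta)[:, None]   # d(mean squared error)/d(pred)
    gW2 = H.T @ gpred; gb2 = gpred.sum(0)
    gH = gpred @ W2.T * (1 - H**2)       # backprop through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# After the one-off setup cost, inference on new data is a single
# feed-forward pass -- no per-dataset sampling or optimization.
y_new = rng.normal(1.5, 1.0, size=n)
x_new = np.array([[y_new.mean(), y_new.std()]])
theta_hat = (np.tanh(x_new @ W1 + b1) @ W2 + b2).item()
```

Because the training cost is incurred once, the same fitted network can be reused across arbitrarily many new datasets, which is the sense in which such estimators are amortized.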

DOI: 10.1146/annurev-statistics-112723-034123
2025-03-07
2025-06-19
