
Abstract

Deep neural networks (DNNs) are machine learning algorithms that have revolutionized computer vision due to their remarkable successes in tasks like object classification and segmentation. The success of DNNs as computer vision algorithms has led to the suggestion that DNNs may also be good models of human visual perception. In this article, we review evidence regarding current DNNs as adequate behavioral models of human core object recognition. To this end, we argue that it is important to distinguish between statistical tools and computational models and to understand model quality as a multidimensional concept in which clarity about modeling goals is key. Reviewing a large number of psychophysical and computational explorations of core object recognition performance in humans and DNNs, we argue that DNNs are highly valuable scientific tools but that, as of today, DNNs should only be regarded as promising—but not yet adequate—computational models of human core object recognition behavior. On the way, we dispel several myths surrounding DNNs in vision science.

DOI: 10.1146/annurev-vision-120522-031739
2023-09-15
2024-04-24

  • Article Type: Review Article