
Abstract

The term visual attention immediately evokes the idea of limited resources, serial processing, or a zoom metaphor. But evidence has slowly accumulated that computations that take into account probabilistic relationships among visual forms and the target contribute to optimizing decisions in biological and artificial organisms, even without considering these limited-capacity processes in covert attention or even foveation. The benefits from such computations can be formalized within the framework of an ideal Bayesian observer and can be related to the classic theory of sensory cue combination in vision science and to context-driven approaches to object detection in computer vision. The framework can account for a large range of behavioral findings across distinct experimental paradigms, including visual search, cueing, and scene context. I argue that these forms of probabilistic computations might be fundamental to optimizing decisions in many species and review human experiments that try to identify scene properties serving as cues to guide eye movements and facilitate search. I conclude by discussing contributions of attention beyond probabilistic computations but argue that the framework's merit is to unify many of the basic paradigms used to study attention under a single theory.
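The ideal Bayesian observer mentioned above weights sensory evidence at each location by the prior probability that the target appears there. As an illustrative sketch only (the specific numbers, Gaussian-noise model, and three-location layout are assumptions, not taken from the article), the following simulation shows that an observer who combines the likelihood of noisy responses with location priors localizes a target more accurately than a prior-blind observer:

```python
import numpy as np

rng = np.random.default_rng(0)
d_prime = 1.0                       # target detectability (assumed)
priors = np.array([0.8, 0.1, 0.1])  # illustrative cue validities
n_trials = 20000

# Draw target locations according to the priors, then noisy responses:
# x_i ~ N(d', 1) at the target location, N(0, 1) elsewhere.
locs = rng.choice(3, size=n_trials, p=priors)
x = rng.normal(0.0, 1.0, size=(n_trials, 3))
x[np.arange(n_trials), locs] += d_prime

# Log-likelihood ratio of "target at location i" for Gaussian noise:
llr = d_prime * x - d_prime**2 / 2

# The ideal observer adds the log prior; a prior-blind observer does not.
ideal_choice = np.argmax(llr + np.log(priors), axis=1)
blind_choice = np.argmax(llr, axis=1)

acc_ideal = np.mean(ideal_choice == locs)
acc_blind = np.mean(blind_choice == locs)
print(acc_ideal > acc_blind)  # prior knowledge improves localization
```

This is the same logic that unifies cueing and scene-context effects in the framework: spatial cues and scene properties shift the priors, and behavior follows the posterior rather than the raw sensory evidence.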

/content/journals/10.1146/annurev-vision-102016-061220
2017-09-15

  • Article Type: Review Article