
Abstract

Dopamine neurons facilitate learning by calculating reward prediction error, or the difference between expected and actual reward. Despite two decades of research, it remains unclear how dopamine neurons make this calculation. Here we review studies that tackle this problem from a diverse set of approaches, from anatomy to electrophysiology to computational modeling and behavior. Several patterns emerge from this synthesis: that dopamine neurons themselves calculate reward prediction error, rather than inherit it passively from upstream regions; that they combine multiple separate and redundant inputs, which are themselves interconnected in a dense recurrent network; and that despite the complexity of inputs, the output from dopamine neurons is remarkably homogeneous and robust. The more we study this simple arithmetic computation, the knottier it appears to be, suggesting a daunting (but stimulating) path ahead for neuroscience more generally.
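The "simple arithmetic" the abstract refers to can be written down directly. As a minimal sketch (not code from the article), the following implements the classic Rescorla-Wagner/temporal-difference delta rule, in which the prediction error is the actual reward minus the expected reward, and the expectation is nudged toward each outcome; the learning rate and trial outcomes below are illustrative values, not data from the review.

```python
# Minimal sketch of a reward-prediction-error learning rule.
# alpha (learning rate) and the reward sequence are hypothetical.
alpha = 0.1   # learning rate
value = 0.0   # expected reward for a cue, before any learning

rewards = [1.0, 1.0, 1.0, 0.0, 1.0]  # hypothetical trial outcomes
for r in rewards:
    rpe = r - value        # prediction error: actual minus expected reward
    value += alpha * rpe   # move the expectation toward the outcome

# After these trials the expectation has risen from 0 toward the
# average outcome; fully predicted rewards would yield rpe near 0.
```

Under this rule, a fully predicted reward produces no error (and no further learning), while an omitted expected reward produces a negative error — the signature dopamine responses the review discusses.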

doi: 10.1146/annurev-neuro-072116-031109 (published 2017-07-25)

Article Type: Review Article