
Bruno B. Averbeck, Ph.D.

Laboratory of Neuropsychology

Section on Learning and Decision Making (SLDM)
Building 49 Room 1B80
49 Convent Drive MSC4415
Bethesda MD 20892-4415
Office: (301) 594-1126
Lab: (301) 594-1126

Dr. Averbeck received a B.S. in Electrical Engineering from the University of Minnesota in 1994. After working three years in industry, he returned to the University of Minnesota and completed a Ph.D. in Neuroscience in 2001 in the lab of Dr. Apostolos Georgopoulos. His thesis was titled "Neural Mechanisms of Copying Geometrical Shapes." Following his thesis work, Dr. Averbeck carried out postdoctoral studies at the University of Rochester with Dr. Daeyeol Lee, where he studied neural mechanisms underlying sequential learning, the coding of vocalizations, and population coding. In 2006 he moved to University College London as a Senior Lecturer and began experiments on the role of frontal-striatal circuits in learning, combining neurophysiology, brain imaging, and patient studies. In 2009 Dr. Averbeck moved to the NIMH and established the Unit on Learning and Decision Making in the Laboratory of Neuropsychology.

The Section on Learning and Decision Making studies the neural circuitry that underlies reinforcement learning. Reinforcement learning (RL) is the behavioral process of learning to make advantageous choices. While some preferences are innate, many are learned over time: how do we learn what we like and what we want to avoid? The lab combines experiments in in vivo model systems and human participants (including patients) with computational modeling. We examine several facets of the learning problem, including learning from gains vs. losses, learning to select rewarding actions vs. learning to select rewarding objects, and the explore-exploit trade-off. The explore-exploit trade-off describes a fundamental problem in learning: when visiting a new city, should you try every restaurant, or sample only a few and then return to your favorite several times?
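The explore-exploit trade-off can be sketched with a simple epsilon-greedy bandit simulation. This is an illustrative model only, not the lab's actual task code; the arm payoff probabilities and the epsilon and trial-count parameters are made-up values. Each "arm" stands in for a restaurant, and the agent must balance sampling new arms against returning to the best one found so far.

```python
import random

def run_bandit(true_rewards, epsilon=0.1, n_trials=1000, seed=0):
    """Simulate an epsilon-greedy agent on a multi-armed bandit.

    true_rewards: payout probability of each arm (hypothetical values).
    epsilon: probability of exploring a random arm instead of exploiting.
    """
    rng = random.Random(seed)
    n_arms = len(true_rewards)
    values = [0.0] * n_arms  # estimated value of each arm
    counts = [0] * n_arms    # how often each arm was chosen
    total = 0
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1 if rng.random() < true_rewards[arm] else 0
        counts[arm] += 1
        # incremental sample-average update of the chosen arm's value
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return values, total

# Three "restaurants" with different (hidden) payoff probabilities.
values, total = run_bandit([0.2, 0.5, 0.8])
```

With even a small exploration rate, the estimated values converge toward the true payoff probabilities and the agent ends up exploiting the best arm most of the time; setting epsilon to 0 risks locking in on an inferior early favorite.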

Standard models of RL assume that dopamine neurons code reward prediction errors (RPEs): the difference between the size of the reward received and the reward that was expected following a choice. These RPEs are then communicated to the basal ganglia, specifically the striatum, because of its substantial dopamine innervation. This dopamine signal drives learning in frontal-striatal and amygdala-striatal circuits, such that choices that have previously been rewarded lead to larger neural responses in the striatum, and choices that have previously not been rewarded (or have been punished) lead to smaller responses. The striatal neurons thus come to represent the values of choices: they signal a high-value choice with higher activity, and this higher activity drives decision processes. These models often mention a potential role for the amygdala without formally incorporating it. They further suggest a general role for the ventral striatum (VS) in representing the values of decisions, whether those decisions concern actions or objects, and independent of whether values derive from reward magnitude or probability.
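The RPE-driven value update in these standard models can be written as a one-line Rescorla-Wagner / TD(0)-style rule. This is a minimal sketch, not a claim about the lab's specific model; the learning rate alpha and the scalar value representation are illustrative assumptions.

```python
def rpe_update(value, reward, alpha=0.1):
    """One reinforcement-learning step driven by a reward prediction error.

    The RPE (delta) is the difference between the reward received and the
    reward that was expected: a dopamine-like teaching signal that nudges
    the stored value of the chosen option toward the outcome.
    """
    delta = reward - value        # reward prediction error
    return value + alpha * delta  # Rescorla-Wagner / TD(0) style update

# A choice that keeps paying off: its stored value climbs toward the reward,
# mirroring the growing striatal response to a previously rewarded choice.
v = 0.0
for _ in range(5):
    v = rpe_update(v, reward=1.0)
# after 5 rewarded trials, v = 1 - 0.9**5 ≈ 0.41
```

Choices that stop being rewarded produce negative RPEs under the same rule, shrinking the stored value; this captures the weaker striatal responses to unrewarded or punished choices described above.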

In contrast to the standard model, we have recently shown that the amygdala plays a larger role in RL than the VS (Costa VD et al., Neuron, 2016). In addition, the role of the VS depends strongly on the reward environment: when rewards are predictable, the VS has almost no role in learning, whereas when rewards are less predictable the VS plays a larger role. These data outline a more specific role for the VS in RL than current models attribute to it. Given that the VS has been implicated in depression, particularly adolescent depression, this delineation of the VS's contribution to normal behavior may help inform hypotheses about the mechanisms and circuitry underlying depression.

  • 1) Bartolo R, Averbeck BB (2020). Prefrontal Cortex Predicts State Switches during Reversal Learning. Neuron 106, 1044-1054.e4.
  • 2) Costa VD, Mitz AR, Averbeck BB (2019). Subcortical Substrates of Explore-Exploit Decisions in Primates. Neuron 103, 533-545.e5.
  • 3) Averbeck BB, Costa VD (2017). Motivational neural circuits underlying reinforcement learning. Nat Neurosci 20, 505-512.
  • 4) Taswell CA, Costa VD, Murray EA, Averbeck BB (2018). Ventral striatum's role in learning from gains and losses. Proc Natl Acad Sci U S A 115, E12398-E12406.
  • 5) Costa VD, Dal Monte O, Lucas DR, Murray EA, Averbeck BB (2016). Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning. Neuron 92, 505-517.
  • 6) Evans S, Shergill SS, Averbeck BB (2010). Oxytocin decreases aversion to angry faces in a decision making task. Neuropsychopharmacology 35, 2502-2509.
  • 7) Djamshidian A, Jha A, O'Sullivan SS, Silveira-Moriyama L, Jacobson C, Brown P, Lees A, Averbeck BB (2010). Risk and learning in impulsive and non-impulsive patients with Parkinson's disease. Mov Disord 25, 2203-2210.
  • 8) Crowe DA, Averbeck BB, Chafee MV (2008). Neural ensemble decoding reveals a correlate of viewer- to object-centered spatial transformation in monkey parietal cortex. J Neurosci 28, 5218-5228.
  • 9) Averbeck BB, Duchaine B (2009). Integration of social and utilitarian factors in decision making. Emotion 9, 599-608.
  • 10) Averbeck BB, Crowe DA, Chafee MV, Georgopoulos AP (2009). Differential contribution of superior parietal and dorsal-lateral prefrontal cortices in copying. Cortex 45, 432-441.
  • 11) Eusebio A, Pogosyan A, Wang S, Averbeck B, Doyle Gaynor L, Limousin P, Hariz M, Brown P (2009). Resonance in the Cortical Response to Basal Ganglia Output in Parkinson's Disease. Brain 132, 2139-2150.
  • 12) Romanski LM, Averbeck BB (2009). The primate cortical auditory system and neural representation of conspecific vocalizations. Annu Rev Neurosci 32, 315-346.
  • 13) Cruz AV, Mallet N, Magill PJ, Brown P, Averbeck BB (2009). Effects of dopamine depletion on network entropy in the external globus pallidus. J Neurophysiol 102, 1092-1102.