WIREs Cogn Sci

Attractor networks


Abstract

An attractor network is a network of neurons with excitatory interconnections that can settle into a stable pattern of firing. This article shows how attractor networks in the cerebral cortex are important for long‐term memory, short‐term memory, attention, and decision making. It then shows how the random firing of neurons introduces stochastic noise that influences the stability of these networks, how these effects contribute to probabilistic decision making, and how they are implicated in some disorders of cortical function, such as poor short‐term memory and attention, schizophrenia, and obsessive‐compulsive disorder. Copyright © 2009 John Wiley & Sons, Ltd.

This article is categorized under: Computer Science > Neural Networks

The architecture of an autoassociative or attractor neural network (see text).
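The recurrent-collateral architecture of such a network can be illustrated with a minimal Hopfield-style autoassociative sketch (an illustrative toy, not the integrate-and-fire models discussed in the article; the network size, number of patterns, and noise level are all assumed values): patterns are stored with a Hebbian outer-product rule, and a degraded cue settles into the stored attractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random binary (+1/-1) patterns with the Hebbian outer-product rule.
n = 100
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n          # recurrent (collateral) weight matrix
np.fill_diagonal(W, 0)                   # no self-connections

def recall(cue, steps=10):
    """Iteratively update all units until the network settles into an attractor."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                  # break ties consistently
    return s

# Degrade a stored pattern by flipping 15% of its units, then let the network settle.
cue = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
cue[flip] *= -1
out = recall(cue)
overlap = (out @ patterns[0]) / n        # 1.0 means the memory was recalled perfectly
```

With only two stored patterns in 100 units, the network is far below its storage capacity, so completion from the degraded cue back to the stored pattern is reliable; this pattern-completion property is what makes the architecture useful as a content-addressable memory.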


Representation of connections within the hippocampus. Inputs reach the hippocampus through the perforant path (1) which makes synapses with the dendrites of the dentate granule cells and also with the apical dendrites of the CA3 pyramidal cells. The dentate granule cells project via the mossy fibres (2) to the CA3 pyramidal cells. The well‐developed recurrent collateral system of the CA3 cells is indicated. The CA3 pyramidal cells project via the Schaffer collaterals (3) to the CA1 pyramidal cells, which in turn have connections (4) to the subiculum.


Decision making in a model of vibrotactile discrimination. Dynamical evolution of the network activity of ventral premotor cortex neurons during the comparison period between vibrotactile frequency f1 = 30 Hz and frequency f2 = 20 Hz. (a) The evolution as a function of time of the spiking rate of the population (f1 > f2) (corresponding to a decision that f1 is greater than f2), the population (f1 < f2), and the inhibitory population. (b) The corresponding rastergrams of 10 randomly selected neurons for each pool (population of neurons) in the network. Each vertical line corresponds to the generation of a spike. The spatio‐temporal spiking activity shows the transition to the correct final attractor state, which encodes the result of the discrimination (f1 > f2) (after Deco and Rolls [66]).


(a) Attractor or autoassociation network architecture for decision making. The evidence for decision 1 is applied via the λ1 inputs, and for decision 2 via the λ2 inputs. The synaptic weights wij have been associatively modified during training in the presence of λ1, and at a different time in the presence of λ2. When λ1 and λ2 are applied, each attractor competes through the inhibitory interneurons (not shown), until one wins the competition, and the network falls into one of the high firing rate attractors that represents the decision. The noise in the network caused by the random spiking of the neurons means that on some trials, for given inputs, the neurons in the decision 1 attractor are more likely to win, and on other trials the neurons in the decision 2 attractor are more likely to win. This makes the decision making probabilistic because, as shown in (c), the noise influences when the system will jump out of the spontaneous firing stable (low energy) state S, and whether it jumps into the high firing state for decision 1 or decision 2 (D). (b) The architecture of the integrate‐and‐fire network used to model vibrotactile decision making (see text).
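The probabilistic competition just described can be sketched with a highly simplified two-population rate model (illustrative parameters throughout; this stands in for, and is much cruder than, the integrate-and-fire network of the figure): each pool excites itself, inhibits the other, and receives its evidence input λ plus noise, and the noise makes the outcome vary from trial to trial.

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(lam1, lam2, noise=2.0, dt=0.01, T=5.0):
    """One trial of a noisy two-attractor competition (toy rate model).
    Returns 1 or 2 according to which population's firing rate first
    crosses the decision threshold."""
    r1 = r2 = 0.1                        # start near the low-rate spontaneous state S
    w_self, w_inh = 1.6, 1.2             # self-excitation and cross-inhibition (assumed)
    for _ in range(int(T / dt)):
        i1 = lam1 + w_self * r1 - w_inh * r2 + noise * rng.standard_normal()
        i2 = lam2 + w_self * r2 - w_inh * r1 + noise * rng.standard_normal()
        r1 += dt * (-r1 + max(i1, 0.0))
        r2 += dt * (-r2 + max(i2, 0.0))
        if r1 > 3.0:
            return 1
        if r2 > 3.0:
            return 2
    return 1 if r1 > r2 else 2

# Slightly stronger evidence for decision 1: it wins more often than decision 2,
# but the noise can still carry the network into the decision 2 attractor.
wins = sum(decide(0.6, 0.4) == 1 for _ in range(200))
```

The self-excitation above 1 makes the symmetric state unstable, so the network must fall into one of the two high-rate attractors; the ratio of evidence difference to noise sets the probability of each outcome, which is the sense in which the decision making is probabilistic.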


(a) Using an integrate‐and‐fire approach, the individual neurons, synapses, and ion channels that make up an attractor network can be simulated, and when a threshold is reached the cell fires. (b) The attractor dynamics can be pictured by effective energy landscapes, which indicate the basins of attraction by valleys, and the attractor states or fixed points by the bottoms of the valleys. The stability of an attractor is characterised by the average time for which the system stays in the basin of attraction under the influence of noise, which provokes transitions to other attractor states. Two factors determine the stability. First, if an attractor basin is shallow (as in the left valley compared with the right), less force is needed to move a ball out of it into the next valley. Second, a high level of noise increases the likelihood that the system will jump over an energy boundary from one state to another.


Network architecture of the prefrontal cortex unified model of attention, working memory, action selection, and decision making. There are sensory neuronal populations or pools for object type (O1 or O2) and spatial position (S1 or S2). These connect hierarchically (with stronger forward than backward connections) to the intermediate or ‘associative’ pools, in which neurons may respond to combinations of the inputs received from the sensory pools for some types of mapping, such as reversal, as described by Deco and Rolls [50]. For the simulation of the data of Asaad et al. [52], these intermediate pools respond to O1‐L, O2‐R, S1‐L, or S2‐R. The intermediate pools receive an attentional bias, which in this particular simulation biases either the O pools or the S pools. The intermediate pools are connected hierarchically to the premotor pools, which in this case code for a Left or Right response. Each of the pools is an attractor network in which the associatively modified synaptic weights are stronger between the neurons that represent the same state (e.g., object type for a sensory pool, or response for a premotor pool) than between neurons in other pools or populations. However, all the neurons in the network are associatively connected by at least weak synaptic weights. The attractor properties, the competition implemented by the inhibitory interneurons, and the biasing inputs result in the same network implementing both short‐term memory and biased competition; and the stronger feedforward than feedback connections between the sensory, intermediate, and premotor pools produce the hierarchical property by which sensory inputs can be mapped to motor outputs in a way that depends on the biasing contextual or rule input (after Deco and Rolls [50]).
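The biased-competition principle at the heart of this model can be sketched with a toy two-pool rate model (illustrative weights; a single pooled inhibition term stands in for the inhibitory interneurons): with equal sensory inputs, a small top-down attentional bias on one pool is enough to tip the competition in its favour.

```python
def biased_competition(input1, input2, bias1=0.0, bias2=0.0, dt=0.01, T=3.0):
    """Two pools with weak self-excitation compete through shared (pooled)
    inhibition; returns the two steady-state firing rates."""
    r1 = r2 = 0.0
    for _ in range(int(T / dt)):
        inh = 0.8 * (r1 + r2)            # pooled inhibitory feedback
        r1 += dt * (-r1 + max(input1 + bias1 + 0.5 * r1 - inh, 0.0))
        r2 += dt * (-r2 + max(input2 + bias2 + 0.5 * r2 - inh, 0.0))
    return r1, r2

# Equal sensory input to both pools; a small attentional bias on pool 1
# makes it win the competition and fire at the higher rate.
r1, r2 = biased_competition(1.0, 1.0, bias1=0.2)
```

Here the self-excitation (0.5) is weaker than the inhibition (0.8), so the rates settle to a stable state rather than running away; the biased pool ends with the higher rate, which is the rate-level signature of biased competition that the full attractor model implements with spiking neurons.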


The overall architecture of a model of object and spatial processing and attention, including the prefrontal cortical areas that provide the short‐term memory required to hold the object or spatial target of attention active. Forward connections are indicated by solid lines; backprojections, which could implement top–down processing, by dashed lines; and recurrent connections within an area by dotted lines. The triangles represent pyramidal cell bodies, with the thick vertical lines above them representing the dendritic trees. The cortical layers in which the cells are concentrated are indicated by s (superficial, layers 2 and 3) and d (deep, layers 5 and 6). The prefrontal cortical areas most strongly reciprocally connected to the inferior temporal cortex ‘what’ processing stream are labelled v to indicate that they are in the more ventral part of the lateral prefrontal cortex, area 46, close to the inferior convexity in macaques. The prefrontal cortical areas most strongly reciprocally connected to the parietal visual cortical ‘where’ processing stream are labelled d to indicate that they are in the more dorsal part of the lateral prefrontal cortex, area 46, in and close to the banks of the principal sulcus in macaques (after Rolls and Deco [10]).


A short‐term memory autoassociation network in the prefrontal cortex could hold active a working memory representation by maintaining its firing in an attractor state. The prefrontal module would be loaded with the to‐be‐remembered stimulus by the posterior module (in the temporal or parietal cortex) in which the incoming stimuli are represented. Backprojections from the prefrontal short‐term memory module to the posterior module would enable the working memory to be unloaded to influence, for example, ongoing perception (see text). RC, recurrent collateral connections.


Energy landscape of an attractor network. There are two types of stable fixed point: a spontaneous state with a low firing rate, and one or more persistent states with high firing rates in which the neurons keep firing. Each one of the high firing rate attractor states can implement a different memory.
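The energy-landscape picture can be made concrete for a Hopfield-style network, for which the energy E = -½ sᵀWs is defined explicitly and has its minima at the stored attractor states (a minimal sketch with an assumed network size and a single stored memory):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 50
p = rng.choice([-1, 1], size=n)           # one stored (+1/-1) memory pattern
W = np.outer(p, p) / n                    # Hebbian outer-product weights
np.fill_diagonal(W, 0)                    # no self-connections

def energy(s):
    """Hopfield energy E = -1/2 s^T W s; attractor states sit at the minima."""
    return -0.5 * s @ W @ s

# The stored pattern lies at the bottom of its valley:
# perturbing it (flipping 5 units) moves the state uphill in energy.
perturbed = p.copy()
perturbed[:5] *= -1
```

For these weights the stored pattern has energy -(n-1)/2, the lowest value any ±1 state can reach, so dynamics that only ever decrease E (asynchronous sign updates) roll the state down the valley back to the memory, exactly the settling behaviour the landscape metaphor describes.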

