
Motion perception: behavior and neural substrate


Abstract

Visual motion perception is vital for survival. Single‐unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after‐effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance‐defined and texture‐defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large‐scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of information for cancelling out the retinal motion responses generated by those movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion‐processing hierarchy.

WIREs Cogn Sci 2011, 2:305–314. DOI: 10.1002/wcs.110

Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html

This article is categorized under: Psychology > Perception and Psychophysics

A basic neural circuit for retinal motion detection. The image is sampled at two locations (A and B). Neural responses generated at these locations are combined at C, after passing through either a fast (F) or a slow (S) temporal filter. Output at C is direction selective.
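The delay-and-compare circuit described in this caption can be sketched in a few lines of code. The following is a minimal, hypothetical Python simulation, not the article's own model: location A feeds through a slow low-pass filter (S) and location B through a fast one (F), and the two are multiplied at C. The filter time constants and the pulse stimuli are illustrative assumptions.

```python
import numpy as np

def lowpass(signal, tau, dt=1.0):
    """First-order low-pass filter; larger tau = slower, more delayed response."""
    out = np.zeros_like(signal, dtype=float)
    alpha = dt / (tau + dt)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def detector_response(signal_a, signal_b, slow_tau=4.0, fast_tau=1.0):
    """Combine A (slow filter S) and B (fast filter F) at C by multiplication.

    A stimulus moving from A to B reaches B while the slow trace of A is
    still active, so the product is large; the reverse order yields a much
    weaker product, making the output at C direction selective."""
    return np.sum(lowpass(signal_a, slow_tau) * lowpass(signal_b, fast_tau))

# A brief pulse sweeping from A (t=5) to B (t=10), and the reverse sweep.
a_then_b = detector_response(np.eye(20)[5], np.eye(20)[10])
b_then_a = detector_response(np.eye(20)[10], np.eye(20)[5])
```

Swapping which location gets the slow filter would reverse the preferred direction; a full opponent detector subtracts the two mirror-image subunits.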


Optic flow created by translation along a ground plane, or by a combination of translation and eye rotation. (a) The observer translates along the ground plane while fixating on their destination, the vertical line, to create a translational flow field. (b) The observer rotates the eyes to sweep their gaze to the left; the fixation point (circle) remains centered while the destination point (line) sweeps to the right. The flow field now contains a rotational component (Ref 36).
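The two flow fields in this caption can be generated from the standard pinhole-camera flow equations. The sketch below is a simplified, assumed formulation (observer translating with velocity `t_xyz` past a surface at depth `depth`, plus an optional horizontal eye rotation `omega_y`); the sign conventions, focal length, and example values are illustrative, not taken from the article.

```python
import numpy as np

def flow(x, y, depth, t_xyz, omega_y=0.0, f=1.0):
    """Image velocity (u, v) at image point (x, y).

    The translational component scales with 1/depth; the rotational
    component produced by a horizontal eye rotation (omega_y) does not
    depend on depth, which is what adds the sweep seen in panel (b)."""
    tx, ty, tz = t_xyz
    u = (x * tz - f * tx) / depth - (f + x**2 / f) * omega_y
    v = (y * tz - f * ty) / depth - (x * y / f) * omega_y
    return u, v

# Pure forward translation: flow vanishes at the focus of expansion,
# the point the observer is heading toward (here the image centre).
u0, v0 = flow(0.0, 0.0, depth=5.0, t_xyz=(0.0, 0.0, 1.0))

# Adding an eye rotation shifts the pattern: the same point now moves.
u1, v1 = flow(0.0, 0.0, depth=5.0, t_xyz=(0.0, 0.0, 1.0), omega_y=0.1)
```

Evaluating these equations over a grid of (x, y) points reproduces the radial pattern of panel (a) and the sheared pattern of panel (b).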


Model of motion integration in the middle temporal cortical area (MT). A group of direction‐selective cells in V1 (left), corresponding to the sensors depicted in Figure 2, provide input to a single MT cell (middle). The MT cell's response corresponds to the weighted sum of the V1 inputs, with some V1 cells providing excitation and others providing inhibition. The output of the MT cell is subject to a nonlinear transform (right). (Reprinted with permission from Ref 25. Copyright 2006 Nature Publishing.)
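The integration stage in this caption can be sketched as a weighted sum followed by a static output nonlinearity. The cosine weight profile and the rectify-and-square nonlinearity below are illustrative assumptions; the model in Ref 25 fits these components to physiological data.

```python
import numpy as np

def mt_response(v1_rates, weights, baseline=0.0):
    """Weighted sum of V1 direction-selective inputs (positive weights
    excitatory, negative weights inhibitory), passed through a
    rectifying, expansive output nonlinearity."""
    drive = baseline + np.dot(weights, v1_rates)
    return np.maximum(drive, 0.0) ** 2

# Example: eight V1 cells tuned to directions 0..315 deg. The MT cell is
# excited by rightward (0 deg) inputs and inhibited by leftward (180 deg).
directions = np.arange(8) * 45.0
weights = np.cos(np.deg2rad(directions))               # +1 at 0 deg, -1 at 180 deg
rightward_stim = np.cos(np.deg2rad(directions)) + 1.0  # V1 rates peaking at 0 deg
leftward_stim = np.cos(np.deg2rad(directions - 180.0)) + 1.0
```

With these weights the cell responds strongly to rightward motion, while inhibition from the leftward-tuned inputs silences it for the opposite direction.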


Direction discrimination performance using a two‐frame grating displacement, as a function of the interstimulus interval (ISI) between the frames. Open circles: results using a large, centrally fixated display. Squares: results obtained when a central, 4‐degree diameter area of the stimulus was removed. Diamonds: results obtained when a central 8‐degree diameter area of the stimulus was removed. (Reprinted with permission from Ref 19. Copyright 2009 Elsevier.)


Filled circles: direction discrimination performance using two‐frame displacement of an annular grating, as a function of the duration of the interstimulus interval (ISI) between the frames. Thick line: the output of a computational model of the motion energy sensor depicted in Figure 2.


Direction discrimination performance using two‐frame displacement of a dense array of random elements, as a function of element displacement. Filled circles: elements in both frames were defined by luminance (first‐order). Open circles: elements in both frames were defined by texture (second‐order). Crosses: elements were luminance defined in one frame and texture defined in the other frame (cross‐order). The thick line shows the output of a computational model of the motion energy sensor depicted in Figure 2.


Motion energy sensor. The scheme is similar to that in Figure 1, except that the sensor's output is partially suppressed or normalized (division symbol) by an amount proportional to the combined activity of many sensors. Inset graphs show spatial and temporal sensitivity profiles of the spatial (A and B) and temporal (F and S) filters.
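The normalization step in this caption (the division symbol) can be written very compactly: each sensor's energy is divided by the pooled activity of the whole population plus a small constant. The semisaturation constant `sigma` and the input values below are illustrative assumptions.

```python
import numpy as np

def normalize(energies, sigma=0.1):
    """Divisive normalization: each motion-energy output is suppressed in
    proportion to the summed activity of all sensors, so responses
    saturate as overall stimulus energy grows while the relative
    preferences across sensors are preserved."""
    energies = np.asarray(energies, dtype=float)
    return energies / (sigma + energies.sum())

weak = normalize([0.1, 0.05, 0.0])
strong = normalize([10.0, 5.0, 0.0])  # 100x the input, far less than 100x the output
```

Because every sensor is divided by the same pool, the ratio between sensor outputs (here 2:1) survives normalization even as the absolute responses compress.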


Related Articles

Categorical perception
People watching: visual, motor, and social processes in the perception of human movement
Perception and action

Browse by Topic

Psychology > Perception and Psychophysics
