How to cite this WIREs title:
WIREs Cogn Sci
Impact Factor: 3.175

Visual imagery



Abstract Visual mental imagery is our ability to reactivate and manipulate visual representations in the absence of the corresponding visual stimuli, giving rise to the experience of ‘seeing with the mind's eye’. Until relatively recently, visual mental imagery had been investigated by philosophy and cognitive psychology. However, these disciplines lacked the tools required to address empirically some of the important questions they had raised, for instance the extent to which visual mental images rely on some of the same representations that support visual perception. During the last two decades, cognitive neuroscience has leveraged the vast amount of knowledge about the neural basis of primate vision to provide new insights into visual mental imagery processes. Such insights enabled the empirical test of key questions about visual mental imagery using the armamentarium of tools provided by cognitive neuroscience, including electrophysiology and neuroimaging. Using a similar logic, we propose that information about the neural basis of memory systems should be used to further enhance understanding of the neural mechanisms of visual mental imagery.

WIREs Cogn Sci 2011, 2:239–252. DOI: 10.1002/wcs.103

This article is categorized under: Psychology > Perception and Psychophysics

(a) ERPs in response to an object are more negative between 200 and 500 ms when the object is immediately preceded by imagery of a different (‘D’) rather than the same (‘S’) picture. This imagery effect was clear on a frontopolar N350 and a centrofrontal N390 but minimal or absent on the well‐known linguistic N400 at centroparietal scalp locations, demonstrating that the frontal effects were primarily related to visual object knowledge, not linguistic (e.g., name) knowledge.50 Greater negativity to the different picture may reflect greater automatic simulation (imagery) of knowledge when processing a new stimulus than when processing the identical stimulus just seen, for which knowledge processing has been primed.63,64 (b) ERPs in response to a pair of Shepard–Metzler cube objects are more negative when the pair requires mental imagery of rotation than when it does not (none condition), on the frontopolar N350 (and to some extent the centrofrontal N390) between 200 and 700 ms and on the posterior LPC after 500 ms.61
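The windowed mean-amplitude comparison underlying effects like these can be sketched in a few lines. Everything below is simulated and purely illustrative: the epoch layout, trial counts, noise level, and effect size are assumptions for the sketch, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
srate = 500                               # samples per second (assumed)
times = np.arange(-0.1, 0.8, 1 / srate)   # epoch from -100 to 800 ms

def mean_amplitude(epochs, times, t_start, t_end):
    """Average voltage across trials and timepoints in [t_start, t_end)."""
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, mask].mean()

# Simulate 40 trials per condition; the 'different' condition carries an
# extra negativity between 200 and 500 ms (the hypothesized N350/N390 window).
n_trials = 40
noise = lambda: rng.normal(0.0, 1.0, (n_trials, times.size))
effect = np.where((times >= 0.2) & (times < 0.5), -2.0, 0.0)

same_epochs = noise()            # 'S': identical picture just imagined
diff_epochs = noise() + effect   # 'D': different picture imagined

imagery_effect = (mean_amplitude(diff_epochs, times, 0.2, 0.5)
                  - mean_amplitude(same_epochs, times, 0.2, 0.5))
print(f"D minus S, 200-500 ms: {imagery_effect:.2f} uV")  # negative difference
```

In a real analysis the same windowed average would be computed per subject and electrode site and submitted to statistics, but the core operation is just this masked mean.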


(a) Diagram of an experimental trial for the visual mental imagery and perception conditions in an ERP adaptation paradigm (only trials with test faces preceded by adaptor faces are shown). Perception and imagery trials have a parallel structure. In the perception trials, left side, an adaptor face was perceived and subjects pressed a key as soon as they identified the person. The test face appeared 200 ms after the key press. In the imagery trials, right side, an appropriate face (the adaptor) was visualized upon seeing the corresponding name (which was on the screen for 300 ms). Subjects pressed a key as soon as they had generated a vivid mental image, and the test face appeared 200 ms after this key press, as in the perception trials. There was no task on the test faces. (b) Perceptual (left) and imagery (right) N170 adaptation effects on test faces. Data are for test faces (F) preceded by face or control object adaptors (perceived or visualized), resulting in f–F (dashed line) and o–F (solid line) trials, respectively. Grand average ERP results between −100 and 200 ms at the right occipitotemporal site indicated in the schematic head. Data are referenced to the average of all sites and plotted negative voltage up. Face adaptors suppress the N170, whereas imagery adaptors enhance it (relative to object adaptors). (Reprinted with permission from Ref 50. Copyright 2008 Elsevier).
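The trial timeline described above can be sketched as a simple event schedule. Only the 300 ms name duration and the fixed 200 ms press-to-test interval come from the caption; the event labels, function name, and the example key-press latency are illustrative assumptions.

```python
# Sketch of one trial in the adaptation paradigm described in the caption.
# Times are in milliseconds relative to trial onset.

def build_trial(condition, adaptor, key_press_ms):
    """Return (time_ms, event) pairs for one perception or imagery trial."""
    events = []
    if condition == "perception":
        events.append((0, f"show adaptor: {adaptor}"))       # subject perceives the adaptor
    elif condition == "imagery":
        events.append((0, f"show name of: {adaptor}"))       # cue for visualization
        events.append((300, "name off; visualize adaptor"))  # name shown for 300 ms
    else:
        raise ValueError(f"unknown condition: {condition}")
    events.append((key_press_ms, "key press"))               # person identified / image vivid
    events.append((key_press_ms + 200, "show test face"))    # fixed 200 ms gap in both conditions
    return events

# Example: an imagery trial where the subject takes 1500 ms to form the image.
trial = build_trial("imagery", "famous face", key_press_ms=1500)
for t, event in trial:
    print(f"{t:>5} ms  {event}")
```

Because the test face is time-locked to the subject's own key press in both conditions, the adaptor-to-test interval is matched across perception and imagery, which is what makes the two adaptation effects comparable.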


Schematic of the multi‐voxel pattern analysis logic applied to comparing visual perception and imagery datasets. The pattern of activation during perception and imagery of two categories of stimuli is measured in one or more regions of interest (left). A classifier is used to find an optimal hypersurface (usually employing generalization methods) that distinguishes the two stimulus categories (X and Y) during perception. Note that the classifier in the figure has only two axes, for simplicity; in general, the classifier operates in an N‐dimensional space with one axis per voxel, and the classification hypersurface lies in that space. Finally, the classifier trained on the perception data is tested on the data from the two categories in the imagery condition (right).
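A minimal sketch of this cross-decoding logic, using simulated voxel patterns and a nearest-centroid linear classifier as a stand-in for the classifiers actually used in such studies. The signal strengths, voxel and trial counts are arbitrary assumptions chosen so that imagery patterns are weaker, noisier versions of the perception patterns.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials = 50, 60
axis = rng.normal(0, 1, n_voxels)      # shared category-discriminating direction
axis /= np.linalg.norm(axis)

def simulate(signal, n):
    """n trials per category: +/- signal along the shared axis, plus noise."""
    x = signal * axis + rng.normal(0, 1, (n, n_voxels))   # category X trials
    y = -signal * axis + rng.normal(0, 1, (n, n_voxels))  # category Y trials
    return np.vstack([x, y]), np.array([0] * n + [1] * n)

train_X, train_y = simulate(signal=2.0, n=n_trials)   # perception: strong signal
test_X, test_y = simulate(signal=0.8, n=n_trials)     # imagery: weaker signal

# Nearest-centroid classifier: the decision hyperplane is the perpendicular
# bisector of the two class centroids in N-voxel space.
c0 = train_X[train_y == 0].mean(axis=0)
c1 = train_X[train_y == 1].mean(axis=0)
w = c0 - c1
b = 0.5 * (c1 @ c1 - c0 @ c0)
pred = (test_X @ w + b < 0).astype(int)   # 1 when closer to centroid c1

accuracy = (pred == test_y).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")  # above chance (0.5)
```

Above-chance accuracy on the imagery data for a classifier trained only on perception data is what licenses the inference that the two conditions share category-specific activation patterns.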


Maps comparing brain activation during perception and imagery of objects. Activation maps are displayed on a normalized brain (coronal sections), with the left side of the brain shown on the right. The pattern of similarity is shown with six sections for (a) frontal regions, (b) parietal and temporal regions, and (c) parietal and occipital regions. The position of each section is shown on the sagittal views (bottom). The three columns in each panel show activation maps (Z scores) for the perception condition, the imagery condition, and the contrast between perception and imagery. The pattern of overlap is 100% in the frontal lobe, as indicated by the lack of any active voxels in the third column. (Reprinted with permission from Ref 30. Copyright 2004 Elsevier).

