How to cite this WIREs title:
WIREs Cogn Sci
Impact Factor: 3.476

Reframing spatial frames of reference: What can aging tell us about egocentric and allocentric navigation?



Abstract

Representations of space in mind are crucial for navigation, facilitating processes such as remembering landmark locations, understanding spatial relationships between objects, and integrating routes. A significant problem, however, is the lack of consensus on how these representations are encoded and stored in memory. Specifically, the nature of egocentric and allocentric frames of reference in human memory is widely debated. Yet recent investigations of the spatial domain across the lifespan that draw on these distinctions in mnemonic spatial frames of reference have identified age‐related impairments. In this review, we survey the ways in which different terms related to spatial representations in memory have been operationalized in past aging research, and we suggest a taxonomy to provide a common language for future investigations and theoretical discussion.

This article is categorized under:
Psychology > Memory
Neuroscience > Cognition
Psychology > Development and Aging
Egocentric and allocentric spatial frames of reference. (a) Knowing that one is to the right of the grocery store is an example of an egocentric representation of space. (b) Knowing that the library is across from the grocery store is an example of an allocentric representation of space
Summary of egocentric and allocentric frames of reference. A visual summary of the taxonomy outlined in this review regarding spatial frames of reference as representations of space. The top images are separated from the bottom images on the basis of the spatial relationship highlighted in the representation (i.e., subject‐to‐object in egocentric space and object‐to‐object in allocentric space). The left images are separated from the right images on the basis of the viewpoint(s) the observer needs in order to apprehend all relevant landmarks in a space
Multi‐ versus single‐viewpoint allocentric space. (a) A three‐dimensional (3D), multi‐viewpoint representation of allocentric space. Here, the relative relationships between feature locations (striped yellow and blue circles) are encoded with regard to a target location (solid green circle) independent of the navigator's position in space. (b) A two‐dimensional (2D), single‐viewpoint representation of allocentric map‐like space, with landmarks encoded relative to each other as indicated by the colored circles. In spatial memory tasks, older adults tend to perform better on 2D single‐viewpoint allocentric tasks than on 3D multi‐viewpoint allocentric ones, suggesting a fundamental difference between these two spatial representations that needs to be clarified
Single‐viewpoint versus multi‐viewpoint egocentric space. (a) Egocentric‐single‐viewpoint representations account for spatial information in small‐scale, vista space that can be fully apprehended from a single position in space. In this example, the full kitchen can be seen from a single perspective relative to one's own position (indicated by the eye icon), including key features as indicated by the orange circles. (b) Egocentric‐multi‐viewpoint representations need to account for spatial information in large‐scale environmental space, which is not always visible from a single point of view and needs to be continuously updated through time. For example, a city square is a large‐scale environment that requires head rotations to see all the important features from multiple perspectives (indicated by the eye icons on the orange arrows)
Path integration. An example of a virtual reality arena presented from a first‐person perspective on a computer screen, created using OpenMaze (https://openmaze.duncanlab.org/) in Unity (Unity Technologies, San Francisco, CA). Participants move through the space using the keyboard arrow keys or a joystick, and what is visible on the screen at any given moment represents their field of view. Solid‐lined arrows represent the path taken between the start point (location marker A, in orange) and intermediate stops (location markers B, C, and D, in blue). The dotted‐lined arrow represents the inferred direct path back to the start point, integrated from the outbound route rather than explicitly learned during the path integration task
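The computation underlying the dotted‐lined shortcut can be illustrated in a minimal sketch (not from the article): if each outbound leg is treated as a 2D displacement vector, the homing vector back to the start is simply the negative of their sum. The leg coordinates below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Illustrative sketch of path integration as vector summation.
# Each outbound leg (e.g., A->B, B->C, C->D) is a (dx, dy)
# displacement; the direct path home is the negative of their sum.

def homing_vector(legs):
    """Return the displacement leading from the end of the
    outbound path straight back to its start point."""
    dx = sum(step[0] for step in legs)
    dy = sum(step[1] for step in legs)
    return (-dx, -dy)

# Hypothetical outbound legs: A->B, B->C, C->D
legs = [(3.0, 0.0), (0.0, 4.0), (-1.0, 1.0)]
print(homing_vector(legs))  # (-2.0, -5.0)
```

A navigator who continuously accumulates self‐motion in this way never needs to have seen the return route: the shortcut falls out of the running sum, which is what makes path integration a useful probe of spatial updating across the lifespan.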

