How to cite this WIREs title:
WIREs Cogn Sci

Vision in the natural world



Abstract Historically, the study of visual perception has followed a reductionist strategy, with the goal of understanding complex visually guided behavior through separate analysis of its elemental components. Recent developments in monitoring behavior, such as measurement of eye movements in unconstrained observers, have allowed investigation of the use of vision in the natural world. This has led to a variety of insights that would be difficult to achieve in more constrained experimental contexts. In general, it shifts the focus of vision research away from the properties of the stimulus toward a consideration of the behavioral goals of the observer. It appears that behavioral goals are a critical factor in controlling the acquisition of visual information from the world. This insight has been accompanied by a growing understanding of the importance of reward in modulating the underlying neural mechanisms, and by theoretical developments using reinforcement learning models of complex behavior. These developments provide us with the tools to understand how tasks are represented in the brain, and how they control the acquisition of information through use of gaze. WIREs Cogn Sci 2011 2 158–166 DOI: 10.1002/wcs.113 This article is categorized under: Computer Science > Computer Vision

The yellow circles show the fixations made while a subject makes a peanut butter and jelly sandwich. Views from a video camera mounted on the subject's head have been superimposed to make a composite mosaic that compensates for the subject's head movements during the task, using the method described by Rothkopf & Pelz (2004). The diameter of each yellow circle indicates the duration of the fixation.


The top part of the figure shows a virtual agent in a simulated walking environment. The agent must extract visual information from the environment in order to perform three sub‐tasks: staying on the sidewalk, avoiding blue obstacles, and picking up purple litter objects (achieved by contacting them). The model agent learns how to deploy attention/gaze at each time step. The bottom panel shows seven time steps after learning. The red lines indicate that the agent is using visual information to avoid the obstacle, the blue line indicates that the agent is using information about position on the sidewalk, and the green lines show the agent using vision to intercept the purple litter object.
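The scheduling problem in the caption above — deciding, at each time step, which sub‐task should receive fresh visual information — can be illustrated with a toy simulation. This is only a sketch: the `GROWTH` and `COST` parameters are invented for illustration, and the greedy value‐of‐information rule stands in for the learned reinforcement‐learning policy described in the article. The idea it demonstrates is that an unfixated sub‐task's state estimate drifts, fixation refreshes it, and a value‐driven schedule accrues less cost than allocating gaze at random.

```python
import random

# Hypothetical parameters (not from the article): how fast each sub-task's
# state uncertainty grows while it is not fixated, and how costly that
# uncertainty is (e.g., colliding with an obstacle is expensive).
GROWTH = {"sidewalk": 1.0, "obstacle": 2.0, "litter": 0.5}
COST = {"sidewalk": 1.0, "obstacle": 3.0, "litter": 1.0}

def step_cost(uncertainty):
    """Cost incurred in one time step, given each sub-task's uncertainty."""
    return sum(COST[t] * u for t, u in uncertainty.items())

def run(policy, n_steps=100, seed=0):
    """Simulate n_steps; policy(uncertainty, rng) picks the sub-task to fixate."""
    rng = random.Random(seed)
    uncertainty = {t: 0.0 for t in GROWTH}
    total = 0.0
    for _ in range(n_steps):
        fixated = policy(uncertainty, rng)
        for t in uncertainty:
            # Fixating a sub-task refreshes its state estimate; the others drift.
            uncertainty[t] = 0.0 if t == fixated else uncertainty[t] + GROWTH[t]
        total += step_cost(uncertainty)
    return total

def greedy(uncertainty, rng):
    """Fixate the sub-task whose refresh removes the most projected cost."""
    return max(uncertainty, key=lambda t: COST[t] * (uncertainty[t] + GROWTH[t]))

def uniform_random(uncertainty, rng):
    """Baseline: allocate gaze to a sub-task chosen at random."""
    return rng.choice(list(uncertainty))

greedy_cost = run(greedy)
random_cost = run(uniform_random)
```

With these illustrative parameters, the greedy schedule fixates the high‐stakes obstacle sub‐task most often but still periodically refreshes the sidewalk and litter estimates once their accumulated uncertainty makes them the most valuable target — qualitatively the behavior shown in the bottom panel of the figure.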


Illustration of the task control of fixations. A particular behavioral goal, such as eating a peanut butter and jelly sandwich, can be broken down into sub‐tasks such as picking up the peanut butter jar. To accomplish this, the subject must fixate the jar to guide the pickup action. The fixation in turn is used to acquire specific kinds of information, such as the jar's location, size, orientation, surface texture, and so on.


