
What eye movements can tell us about sentence comprehension


Abstract

Eye movement data have proven very useful for investigating human sentence processing. Eyetracking research has addressed a wide range of questions, including recovery mechanisms following garden‐pathing, the timing of the processes driving comprehension, the role of anticipation and expectation in parsing, and the role of semantic, pragmatic, and prosodic information. However, there are limitations on the inferences that can be drawn from eye movements. One is the nontrivial interaction between parsing and the eye movement control system, which complicates the interpretation of eye movement data. Detailed computational models that integrate parsing with eye movement control theories have the potential to unpack the complexity of eye movement data and can therefore aid in its interpretation. Another limitation is the difficulty of capturing spatiotemporal patterns in eye movements using traditional word‐based eyetracking measures. Recent research has demonstrated the relevance of these patterns and shown how they can be analyzed. In this review, we focus on reading and present examples demonstrating how eye movement data reveal the events that unfold when the parser runs into difficulty, and how the parsing system interacts with eye movement control. WIREs Cogn Sci 2013, 4:125–134. doi: 10.1002/wcs.1209

This article is categorized under: Linguistics > Computational Models of Language

This figure shows two characteristic scanpath patterns found by von der Malsburg and Vasishth in the Meseguer, Carreiras, and Clifton Spanish reanalysis dataset. One scanpath pattern is a complete re‐start (re‐reading) after the disambiguating region is encountered. The other is a short regression to the immediately preceding word; this is consistent with the suggestion by Mitchell and colleagues of a time‐out: as the parser reanalyzes, the eyes are prevented from moving forward, so they either stay on the current word or briefly revisit the preceding word until syntactic processing is finished. See text for details.
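Von der Malsburg and Vasishth identified these patterns using a scanpath‐similarity analysis; as a purely illustrative sketch (not their actual method, and with invented function names), the two patterns can be distinguished on a raw fixation sequence with a simple heuristic:

```python
def classify_regression(fixations, disambig_idx):
    """Label the first regression launched from the disambiguating region.

    fixations: sequence of fixated word indices (0-based), in temporal order.
    disambig_idx: index of the first word of the disambiguating region.
    This heuristic is hypothetical; thresholds are illustrative only.
    """
    for prev, cur in zip(fixations, fixations[1:]):
        # A regression launched at or after the disambiguating region.
        if prev >= disambig_idx and cur < prev:
            if cur <= 1:
                return "re-start"   # eyes return to the start of the sentence
            if prev - cur == 1:
                return "time-out"   # short hop to the immediately preceding word
            return "other"
    return "no-regression"
```

For example, a reader who jumps from the disambiguating word back to the first word and rereads would be labeled a re‐start, while a one‐word leftward hop would be labeled a time‐out.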


This schematic figure shows how the ACT‐R‐based parsing architecture of Lewis and Vasishth interfaces with the EMMA eye movement control model. The upper panel shows the uninterrupted reading process. The only ACT‐R rule that interacts with eye movement control in a top‐down way is the shift of attention. As soon as an attention shift is requested (ATTENTION), the eye movement module starts the word recognition process (ENCODING) and at the same time programs a saccade to the same word. The preparation stage of saccade programming (EYE PREP) can be canceled by an upcoming attention shift, which leads to the targeted word being skipped. Once the execution stage (EYE EXEC) has begun, the eye movement is carried out inevitably. The completion of the attention shift, which includes the recognition of the word, is the signal for the parsing module (PARSER) to begin integration into the syntactic structure. This involves creating new syntactic nodes, retrieving previously created structural chunks from memory, and finally combining the two grammatically. While the parser carries out these steps, attention is shifted to the next word and a new saccade is programmed. The time needed to retrieve an item from memory varies as a function of decay over time and similarity‐based interference. Consequently, depending on the syntactic configuration of the sentence, the structural integration of a word may still be in progress when recognition of the next word has already completed. This scenario is shown in the lower panel. In this situation, the next word cannot yet be integrated. Instead, a Time‐Out rule fires, which initiates an attention shift to the left of the current word in order to buy time for the integration process to finish.
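The event sequence in the lower panel can be sketched as a toy simulation. All timing constants and names below are illustrative assumptions, not the actual ACT‐R/EMMA parameters: the point is only the control flow, in which a Time‐Out fires whenever the next word is recognized before the parser has finished integrating the previous one.

```python
ENCODING_TIME = 150  # ms to recognize a word (illustrative value, not EMMA's)

def simulate(words, integration_times):
    """Return a list of (time_ms, event, word_index) tuples.

    integration_times[i] is how long (ms) the parser needs to integrate
    word i; long retrievals (e.g., due to interference) inflate it.
    """
    events = []
    t = 0                # current time
    parser_free_at = 0   # when the parser finishes its current integration
    for i, _word in enumerate(words):
        # Attention shift starts encoding; recognition completes later.
        recognized_at = t + ENCODING_TIME
        events.append((recognized_at, "recognized", i))
        if recognized_at < parser_free_at:
            # Parser still busy with the previous word: the Time-Out rule
            # fires, shifting attention leftward to buy time.
            events.append((recognized_at, "time-out", i))
            t = parser_free_at  # wait until the pending integration finishes
        else:
            t = recognized_at
        # Parser begins integrating the newly recognized word.
        parser_free_at = t + integration_times[i]
        events.append((parser_free_at, "integrated", i))
    return events
```

With uniformly fast integrations no Time‐Out fires; giving one word a long (hypothetical) retrieval latency makes the Time‐Out rule fire on the following word, mirroring the lower panel.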

