Bayesian models of cognition

Abstract

There has been a recent explosion in research applying Bayesian models to cognitive phenomena. This development has resulted from the realization that, across a wide variety of tasks, the fundamental problem the cognitive system confronts is coping with uncertainty. From visual scene recognition to on‐line language comprehension, from categorizing stimuli to determining how convincing an argument is, people must deal with the incompleteness of the information they possess to perform these tasks, many of which have important survival‐related consequences. This paper provides a review of Bayesian models of cognition, dividing them up by the different aspects of cognition to which they have been applied. The paper begins with a brief review of Bayesian inference; this falls short of a full technical introduction, but the reader is referred to the relevant literature for further details. There follow reviews of Bayesian models in Perception, Categorization, Learning and Causality, Language Processing, Inductive Reasoning, Deductive Reasoning, and Argumentation. In all these areas, it is argued that sophisticated Bayesian models are enhancing our understanding of the underlying cognitive computations involved. It is concluded that a major challenge is to extend the evidential basis for these models, especially for accounts of higher‐level cognition. WIREs Cogn Sci 2010, 1, 811–823.

This article is categorized under: Psychology > Theory and Methods

Inductive arguments vary in strength. The conclusion in argument (a) may seem stronger, or more probable given the evidence, than the conclusion in (b).

The mean acceptance ratings for Ref 100 by evidence (1 vs. 50 experiments), prior belief (strong vs. weak), and argument type (positive vs. negative). CI = confidence interval (N = 84).

Positive (P(T|e)) and negative (P(¬T|¬e)) test validity. These probabilities can be calculated from the sensitivity (P(e|T)) and the selectivity (P(¬e|¬T)) of the test and the prior belief that T is true (P(T)) using Bayes' theorem. Let n denote sensitivity, that is, n = P(e|T); let l denote selectivity, that is, l = P(¬e|¬T); and let h denote the prior probability of drug A being toxic, that is, h = P(T).
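
To make these quantities concrete, here is a minimal sketch (not from the article; the numerical values are invented for illustration) that computes both validities from n, l, and h via Bayes' theorem:

```python
# Positive and negative test validity from sensitivity, selectivity, and prior.
# Notation follows the caption: n = P(e|T), l = P(not-e|not-T), h = P(T).

def positive_validity(n, l, h):
    """P(T|e): probability that drug A is toxic given positive evidence e."""
    return (n * h) / (n * h + (1 - l) * (1 - h))

def negative_validity(n, l, h):
    """P(not-T|not-e): probability that drug A is safe given no positive evidence."""
    return (l * (1 - h)) / (l * (1 - h) + (1 - n) * h)

# Hypothetical values: a fairly sensitive and selective test, low prior toxicity.
n, l, h = 0.9, 0.8, 0.1
print(positive_validity(n, l, h))  # ~0.333
print(negative_validity(n, l, h))  # ~0.986
```

Note that on these (invented) numbers, where the prior probability of toxicity is low, the negative validity comes out far higher than the positive validity.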

Fit of the Bayesian conditionalization model to the empirical data: (a) the fit of the standard account presented in the text; (b) the fit provided by classical logic (modified to incorporate error); (c) the fit of a modified Bayesian conditionalization model (Ref 96).

Bayesian conditionalization. P0 = prior probability, for example, prior to learning that a is a bird; P1 = posterior probability, for example, after learning that a is a bird. By Bayesian conditionalization, P1(flies(a)) = P0(flies(a)|bird(a)). Note that (a) and (b) are perfectly probabilistically compatible, that is, Bayesian conditionalization is non‐monotonic.
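
A minimal sketch of this updating step, assuming a small invented joint distribution over the predicates bird, ostrich, and flies (the probabilities are hypothetical; only the qualitative pattern matters):

```python
# Toy joint distribution P(bird, ostrich, flies) illustrating Bayesian
# conditionalization and its non-monotonicity. Probabilities are invented.

joint = {
    # (bird, ostrich, flies): probability
    (True,  True,  False): 0.02,  # ostriches are birds that do not fly
    (True,  False, True):  0.45,  # most non-ostrich birds fly
    (True,  False, False): 0.03,
    (False, False, True):  0.05,  # some non-birds fly (e.g., bats)
    (False, False, False): 0.45,
}

def prob(pred):
    return sum(p for world, p in joint.items() if pred(world))

def cond(pred, given):
    """P(pred | given), by renormalizing the joint over the given worlds."""
    return prob(lambda w: pred(w) and given(w)) / prob(given)

bird, ostrich, flies = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])

# P1(flies(a)) = P0(flies(a) | bird(a)): high after learning a is a bird...
print(cond(flies, bird))  # 0.9
# ...but low after further learning that a is an ostrich (non-monotonicity).
print(cond(flies, lambda w: bird(w) and ostrich(w)))  # 0.0
```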

Monotonic (a) and non‐monotonic (b) conditional inference by MP. In (a), the additional information that the particular triangle a is red cannot override the original conclusion that, qua triangle, a has three sides. In contrast, in (b), the additional information that the particular bird a is an ostrich does override the original conclusion that, qua bird, a can fly.

The valid inferences, modus ponens (MP) and modus tollens (MT), and the fallacies, denying the antecedent (DA) and affirming the consequent (AC), investigated in conditional inference. These inference schemata are read as follows: if the premises above the line are true, then the conclusion below the line must also be true.
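
These validity facts can be checked mechanically. The sketch below (an illustration, not part of the article) reads the conditional as the material conditional of classical logic and enumerates all truth assignments:

```python
# Verify which of MP, MT, DA, and AC are classically valid by brute force.
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional: "if p then q"

def valid(premises, conclusion):
    """Valid iff the conclusion holds in every assignment satisfying all premises."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

print(valid([implies, lambda p, q: p],     lambda p, q: q))      # MP: True
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # MT: True
print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # DA: False
print(valid([implies, lambda p, q: q],     lambda p, q: p))      # AC: False
```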

Empirical effects. (a) Similarity: when premise and conclusion categories are more similar (rabbits–dogs), inference is stronger than when they are less similar (rabbits–bears). (b) Typicality: typical categories (bluejays) lead to stronger inferences than less typical categories (geese). Variability: variable categories (c) lead to stronger inferences than less variable categories (d). Diversity: diverse categories (f) lead to stronger inferences than less diverse categories (e).
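
Bayesian models of induction typically explain such effects by treating the premises as evidence about which sets of categories share the novel property. The sketch below is a toy model in that spirit (the hypothesis space and prior are invented for illustration; this is not the article's specific model) and reproduces the similarity effect:

```python
# Toy Bayesian property induction: hypotheses are sets of categories that
# share a novel property; premises rule out inconsistent hypotheses.
from itertools import combinations

categories = ["rabbits", "dogs", "bears"]

# All non-empty subsets of the categories serve as hypotheses.
hypotheses = [frozenset(c) for r in range(1, len(categories) + 1)
              for c in combinations(categories, r)]

def prior(h):
    # Invented prior: hypotheses grouping the similar categories rabbits
    # and dogs together get extra weight (a stand-in for similarity knowledge).
    return 2.0 if {"rabbits", "dogs"} <= h else 1.0

def argument_strength(premises, conclusion):
    """Posterior probability that the conclusion category has the property."""
    live = [h for h in hypotheses if set(premises) <= h]  # consistent hypotheses
    z = sum(prior(h) for h in live)
    return sum(prior(h) for h in live if conclusion in h) / z

# Similarity effect: rabbits -> dogs is stronger than rabbits -> bears.
print(argument_strength(["rabbits"], "dogs"))   # ~0.67
print(argument_strength(["rabbits"], "bears"))  # 0.5
```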
