
Bayesian statistical approaches to evaluating cognitive models

Cognitive models aim to explain complex human behavior in terms of hypothesized mechanisms of the mind. These mechanisms can be formalized in terms of mathematical structures containing parameters that are theoretically meaningful. For example, in the case of perceptual decision making, model parameters might correspond to theoretical constructs like response bias, evidence quality, response caution, and the like. Formal cognitive models go beyond verbal models in that cognitive mechanisms are instantiated in terms of mathematics and they go beyond statistical models in that cognitive model parameters are psychologically interpretable. We explore three key elements used to formally evaluate cognitive models: parameter estimation, model prediction, and model selection. We compare and contrast traditional approaches with Bayesian statistical approaches to performing each of these three elements. Traditional approaches rely on an array of seemingly ad hoc techniques, whereas Bayesian statistical approaches rely on a single, principled, internally consistent system. We illustrate the Bayesian statistical approach to evaluating cognitive models using a running example of the Linear Ballistic Accumulator model of decision making (Brown SD, Heathcote A. The simplest complete model of choice response time: linear ballistic accumulation. Cogn Psychol 2008, 57:153–178). WIREs Cogn Sci 2018, 9:e1458. doi: 10.1002/wcs.1458
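
At the heart of the Bayesian approach to parameter estimation is Bayes' rule: the posterior distribution over parameters is proportional to the likelihood of the data times the prior. As a minimal, purely illustrative sketch (not code from the article; the binomial example, Python, and all names here are assumptions), the computation can be carried out on a grid for a single parameter:

import numpy as np
from scipy import stats

# Hypothetical data: 70 correct responses out of 100 trials.
n_trials, n_correct = 100, 70

# Grid over a single accuracy parameter theta in (0, 1).
theta = np.linspace(0.001, 0.999, 999)

prior = stats.beta.pdf(theta, 2, 2)                       # prior belief about theta
likelihood = stats.binom.pmf(n_correct, n_trials, theta)  # p(data | theta)

posterior = prior * likelihood                            # Bayes' rule, up to a constant
posterior /= posterior.sum()                              # normalize over the grid

print("Posterior mean of theta:", (theta * posterior).sum())

The same logic carries over to cognitive models like the LBA, where the likelihood is the model's predicted density over choices and response times.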

This article is categorized under:

  • Neuroscience > Computation
  • Psychology > Reasoning and Decision Making
  • Psychology > Theory and Methods
Prior distributions represent our subjective a priori beliefs about parameter values. This figure illustrates two different prior distributions chosen for v_A in the LBA model, which in one common instantiation is constrained to fall between 0 and 1. Both priors are beta distributions with different shapes (controlled by the two shape parameters α and β of the beta). When α = β = 1, the beta distribution is mathematically equivalent to the uniform distribution, depicted in red. This prior is noninformative because it represents the belief that all values of v_A are equally likely. Depicted in blue is a beta distribution with α = 5 and β = 20, which represents the prior belief that relatively small values of v_A are more likely than relatively large values. This prior is informative because it concentrates a large amount of probability mass over a relatively small range of parameter values.
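
As a rough sketch of how these two priors could be written down in code (scipy and Python are assumptions here, not something the article specifies; the shape parameters follow the caption), both are simply beta densities over v_A:

import numpy as np
from scipy import stats

v_A = np.linspace(0.001, 0.999, 999)

flat_prior = stats.beta.pdf(v_A, 1, 1)          # alpha = beta = 1: uniform, noninformative
informative_prior = stats.beta.pdf(v_A, 5, 20)  # mass concentrated near v_A ≈ 0.2

print("Mode of the informative prior:", v_A[np.argmax(informative_prior)])
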
The Linear Ballistic Accumulator (LBA) model is an example of a formal cognitive model that predicts response probabilities and distributions of response times. The LBA can be used to decompose response time and accuracy into core cognitive parameters reflecting evidence accumulation, response caution, and perceptual encoding. The LBA assumes that, after the stimulus is perceptually encoded (taking time τ), evidence for each response alternative i accumulates with drift rate d_i. Drift rates across trials are sampled from normal distributions with mean v_i and standard deviation s. In our examples, we constrain the mean drift rate for the Response B accumulator to be 1 minus the mean drift rate for the Response A accumulator. The starting point of the evidence accumulation process on each trial is sampled from a uniform distribution between 0 and a. The response is determined by the first accumulator to reach the threshold a + k.
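
This generative story is straightforward to express in code. The following minimal Python sketch (the function name and parameter values are illustrative assumptions, not taken from the article) simulates single LBA trials under the parameterization described in the caption:

import numpy as np

rng = np.random.default_rng(0)

def simulate_lba_trial(v_A, s, a, k, tau):
    """Simulate one LBA trial; returns (choice, response time).
    Mean drift v_A (with v_B = 1 - v_A), drift SD s, start-point range a,
    threshold a + k, non-decision (encoding) time tau."""
    b = a + k                                   # response threshold
    drifts = rng.normal([v_A, 1 - v_A], s)      # trial-specific drift rates
    starts = rng.uniform(0, a, size=2)          # uniform starting points
    with np.errstate(divide="ignore"):
        # Accumulators with nonpositive drift never reach threshold.
        times = np.where(drifts > 0, (b - starts) / drifts, np.inf)
    winner = int(np.argmin(times))              # first accumulator to hit threshold
    return ("A", "B")[winner], tau + times[winner]

# Example: a few trials under illustrative parameter values.
for _ in range(3):
    print(simulate_lba_trial(v_A=0.7, s=0.3, a=0.5, k=0.2, tau=0.2))
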
(a) The path of the Markov chain for τ and v_A. The chain begins in a low-density region around τ = 0.2 and v_A = 0.2 and quickly moves to a higher-density region, as governed by the Metropolis acceptance ratio. (b) The resulting samples drawn from the joint posterior distribution of τ and v_A, excluding the first 100 samples as burn-in. (c) The path the chain took over the marginal distribution of v_A at each iteration of the algorithm; the resulting marginal distribution is plotted in (d). The path of the chain over the marginal distribution of τ and the resulting samples are shown in (e) and (f), respectively.
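
The sampler illustrated here is a standard random-walk Metropolis scheme. The sketch below (a generic toy version with a stand-in log posterior, not the article's actual sampler or likelihood) shows the accept/reject step and the burn-in discard:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def log_posterior(theta):
    """Stand-in log posterior over (tau, v_A); a real application would use
    the LBA likelihood of the observed choices and response times."""
    tau, v_A = theta
    if not (0 < tau < 1 and 0 < v_A < 1):
        return -np.inf
    return stats.norm.logpdf(tau, 0.4, 0.05) + stats.norm.logpdf(v_A, 0.7, 0.1)

n_iter, burn_in = 5000, 100
chain = np.empty((n_iter, 2))
chain[0] = [0.2, 0.2]                       # start in a low-density region
current_lp = log_posterior(chain[0])

for t in range(1, n_iter):
    proposal = chain[t - 1] + rng.normal(0, 0.05, size=2)  # random-walk proposal
    proposal_lp = log_posterior(proposal)
    # Metropolis rule: accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        chain[t], current_lp = proposal, proposal_lp
    else:
        chain[t] = chain[t - 1]

posterior_samples = chain[burn_in:]          # discard burn-in
print("Posterior means (tau, v_A):", posterior_samples.mean(axis=0))
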
Graphical depiction of the Savage-Dickey ratio test. The dotted line is the prior placed on the effect size and the solid line is the posterior. The black dots mark the heights of the prior and the posterior at an effect size of 0. The ratio of these heights is the Bayes factor, the weight of the evidence. Here the height of the posterior at zero is 178 times smaller than the height of the prior at zero, indicating that the data have decreased our belief that the effect size is zero by a factor of 178.
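
One common way to compute a Savage-Dickey Bayes factor from MCMC output is to estimate the posterior density at zero with a kernel density estimate and divide by the prior density at zero. The sketch below uses simulated stand-in posterior draws and an assumed Cauchy prior purely for illustration (neither is from the article):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Assumed prior on the effect size and fake posterior draws centered on 0.5.
prior = stats.cauchy(loc=0, scale=0.707)
posterior_samples = rng.normal(0.5, 0.1, size=20000)

# Savage-Dickey: BF01 = posterior density at 0 / prior density at 0.
posterior_at_zero = stats.gaussian_kde(posterior_samples)(0.0)[0]
prior_at_zero = prior.pdf(0.0)

bf01 = posterior_at_zero / prior_at_zero
print("BF01 =", bf01, "  BF10 =", 1 / bf01)   # BF10 >> 1 favors a nonzero effect
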
The predictive distribution for each of several posterior samples (black lines) and the overall posterior predictive distribution (red line), plotted against the response time distribution simulated from the LBA (histogram bars).
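
The general recipe behind such a plot is: for each posterior sample, simulate data from the model under those parameter values, then pool (or overlay) the simulated datasets. A minimal sketch, assuming the same illustrative LBA parameterization as above and hypothetical posterior draws:

import numpy as np

rng = np.random.default_rng(3)

def simulate_lba_rts(v_A, s, a, k, tau, n_trials):
    """Vectorized LBA simulation returning response times (choices ignored here)."""
    b = a + k
    drifts = rng.normal([v_A, 1 - v_A], s, size=(n_trials, 2))
    starts = rng.uniform(0, a, size=(n_trials, 2))
    with np.errstate(divide="ignore"):
        times = np.where(drifts > 0, (b - starts) / drifts, np.inf)
    return tau + times.min(axis=1)

# Hypothetical posterior samples for (v_A, tau); other parameters held fixed.
posterior_samples = np.column_stack([rng.normal(0.7, 0.05, 200),
                                     rng.normal(0.2, 0.02, 200)])

# Posterior predictive: simulate data under each posterior sample and pool.
predictive_rts = np.concatenate([
    simulate_lba_rts(v_A, s=0.3, a=0.5, k=0.2, tau=tau, n_trials=100)
    for v_A, tau in posterior_samples
])
predictive_rts = predictive_rts[np.isfinite(predictive_rts)]
print("Posterior predictive median RT:", np.median(predictive_rts))
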