How to cite this WIREs title:
WIREs Cogn Sci
Impact Factor: 2.218

Prototype‐based models in machine learning


An overview is given of prototype‐based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high‐dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so‐called neural gas approach and Kohonen's topology‐preserving self‐organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. WIREs Cogn Sci 2016, 7:92–111. doi: 10.1002/wcs.1378
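The competitive vector quantization scheme discussed in the article can be sketched as a winner-take-all prototype update: for each presented data point, the closest prototype is moved a small step toward it. The following minimal Python sketch uses synthetic two-dimensional data; the cluster centers, learning rate, and number of prototypes are illustrative choices, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 two-dimensional points from two Gaussian clusters
# (cluster centers and spread are illustrative, not from the article).
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2)),
])

# Two prototypes, initialized at one data point from each cluster.
prototypes = data[[0, 100]].copy()

eta = 0.05  # learning rate
for epoch in range(50):
    for x in rng.permutation(data):
        # Winner-take-all: select the closest prototype (Euclidean distance).
        j = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        # Move only the winner a small step toward the data point.
        prototypes[j] += eta * (x - prototypes[j])
```

After training, each prototype settles near the mean of one cluster; on-line updates of this kind perform a stochastic descent on the quantization error.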

This article is categorized under:

  • Psychology > Development and Aging
  • Psychology > Learning
Representation of two‐dimensional data points by prototypes. In each panel, 200 data points are displayed as red dots, and prototype positions corresponding to the minimum of H_VQ are marked by filled black circles. The subpanels are referred to in Competitive Vector Quantization.
Visualization of the Iris flower data set (small symbols, see legend for class memberships) and the corresponding GMLVQ prototypes (large symbols) in the space spanned by the two leading eigenvectors of Λ. See Adaptive Distances in Relevance Learning for details.
Visualization of the generalized matrix relevance learning vector quantization (GMLVQ) system obtained for the z‐score transformed Iris flower data set; see Adaptive Distances in Relevance Learning for details. Panel (a) displays the class prototypes as bar plots over the four feature space components: sepal length (feature 1), sepal width (2), petal length (3), and petal width (4). Panel (b) shows the diagonal elements of Λ as a bar plot (upper) and the off‐diagonal elements in color code (lower).
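The relevance matrix Λ shown in panel (b) parameterizes the adaptive distance used in GMLVQ, d_Λ(x, w) = (x − w)ᵀ Λ (x − w), where the factorization Λ = ΩᵀΩ keeps the measure non-negative. A minimal sketch follows; the matrix Ω below is an arbitrary illustrative choice, not one learned from the Iris data.

```python
import numpy as np

# Illustrative 2 x 4 matrix Omega (NOT learned from the Iris data).
# Lambda = Omega^T Omega is positive semi-definite by construction.
Omega = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.0, 1.0, -0.3, 0.0]])
Lam = Omega.T @ Omega

def adaptive_distance(x, w, Lam):
    """GMLVQ distance d(x, w) = (x - w)^T Lambda (x - w)."""
    d = x - w
    return float(d @ Lam @ d)

# Hypothetical four-dimensional feature vectors on an Iris-like scale.
x = np.array([5.1, 3.5, 1.4, 0.2])
w = np.array([5.0, 3.4, 1.5, 0.3])
```

Because Λ = ΩᵀΩ, the distance equals the squared Euclidean norm of Ω(x − w), so GMLVQ effectively learns a linear transformation of feature space; projecting the data onto the leading eigenvectors of Λ yields discriminative low-dimensional visualizations.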
Illustration of the k‐nearest‐neighbor classifier (a) and nearest‐prototype learning vector quantization (LVQ) classification (b). The same two‐dimensional dataset with three different classes and piecewise linear decision boundaries is displayed in both panels; see (b) for a legend. Prototypes are marked by larger symbols in panel (b).
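The nearest-prototype classification in panel (b) assigns each point the label of its closest prototype. A basic training scheme is LVQ1: the winning prototype is attracted toward a correctly labeled sample and repelled from a mislabeled one. A minimal sketch on synthetic data (two classes rather than the figure's three; all data and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic labeled data: two Gaussian classes (illustrative values).
X = np.vstack([
    rng.normal([0.0, 0.0], 0.3, (100, 2)),
    rng.normal([2.0, 0.0], 0.3, (100, 2)),
])
y = np.array([0] * 100 + [1] * 100)

# One prototype per class, initialized at the class-conditional means.
W = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
c_w = np.array([0, 1])  # prototype labels

eta = 0.02  # learning rate
for epoch in range(30):
    for i in rng.permutation(len(X)):
        j = np.argmin(np.linalg.norm(W - X[i], axis=1))  # winner
        # LVQ1: attract the winner if labels agree, repel it otherwise.
        sign = 1.0 if c_w[j] == y[i] else -1.0
        W[j] += sign * eta * (X[i] - W[j])

def predict(x):
    """Nearest-prototype classification."""
    return c_w[np.argmin(np.linalg.norm(W - x, axis=1))]
```

The resulting decision boundary is piecewise linear, as in the figure: each point is assigned to the class of the nearest prototype under the Euclidean distance.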
Visualization of the U‐matrix in a self‐organizing map (SOM) with 6 × 4 prototypes, as obtained from the UCI Iris flower data set. Panel (a) shows the U‐matrix elements in color code. For comparison, panel (b) displays the post hoc class labels assigned to the prototypes by majority vote. Empty sites correspond to prototypes with an empty receptive field, i.e., prototypes that are the winner for none of the samples. See Self‐Organizing Maps for details.
Visualization of the component planes in a self‐organizing map (SOM) with 6 × 4 prototypes, as obtained from the UCI Iris flower data set; see Self‐Organizing Maps for details.
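A SOM of the kind underlying these figures arranges prototypes on a fixed grid and updates not only the best-matching unit but also its grid neighbors, which is what preserves topology; the U-matrix then records the average distance of each prototype to its grid neighbors, with large values indicating cluster boundaries. In the minimal sketch below, the 6 × 4 grid matches the figure, but the data and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: 200 points, uniform in the unit square.
data = rng.uniform(0.0, 1.0, size=(200, 2))

# 6 x 4 grid of prototypes, as in the figure; each unit has a grid coordinate.
grid = np.array([(r, c) for r in range(6) for c in range(4)], dtype=float)
W = rng.uniform(0.0, 1.0, size=(len(grid), 2))

eta, sigma = 0.1, 1.5  # learning rate and neighborhood width (illustrative)
for epoch in range(30):
    for x in rng.permutation(data):
        j = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
        # Gaussian neighborhood defined on the grid, not in data space.
        g = np.linalg.norm(grid - grid[j], axis=1)
        h = np.exp(-g ** 2 / (2 * sigma ** 2))
        # All units move toward x, weighted by grid proximity to the winner.
        W += eta * h[:, None] * (x - W)

# U-matrix: mean distance of each prototype to its direct grid neighbors.
U = np.array([
    np.mean([np.linalg.norm(W[j] - W[k]) for k in range(len(grid))
             if 0.0 < np.linalg.norm(grid[j] - grid[k]) <= 1.0])
    for j in range(len(grid))
])
```

In practice, eta and sigma are usually decreased over the course of training; they are kept fixed here only for brevity.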
