How to cite this WIREs title:
WIREs Comp Stat

Support vector machine regularization


Abstract

Finding the best decision boundary for a classification problem involves covariance structures, distance measures, and eigenvectors. This article considers how eigenstructures are an inherent part of the support vector machine (SVM) functional basis that encodes the geometric features of a separating hyperplane. SVM learning capacity involves an eigenvector set that spans the parameter space being learned. The linear SVM has been shown to have insufficient learning capacity when the number of training examples exceeds the dimension of the feature space: in this case, only an incomplete eigenvector set spans the observation space. SVM architectures built on such incomplete eigenstructures lack the learning capacity needed for good separating hyperplanes. Proper regularization, however, ensures that two essential kinds of 'bias' are encoded within SVM functional mappings: an appropriate set of algebraic (and thus geometric) relationships and a sufficient eigenstructure set.

WIREs Comp Stat 2011, 3, 204–215. DOI: 10.1002/wics.149

This article is categorized under: Statistical Learning and Exploratory Methods of the Data Sciences > Support Vector Machines
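The abstract's claim that the linear SVM's eigenvector set is incomplete when the number of training examples exceeds the feature dimension can be seen directly in the Gram matrix of the linear kernel. The following is a minimal NumPy sketch (the dataset, dimensions, and tolerance are illustrative choices, not taken from the article): with n = 600 points in a d = 2 feature space, the n × n Gram matrix has rank at most d, so all but d of its eigenvalues vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 600, 2            # 600 training points in a 2-D feature space (n > d)
X = rng.normal(size=(n, d))

# Gram matrix of the linear kernel: n x n, but rank(X @ X.T) <= d, so at
# most d of its n eigenvalues are nonzero -- the eigenvector set spanning
# the observation space is incomplete.
G = X @ X.T
eigvals = np.linalg.eigvalsh(G)
nonzero = int(np.sum(eigvals > 1e-8 * eigvals.max()))
print(nonzero)  # -> 2, the feature dimension d
```

Any kernel that maps the data into a richer feature space (or an explicit regularizer) changes this count, which is one way to read the article's argument for regularization.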

Figures

The Bayes decision boundary for linearly separable Gaussian binary classification.

Soft margin L2 support vector machine decision boundary estimate based on 600 training data points with ϵ = 10⁻⁸.

Soft margin L2 support vector machine decision boundary estimate based on 400 training data points with ϵ = 10⁻⁸.

Soft margin L1 support vector machine decision boundary estimate based on 600 training data points with C = 1.

Soft margin L1 support vector machine decision boundary estimate based on 600 training data points with C = 1000.

Soft margin L1 support vector machine decision boundary estimate based on 400 training data points with C = 1.

Soft margin L1 support vector machine decision boundary estimate based on 400 training data points with C = 100.

Soft margin L1 support vector machine decision boundary estimate based on 400 training data points with C = 1000.
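The captions above compare soft-margin SVM boundaries across values of the regularization parameter C. The article's own solver and data are not reproduced here; as a stand-in, the sketch below fits a hinge-loss (soft-margin) linear SVM with a Pegasos-style stochastic subgradient method on a hypothetical pair of well-separated Gaussian classes, and evaluates the fitted boundary for a small and a large C.

```python
import numpy as np

def pegasos_svm(X, y, C=1.0, epochs=20, seed=0):
    """Pegasos-style subgradient solver for the soft-margin hinge-loss SVM
    primal: min_w  lam/2 * ||w||^2 + (1/n) * sum(max(0, 1 - y_i * w.x_i)),
    with lam = 1/(C*n), so larger C means weaker regularization."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    lam = 1.0 / (C * n)
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # standard Pegasos step size
            if y[i] * (X[i] @ w) < 1:      # margin violation: hinge active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Two well-separated Gaussian classes (no bias term, so the decision
# boundary passes through the origin -- adequate for this symmetric data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)),
               rng.normal(2.0, 1.0, (200, 2))])
y = np.hstack([-np.ones(200), np.ones(200)])

for C in (1.0, 1000.0):
    w = pegasos_svm(X, y, C=C)
    acc = float(np.mean(np.sign(X @ w) == y))
    print(f"C={C:g}  train accuracy={acc:.3f}")
```

On data this cleanly separated, both C values recover a near-Bayes boundary; the figures' contrast between C = 1 and C = 1000 matters more when the classes overlap and the margin/misclassification trade-off becomes binding.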

