How to cite this WIREs title:
WIREs Data Mining Knowl Discov
Impact Factor: 7.250

Support vector machines in engineering: an overview



This paper provides an overview of the support vector machine (SVM) methodology and its applicability to real-world engineering problems. Specifically, the aim of this study is to review the current state of the SVM technique and to show some of its latest successful results on real-world problems in different engineering fields. The paper starts by reviewing the main basic concepts of SVMs and kernel methods. Kernel theory, SVMs, support vector regression (SVR), SVMs in signal processing, and the hybridization of SVMs with meta-heuristics are fully described in the first part of this paper. The adoption of SVMs in engineering is nowadays a fact. As we illustrate in this paper, SVMs can handle high-dimensional, heterogeneous, and scarcely labeled datasets very efficiently, and they can also be successfully tailored to particular applications. The second part of this review is devoted to different case studies of engineering problems in which the application of the SVM methodology has led to excellent results. First, we discuss the application of SVR algorithms to two renewable energy problems: wind speed prediction from measurements at neighboring stations, and wind speed reconstruction using synoptic-pressure data. The application of SVMs to noninvasive estimation of cardiac indices is described next, along with the results obtained. The application of SVMs to problems of functional magnetic resonance imaging (fMRI) data processing is then discussed: brain decoding and mental disorder characterization. The following application deals with antenna array processing, namely SVMs for spatial nonlinear beamforming, and the application of SVMs to a problem of arrival angle detection. Finally, the application of SVMs to remote sensing image classification and target detection problems closes this review. WIREs Data Mining Knowl Discov 2014, 4:234–267.
doi: 10.1002/widm.1125
This article is categorized under:
Technologies > Computational Intelligence
Technologies > Machine Learning
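As a concrete, minimal illustration of the soft-margin kernel SVM reviewed in the first part of the paper, the sketch below (using scikit-learn rather than any implementation from the reviewed works; the synthetic ring dataset and the C and gamma values are illustrative assumptions) separates two classes that are not linearly separable in the input space:

```python
import numpy as np
from sklearn.svm import SVC

# Two nonlinearly separable classes: points inside vs. outside the unit circle.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)

# Soft-margin SVM: C penalizes the slack variables xi_i that absorb errors,
# gamma is the RBF kernel width. Both values here are illustrative.
clf = SVC(kernel="rbf", C=10.0, gamma=1.0)
clf.fit(X, y)

print("training accuracy:", clf.score(X, y))
print("number of support vectors:", clf.n_support_.sum())
```

Only the samples on or inside the margin end up as support vectors, so the learned decision function depends on a subset of the training data.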
Illustration of kernel classifiers. (a) Support vector machine (SVM): linear decision hyperplanes in a nonlinearly transformed feature space, where slack variables ξi are included to deal with errors. (b) Support vector domain description (SVDD): the hypersphere containing the target data is described by center ‘a’ and radius R. Samples on the boundary and outside the ball are unbounded and bounded support vectors, respectively. (c) One-class support vector machine (OC-SVM): another way of solving the data description problem, in which all samples from the target class are mapped with maximum distance to the origin.
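The OC-SVM of panel (c) can be sketched with scikit-learn's OneClassSVM (a generic sketch, not tied to any experiment in the paper; the synthetic data and the nu and gamma values are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Target class: a tight Gaussian cloud; a few distant points act as outliers.
target = rng.normal(loc=0.0, scale=0.3, size=(100, 2))
outliers = rng.uniform(low=-4.0, high=4.0, size=(10, 2))

# nu upper-bounds the fraction of training samples left outside the support
# region (bounded support vectors); gamma is the RBF width. Values illustrative.
oc = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5)
oc.fit(target)

# predict: +1 = inside the learned support region, -1 = outside.
inside = (oc.predict(target) == 1).mean()
flagged = (oc.predict(outliers) == -1).mean()
print(f"target kept: {inside:.2f}, outliers flagged: {flagged:.2f}")
```

Only samples from the target class are used for training, which is what makes this family of methods attractive for data description problems such as the one-class remote sensing classification discussed later.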
From left to right: True thematic map for the Rome dataset in 1999, and classification maps by different one‐class methods. White pixels represent the class ‘nonurban’, black pixels are ‘unknown class’, and gray pixels are ‘urban’. We indicate the kappa coefficient and overall accuracy averaged over 10 realizations.
Six signals in additive white Gaussian noise (AWGN), where the first and the second are coherent. The left panel compares the spatially smoothed versions of the MUSIC and MVDR algorithms. The right panel shows the spatially smoothed versions of the SVR-MUSIC and SVR-MVDR algorithms with 25 elements and 50 snapshots.
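The SVR variants compared in this figure build on the classical MUSIC estimator. As background, the sketch below implements only standard MUSIC for two uncorrelated sources on a uniform linear array (spatial smoothing and the SVR variants are omitted; the array size, SNR, and angles are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 8, 200                           # array elements, snapshots
true_angles = [-20.0, 30.0]             # degrees

def steering(theta_deg, M):
    # Steering vector of a half-wavelength-spaced uniform linear array.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

# Two uncorrelated unit-power sources plus white noise (about 20 dB SNR).
A = np.column_stack([steering(a, M) for a in true_angles])
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                  # sample covariance matrix
_, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                # noise subspace (M minus 2 sources)

# MUSIC pseudospectrum: large where a(theta) is orthogonal to the noise subspace.
grid = np.arange(-90.0, 90.5, 0.5)
spectrum = np.array(
    [1.0 / np.linalg.norm(En.conj().T @ steering(g, M)) ** 2 for g in grid])

# Pick the two largest well-separated peaks.
first = grid[np.argmax(spectrum)]
away = np.abs(grid - first) > 5.0
second = grid[away][np.argmax(spectrum[away])]
est = sorted([first, second])
print("estimated angles:", est)
```

Coherent sources, as in the figure, break the rank assumption behind this eigendecomposition, which is why the spatially smoothed versions are needed there.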
Bit error rate (BER) performance as a function of thermal noise power for the linear algorithms and for SVR with temporal reference (SVR–TR), SVR with spatial reference (SVR–SR), kernel temporal reference (kernel–TR), and kernel spatial reference (kernel–SR).
Left: independent component analysis (ICA) task-related brain map. Center: ICA default mode brain map. Right: general linear model (GLM) brain map. Areas into which the maps were partitioned are color-coded according to their associated relevance coefficients.
Maps obtained for visual (a), motor (b), cognitive (c), and auditory (d) activation using the Adaboost algorithm and radial basis function–support vector machine (RBF–SVM) local classifiers, highlighting areas that are important for classification with respect to the other three conditions.
Example of activation t‐maps corresponding to visual (a), motor (b), cognitive (c), and auditory (d) activations in a single subject.
Linear support vector regression (SVR) coefficients representation. (a) Example of a centered and normalized Doppler M-mode image. (b, c) Maps of standardized average coefficients for the Emax and τ regression models, respectively. (d) Space-averaged standardized coefficients. NET (ND) denotes normalized ejection time (normalized distance).
Predicted values of Emax (a) and τ with d = 4 and a linear kernel, depicted as the noninvasive prediction for each observation (continuous line) and the catheter measurement (dashed line), with vertical lines separating observations from different animals.
Wind speed reconstruction with the support vector regression‐genetic algorithm (SVR + GA) approach in part of the considered test set (January to June 2005).
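The SVR + GA hybrid used for this reconstruction evolves SVR hyperparameters with a genetic algorithm. A minimal sketch of that idea (not the authors' exact encoding or operators; the toy data, population size, and mutation scale are illustrative assumptions) using scikit-learn:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Toy stand-in for a wind-speed series: a noisy sinusoid, shuffled for CV.
X = np.linspace(0, 6, 120).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=120)
perm = rng.permutation(120)
X, y = X[perm], y[perm]

def fitness(genome):
    # Cross-validated R^2 of an RBF SVR; genome = (log10 C, log10 gamma, epsilon).
    model = SVR(kernel="rbf", C=10.0 ** genome[0],
                gamma=10.0 ** genome[1], epsilon=abs(genome[2]))
    return cross_val_score(model, X, y, cv=3).mean()

# Minimal generational GA: truncation selection plus Gaussian mutation.
pop = rng.uniform([-1.0, -2.0, 0.0], [3.0, 1.0, 0.5], size=(20, 3))
for _ in range(10):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-5:]]          # the 5 fittest survive
    children = (parents[rng.integers(0, 5, 15)]
                + rng.normal(scale=0.2, size=(15, 3)))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best cross-validated R^2:", fitness(best))
```

Wrapping the cross-validated error as the GA fitness function is what makes this a wrapper-style hybridization of SVR with a meta-heuristic, as discussed in the first part of the paper.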
Wind farm location and grid (0.75°) considered.
Location of the wind measuring towers in Spain and within the wind farm.
Brief taxonomy of the support vector machine (SVM) techniques and applications discussed in this paper.
Illustration of the support vector regression (SVR) model. Samples in the original input space are first mapped to a reproducing kernel Hilbert space (RKHS), where a linear regression is performed. All samples outside a fixed tube of size ε are penalized and become support vectors (double-circled symbols). Penalization is carried out by applying (a) Vapnik's ε-insensitive or (b) ε-Huber cost functions.
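The role of the ε tube described in this caption can be sketched numerically. The snippet below (a sketch using scikit-learn's SVR with Vapnik's ε-insensitive loss; the noisy sinusoid and the C and ε values are illustrative assumptions) shows that widening the tube leaves more samples strictly inside it, and hence produces fewer support vectors:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=80)

# Residuals smaller than epsilon cost nothing under the eps-insensitive loss,
# so only samples on or outside the tube become support vectors.
tight = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
loose = SVR(kernel="rbf", C=10.0, epsilon=0.2).fit(X, y)

print("support vectors (narrow tube):", len(tight.support_))
print("support vectors (wide tube):  ", len(loose.support_))
```

This sparsity is one of the properties that makes SVR attractive in the engineering case studies reviewed above.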
