Generating ensembles of heterogeneous classifiers using Stacked Generalization

Over the last two decades, the machine learning and related communities have conducted numerous studies on improving the performance of a single classifier by combining several classifiers generated by one or more learning algorithms. Bagging and Boosting are the most representative examples of algorithms for generating homogeneous ensembles of classifiers. Stacking, however, has become a commonly used technique for generating ensembles of heterogeneous classifiers since Wolpert presented his study entitled Stacked Generalization in 1992. Studies of Stacking have shown that the selection of the base learning algorithms that generate the ensemble members, the setting of their learning parameters, and the choice of the learning algorithm that builds the meta-classifier are all critical issues. Most studies on this topic select the appropriate combination of base learning algorithms and their learning parameters manually. Other methods, however, determine good Stacking configurations automatically rather than starting from these strong initial assumptions. In this paper, we describe Stacking and its variants and present several examples of application domains. WIREs Data Mining Knowl Discov 2015, 5:21–34. doi: 10.1002/widm.1143
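
To make the two-level procedure concrete, here is a minimal sketch of Stacking, assuming Python and scikit-learn's StackingClassifier (the base learning algorithms and parameters below are illustrative choices, not a configuration from the paper): level-0 classifiers produced by heterogeneous learning algorithms are combined by a level-1 meta-classifier trained on their out-of-fold predictions.

# A minimal sketch, assuming scikit-learn; not the paper's own code.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0: base classifiers generated by different learning algorithms.
base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=5)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
    ("svm", SVC(probability=True)),
]

# Level-1: the meta-classifier is trained on the base classifiers'
# out-of-fold predictions (cv=5), per Wolpert's stacked generalization.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))

Training the meta-classifier on cross-validated rather than resubstitution predictions is what keeps the level-1 learner from simply favoring whichever base classifier overfits the training data most.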

Figure: Influence of diversity on the ensemble decision.
Figure: Generating an ensemble of classifiers using Stacking.
Figure: Overview of the Stacking procedure.
