1.0 Introduction

The practice of combining multiple classifiers into ensembles is inspired by the notion that the combined opinions of a number of human experts are more likely to be correct than those of a single expert. Ideally, a classifier ensemble will collectively perform better than any individual classifier in the ensemble. Although this can certainly be the case, it can also happen that the best individual classifier performs better than the ensemble. It is, however, hoped that the ensemble will at least perform better than the average classifier. This is significant, since it is often impossible to have foolproof a priori knowledge of which classifier is in fact the best, or of which is the worst, so the combination of classifiers into ensembles helps guard against mistakenly choosing a single sub-par classifier.

Typically, a realistic classifier will correctly classify some instances and incorrectly classify others. If a number of different classifiers are combined, it is hoped, from the perspective of classifier combination, that the incorrect classifications produced by each classifier will differ from one another. For example, say one has nine different classifiers with similar, reasonably high success rates (say 80% to 90%), and say that the particular instances that are incorrectly classified by each classifier are independent across the classifiers. The performance of the ensemble as a whole is then likely to be better than that of any one classifier, since the incorrect classifications of individual classifiers are averaged out by the correct classifications of the majority.
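The claim above can be made concrete with a binomial calculation. The sketch below is illustrative, not from the original text: it assumes nine classifiers, each correct with probability 0.85 (within the 80% to 90% range mentioned), with errors fully independent across classifiers, and combination by simple majority vote.

```python
from math import comb

def majority_vote_accuracy(n: int, p: float) -> float:
    """Probability that a majority vote of n independent binary classifiers
    is correct, when each classifier is correct with probability p."""
    need = n // 2 + 1  # smallest number of correct votes that wins the vote
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

single = 0.85                                   # illustrative accuracy
ensemble = majority_vote_accuracy(9, single)
print(f"single: {single:.3f}, ensemble: {ensemble:.3f}")  # ensemble ~ 0.994
```

Under these (admittedly idealised) independence assumptions, the majority vote of nine 85%-accurate classifiers is correct roughly 99.4% of the time. Real classifiers rarely err independently, which is why the correlation issue discussed below matters.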

There are many factors that influence how well an ensemble performs relative to the individual performance of each of its component classifiers. A prominent such factor is the relative effectiveness of each of the classifiers. A single classifier that performs much worse than the other classifiers in an ensemble can have a significant negative impact on overall classification performance. Similarly, it can be disadvantageous to use an ensemble if one has a single classifier that performs much better than all of the other available classifiers.

An additional important factor is the amount of correlation between the incorrect classifications made by each classifier. If all of the classifiers tend to misclassify the same instances, then combining their results will have no benefit. In contrast, a greater amount of independence between the classifiers can result in errors made by individual classifiers being outvoted when the outputs of the ensemble are combined. The important topic of classifier diversity is discussed in more detail in Section 8.
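A toy example makes the two extremes vivid. The predictions below are entirely hypothetical: three classifiers, each 70% accurate on ten instances whose true label is 1, combined by majority vote. In the first case each classifier errs on a different three instances; in the second, all three err on the same three.

```python
truth = [1] * 10  # ground truth for ten hypothetical binary instances

def majority(preds_per_clf, i):
    """Majority vote of the classifiers' predictions for instance i."""
    return 1 if sum(p[i] for p in preds_per_clf) >= 2 else 0

def accuracy(preds_per_clf):
    return sum(majority(preds_per_clf, i) == truth[i] for i in range(10)) / 10

# Diverse errors: each classifier misclassifies a *different* three instances.
diverse = [
    [0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 0, 0, 0, 1],
]
# Correlated errors: all three misclassify the *same* three instances.
correlated = [[0, 0, 0, 1, 1, 1, 1, 1, 1, 1]] * 3

print(accuracy(diverse))     # 1.0 -- every individual error is outvoted
print(accuracy(correlated))  # 0.7 -- no better than a single classifier
```

With diverse errors, no instance is misclassified by more than one classifier, so the vote recovers every mistake; with perfectly correlated errors, the ensemble inherits the errors of its members unchanged.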

There are many different ways in which one can combine classifiers into ensembles, each of which can work well in certain scenarios but not in others. What portion of the training data each classifier is trained on and what subset of the features is made available to each classifier can also significantly affect overall performance.
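Two common ways of inducing such differences between classifiers are to train each member on a bootstrap sample of the instances (as in bagging) and to restrict each member to a random subset of the features (as in the random subspace method). The sketch below uses only synthetic data and illustrative sizes, and stops at the point where each member's training view is assembled; it is one possible scheme, not a prescription from the text.

```python
import random

random.seed(0)  # for reproducibility of this illustration
n_instances, n_features, n_members = 100, 10, 5
# Synthetic training data: 100 instances, 10 numeric features each.
data = [[random.random() for _ in range(n_features)] for _ in range(n_instances)]

members = []
for _ in range(n_members):
    # Bagging: sample instances with replacement, same size as the original set.
    rows = [random.randrange(n_instances) for _ in range(n_instances)]
    # Random subspace: each member sees only half of the features.
    cols = random.sample(range(n_features), n_features // 2)
    members.append((rows, cols))

for rows, cols in members:
    view = [[data[r][c] for c in cols] for r in rows]
    # ... train one component classifier on `view` here ...
```

Because each member is fitted to a different sample of instances and features, the members tend to make different mistakes, which is precisely the kind of diversity the preceding paragraphs argue an ensemble needs.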

The serious study of combining classifiers is still a relatively immature area of research, one that is only now coalescing into a well-understood and well-specified discipline. Parallel, and occasionally contradictory, research in the fields of pattern recognition, machine learning and data fusion has only recently begun to be consolidated. There are currently many unresolved differences of opinion in the published literature, with the result that constructing a good ensemble can still be as much an art as a science.

There is a wide variety of techniques and approaches available, and one must make use of one's knowledge of a particular application domain in order to select the best solution. A good selection can potentially result in better ensemble success rates than any one of the component classifiers could provide individually. A poor selection, however, can result in reduced performance relative to what one would have obtained from a single well-chosen classifier. In either case, the use of an ensemble will most often increase the computational demands of training and classification, as well as overall system complexity.

One should therefore approach the use of ensembles with careful consideration, informed not only by a good understanding of the palette of available ensemble techniques, but also by a good knowledge of the particular application domain.

Next: Reasons for using classifier ensembles

Last modified: April 18, 2005.