As can be seen from the earlier sections of this web page, classifier ensembles are a highly sophisticated area of study, with a daunting array of techniques and approaches. The sheer number of variants makes it impossible to even begin a comprehensive demonstration of classifier ensembles in a single applet without overwhelming those trying to learn from it. So, rather than attempting to demonstrate a wide variety of ensemble algorithms, it was decided to simplify the task as much as possible in order to allow readers to gain an intuitive understanding of how ensemble classifiers work at a fundamental level, and of when it is beneficial to use them rather than a single well-chosen classifier.
Section 8 explains the essential role of diversity in ensemble classification. Indeed, classifier diversity may well be the most important factor influencing the success of an ensemble. This applet was therefore designed with the primary purpose of giving users an intuitive understanding of how classifier diversity can affect ensembles.
Although a precise and non-controversial definition of diversity can be elusive, it is defined here, for the purposes of this applet, as a value inversely proportional to the probability that the component classifiers of an ensemble will misclassify the same input patterns. So, for example, two classifiers with a similar classification accuracy will tend to misclassify the same input patterns if they have a low diversity, but will be more likely to misclassify different input patterns if they have a high diversity. A high diversity between component classifiers is highly beneficial from the perspective of ensemble classification, as this diversity increases the probability that errors of individual classifiers will be “averaged out” by the ensemble.
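This working definition can be made concrete with a small sketch. The following code (the class and method names are illustrative, not part of the applet) measures the fraction of input patterns that two classifiers both misclassify, a quantity sometimes called the “double-fault” rate; diversity, as defined above, is inversely related to it:

```java
public class DiversityDemo {

    /**
     * Fraction of patterns misclassified by BOTH classifiers, given
     * boolean error indicators (true = that classifier was wrong on
     * that pattern). Low diversity corresponds to a high value here.
     */
    static double coincidentErrorRate(boolean[] errA, boolean[] errB) {
        int both = 0;
        for (int i = 0; i < errA.length; i++) {
            if (errA[i] && errB[i]) both++;
        }
        return (double) both / errA.length;
    }

    public static void main(String[] args) {
        // Two classifiers, each wrong on 2 of 6 patterns.
        boolean[] sameErrors     = {true, true, false, false, false, false};
        boolean[] disjointErrors = {false, false, true, true, false, false};

        // Identical errors: low diversity (coincident error rate 2/6).
        System.out.println(coincidentErrorRate(sameErrors, sameErrors));
        // Disjoint errors: high diversity (coincident error rate 0).
        System.out.println(coincidentErrorRate(sameErrors, disjointErrors));
    }
}
```

Note that both example classifiers above have the same individual accuracy (4/6); only the overlap of their errors differs, which is exactly the distinction the applet’s diversity slider controls.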
Upon consideration, it can be seen that the relative success rates of component classifiers are a subject of significant interest with respect to diversity. For example, a set of classifiers with moderately high accuracy will tend to benefit more from high diversity than a set with lower classification accuracy. Relative classifier success rates also play a significant role in determining whether it is profitable to use an ensemble as opposed to a single good classifier. For example, a single classifier that performs much better than the other available classifiers may well outperform all of the classifiers operating together in an ensemble, even if there is high diversity. Scenarios such as these can be verified by experimenting with the applet below.
So, it was decided to emphasize diversity and component classifier success rates as the two basic parameters in the applet below. Although there are numerous other ensemble parameters that could be experimented with (e.g. the number of component classifiers, the type of classifier fusion or selection, the structure of the component classifiers, the architecture of each individual classifier, etc.), the inclusion of too many parameters would have compromised the pedagogical value of the applet by overly complicating it and causing the chain of cause and effect to become ambiguous. As is discussed in the previous sections of this web page, many of these factors can indeed be less influential than one might think. A more significant omission is the inability to vary the exposure of each component classifier to the full sets of features or training instances, as these factors can play a very important role in ensemble effectiveness. Parameters relating to feature and training instance exposure were nonetheless excluded from this applet because they are so closely coupled to the notion of diversity that it would be impossible to vary diversity independently, which is the primary pedagogical goal of this applet. However, a future separate applet illustrating the role of varying exposure to feature and training sets would be highly instructive.
True classifiers were not implemented in this applet, as the goal of this applet is to demonstrate the overall effects of classifier diversity and the relative performance of different component classifiers. The implementation of a particular classifier architecture could have introduced an implementation-dependent bias to the applet, and would have significantly complicated the implementation of the user’s control of diversity and relative performance. What was done instead was to statistically simulate general virtual classifiers using Monte Carlo methods. These virtual classifiers are referred to simply as “classifiers” in the remainder of this text, and are discussed as if they were real classifiers.
The applet consists of five binary classifiers, each of which has a separate user-definable classification accuracy between 50% and 100%. These classifiers classify input patterns into one of two classes, namely class A and class B. These classes are equally distributed (i.e. P(A) = P(B) = 0.5). Each classifier may output only A or B, without any confidence measurement, and each classifier has one equal vote. The individual classifiers combine their results using a simple majority vote, which is to say that the class receiving at least 3 of the 5 votes is output as the result of the ensemble.
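For the special case of maximally diverse (i.e. statistically independent) classifiers, the accuracy of such a 3-of-5 majority vote can be computed analytically from the binomial distribution rather than by simulation. The sketch below (class and method names are illustrative, not taken from the applet code) assumes all five classifiers share the same accuracy p:

```java
public class MajorityVote {

    /**
     * Probability that a strict majority of n independent classifiers,
     * each with accuracy p, vote correctly (upper binomial tail).
     */
    static double ensembleAccuracy(double p, int n) {
        double total = 0.0;
        for (int k = n / 2 + 1; k <= n; k++) {
            total += binomial(n, k) * Math.pow(p, k) * Math.pow(1 - p, n - k);
        }
        return total;
    }

    /** Binomial coefficient C(n, k), computed multiplicatively. */
    static long binomial(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        // Five independent classifiers at 70% accuracy each:
        System.out.printf("%.4f%n", ensembleAccuracy(0.70, 5)); // prints 0.8369
    }
}
```

This illustrates the point made earlier about relative success rates: five independent 70% classifiers vote their way to roughly 84% ensemble accuracy, yet a single available classifier at 90% would still beat that ensemble on its own.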
The amount of diversity of the classifiers can be controlled using a slider. The diversity value determined by the slider position is inversely proportional to the probability that the component classifiers will misclassify the same input patterns. This is implemented here using a dart-board Monte Carlo approach.
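The applet’s actual source is available below; as a rough illustration of how a dart-board Monte Carlo scheme can couple classifier errors, one possible sketch (this is an assumption about the general approach, not the applet’s real implementation, and all names here are hypothetical) is to have each classifier either reuse a shared random “dart” or throw its own, with the diversity setting controlling the mixture:

```java
import java.util.Random;

public class DiversitySim {

    /**
     * One simulated trial. Returns, for each classifier, whether it
     * classified the input correctly. With diversity 0 every classifier
     * reuses the same dart throw, so equally accurate classifiers err on
     * exactly the same patterns; with diversity 1 each classifier throws
     * its own dart, making errors statistically independent.
     */
    static boolean[] simulateTrial(double[] accuracy, double diversity, Random rng) {
        double sharedDart = rng.nextDouble();              // common dart throw
        boolean[] correct = new boolean[accuracy.length];
        for (int i = 0; i < accuracy.length; i++) {
            // With probability `diversity`, throw an independent dart;
            // otherwise reuse the shared one.
            double dart = (rng.nextDouble() < diversity) ? rng.nextDouble() : sharedDart;
            // The dart lands in the "correct" region with probability accuracy[i].
            correct[i] = dart < accuracy[i];
        }
        return correct;
    }
}
```

A convenient property of this construction is that each classifier’s marginal accuracy is unaffected by the diversity setting, since every dart is uniformly distributed; only the correlation between classifiers’ errors changes, which matches the behaviour described for the slider.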
The code, including full JavaDoc documentation, is available in a zipped archive.
The sliders on the left side of the applet control the classification accuracy of each of the five component classifiers. The slider on the top-right controls the diversity of the component classifiers.
The controls at the bottom of the applet allow the user to run simulations. A single input pattern can be fed to the classifiers by pressing the Process Input button. This will cause the classifiers to process an input pattern with the model classification specified in the INPUT MODEL CLASS combo box. The classification output of each classifier will then be displayed to the right of the classification accuracy sliders and the output of the entire ensemble is shown under the diversity slider. A blue result indicates a correct classification and a red result indicates an incorrect classification.
The performances to date of the ensemble and of the individual classifiers are displayed in the right-central part of the screen. These success statistics are cumulative, and are updated each time a test classification is performed using the same settings as previous test classifications. Note that pressing the Clear History button or moving any of the sliders will reset these statistics.
A text box shows the number of correct classifications as well as the total number of attempted classifications. Bar graphs to the right of these text boxes graphically display the success rates. Four different sets of statistics are displayed.
It is, of course, best to run numerous trials in order to obtain better estimates of performance. The user may enter a number between 1 and 5000 in the ITERATIONS TO RUN box and press the Run Iterations button in order to update the performance statistics with the specified number of trials. Note that the interface is automatically frozen during such simulations in order to prevent mid-process changes of parameters, as many iterations may take several moments on slow computers. Although 5000 iterations give a statistically significant estimate in the case of this applet, more iterations can be run by pressing the Run Iterations button again once each simulation is complete. The results shown in the results boxes are always those of the last iteration.
The scenario and architecture implemented in this applet are both simplistic. It is important to realize that both much more powerful ensemble architectures and much more difficult classification problems are typically encountered in real life. Although a great number of factors not included in the applet can and do influence the performance of an ensemble, as can be seen by reading the previous sections of this web page, this pedagogical applet provides a good way of gaining an intuitive understanding of the importance and effects of diversity in classifier ensembles. Such an understanding can reasonably be said to be one of the most valuable tools that a researcher interested in ensemble classification can possess.
Last modified: April 19, 2005.