Classification behavior of 6 classifiers. Red indicates incorrect classification; yellow indicates correct classification


Source publication
Article
In supervised pattern classification, it often happens that a single individual classifier is not able to meet the requirements of the problem. This is the main reason for the successful use of systems composed of several classifiers (classifier ensembles), which seek to obtain better results than any single classifier. The selection of the cla...

Contexts in source publication

Context 1
... the results obtained using a majority vote or another combination method are not necessarily the same. Figure 1 shows a fragment of the classification behavior of 6 classifiers. ...
Context 2
... these cases, all classifiers are correct or incorrect, respectively. In these two examples, if we combine any of the classifiers shown in Figure 1, the result will always be the same and equal to the result of any individual classifier. For example 124, if we apply different ways of combining the classifier outputs using the Vote classifier ensemble, the following can happen: ...
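The plurality-vote combination described above can be sketched as follows. This is a minimal illustration, not the paper's exact Vote implementation; the tie-breaking rule (here: first label reaching the top count) is an assumption.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels for one example by plurality vote.

    predictions: list of labels, one per classifier. Ties are broken by
    the first label to reach the top count (an assumption; the paper's
    Vote ensemble may break ties differently).
    """
    counts = Counter(predictions)
    # most_common(1) returns [(label, count)] for the most frequent label
    return counts.most_common(1)[0][0]

# Six classifiers voting on one example
print(majority_vote(["A", "B", "A", "A", "B", "C"]))  # prints A (3 of 6 votes)
```

If all six classifiers agree (the unanimous cases mentioned above), the vote trivially returns the shared label, which is why combination only matters on the disagreement examples.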
Context 3
... observe in Figure 10 that as the number of classifiers increases, the proportion of the example set where at least one classifier makes a mistake (Figure 2) grows with respect to the example sets where the individual decisions coincide (A and C of Figure 2). Starting from T = 71, the sizes of MR (B) and MRP (B+C of Figure 2) tend to approach the total size of the example set. ...
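The partition above can be sketched from an oracle (correct/incorrect) matrix. The interpretation of Figure 2 assumed here — A = all classifiers correct, C = all classifiers wrong, B = classifiers disagree, with MR = B and MRP = B + C — is inferred from the excerpts, not reproduced from the paper's formal definitions.

```python
import numpy as np

# Oracle matrix: rows = examples, columns = classifiers; 1 = correct, 0 = wrong.
oracle = np.array([
    [1, 1, 1],   # A: unanimous correct
    [0, 0, 0],   # C: unanimous wrong
    [1, 0, 1],   # B: disagreement
    [0, 1, 0],   # B: disagreement
])

all_correct = oracle.all(axis=1)                 # set A
all_wrong = (~oracle.astype(bool)).all(axis=1)   # set C
mr = ~(all_correct | all_wrong)                  # set B: mixed decisions
mrp = mr | all_wrong                             # sets B + C

print(mr.sum(), mrp.sum())  # prints 2 3
```

With more classifiers, a unanimous row becomes rarer, which matches the observation that MR and MRP approach the full example set for large T.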
Context 4
... together with the previous analysis, can justify the greater diversity in MR compared with the diversity calculated on all examples and on MRP. Figure 11 shows the effect of calculating the diversity on each of the three sets for the DF measure. ...
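The DF measure referenced above is commonly the pairwise double-fault measure: the fraction of examples misclassified by both classifiers of a pair, with lower values indicating more diversity. A minimal sketch, assuming that standard definition:

```python
import numpy as np

def double_fault(correct_i, correct_j):
    """Double-fault (DF) diversity for one classifier pair.

    Inputs are boolean arrays over the examples (True = correctly
    classified). Returns the proportion of examples that BOTH
    classifiers misclassify; lower means more diverse.
    """
    both_wrong = np.sum(~correct_i & ~correct_j)
    return both_wrong / len(correct_i)

ci = np.array([True, True, False, False, True])
cj = np.array([True, False, False, True, False])
print(double_fault(ci, cj))  # prints 0.2 (1 of 5 examples wrong for both)
```

Restricting the input arrays to the examples in MR or MRP, as the excerpt describes, changes the denominator and hence the measured diversity.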
Context 5
... example, for the R measure calculated on the Reduced Matrix (see Figure 12), the best (negative) correlation is reached when the classifier ensembles are built with T = 3. If we increase the number of classifiers, the correlation increases toward zero until it is lost, being practically null when classifier ensembles are built with T = 1001 classifiers. ...
Context 6
... for most of the diversity measures, a good correlation with the ensemble accuracy is not achieved. Once again, DF and DIF (see Figure 13) are the measures with the best correlation with the accuracy, coinciding with the results in [8,16,17]. ...
Context 7
... on the other hand, we confirm that the diversity calculated on the examples in MR and in MRP obtains better correlation values with the classifier ensembles formed (see Figure 14). Also, for non-pairwise measures it is better to calculate the diversity on MR or on MRP. ...
Context 8
... in this section, we analyze the results of the diversity measures proposed in this paper: coverage of the classification by the ensemble (CoP), similarity of the classification with respect to the best individual classifier (SimBest), and similarity of the classification with respect to the classifier average (SimProm). Figure 15 shows a scatter plot of the accuracy values of the ensembles formed vs the diversity measured with the coverage of the classification (CoP). We choose three values of T: 3, 31 and 1001. ...
Context 9
... results are presented in [36], where they analyze the terms of good and bad diversity. Another analysis that comes from Figure 15 concerns the contribution of the individual classifiers to the ensemble. ...
Context 10
... to check the behavior of SimProm, we analyze the classifier ensemble accuracy and the average of the individual accuracies. Figure 16 shows that the average individual accuracy of the classifiers used in each classifier ensemble is much smaller than the ensemble accuracy. Therefore, SimProm is the measure with the highest values. ...
Context 11
... as we expected, the SimProm measure is never extracted into the same component as the ensemble accuracy variable. This corroborates the results observed in Figure 17 regarding the weak correlation between these two variables. In the case of the SimBest measure, starting from T ≥ 19 it is included in the component that contains the ensemble accuracy. ...
Context 12
... on the other hand, SimBest begins to correlate better with the ensemble accuracy. In fact, Figure 19 and Figure 20 show how the second and fourth groups, respectively, include this measure together with the ensemble accuracy. Contrary to what was observed for the CoP measure in the principal component analysis, except for T ≤ 13, this measure is not included in the same group as the ensemble accuracy. ...
Context 13
... a value close to zero indicates no relation between the analyzed variables. For the analysis we use the test for correlation between paired samples from the stats module of the statistical package R. Figure 21 presents the results of the correlations between the measures. For simplicity, we only show the upper triangular matrix of the correlations. ...
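The paired-samples correlation test mentioned above (R's `cor.test` in the stats package) can be reproduced in Python with `scipy.stats.pearsonr`, which returns the Pearson coefficient and a two-sided p-value. The data below are hypothetical, chosen only to illustrate a strongly negative diversity–accuracy relation like the one discussed for DF:

```python
from scipy.stats import pearsonr

# Hypothetical paired samples: a diversity measure and ensemble accuracy
diversity = [0.10, 0.15, 0.22, 0.30, 0.41, 0.55]
accuracy = [0.91, 0.89, 0.86, 0.84, 0.80, 0.76]

# Pearson correlation coefficient with a two-sided p-value,
# the Python analogue of R's cor.test(x, y)
r, p = pearsonr(diversity, accuracy)
print(f"r = {r:.3f}, p = {p:.4f}")  # strongly negative r for these data
```

As in the excerpt, an |r| near 1 indicates a strong (here negative) relation, while r near zero indicates none.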