Figure 4.1: Using NEAT for multiclass classification

Source publication (thesis)
Multiclass classification is a fundamental and challenging task in machine learning. Class binarization is a popular method to achieve multiclass classification by converting a multiclass problem into multiple binary classification problems. Neuroevolution, such as NeuroEvolution of Augmenting Topologies (NEAT), is broadly used to generate Artificial Neural Networks (ANNs)...

Contexts in source publication

Context 1
... use it as the baseline method in this thesis (also referred to as standard-NEAT [17]). Figure 4.1 shows the structure of this model. ...
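In this baseline, a single NEAT-evolved network has one output node per class, and the predicted class is the one with the largest activation. A minimal sketch of the prediction step, assuming the evolved network exposes an activate(inputs) method that returns one activation per class (as in neat-python's FeedForwardNetwork; everything else here is illustrative):

    # Standard-NEAT multiclass prediction: one evolved network with k output
    # nodes, one per class; predict the class with the largest activation.
    def predict_multiclass(net, x):
        outputs = net.activate(x)  # length-k sequence of activations
        return max(range(len(outputs)), key=lambda i: outputs[i])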
Context 2
... McDonnell et al. (2018) proposed two class binarization techniques, One-vs-One NEAT (OvO-NEAT) and One-vs-All NEAT (OvA-NEAT), in which NEAT generates the base learners. The structure of each dichotomizer in these ensembles (shown in the left box in Figure 4.2) is very similar to the one in Figure 4.1; the main difference is that each dichotomizer has only two output nodes. ...
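A minimal sketch of the two decompositions and of OvO's vote-based decoding, under the same assumption that each trained dichotomizer exposes an activate(x) method returning its two output activations; the names and signatures here are illustrative, not from the thesis:

    from collections import Counter
    from itertools import combinations

    def ovo_pairs(classes):
        # One-vs-One: one binary problem per unordered class pair,
        # i.e. k*(k-1)/2 dichotomizers for k classes.
        return list(combinations(classes, 2))

    def ova_tasks(classes):
        # One-vs-All: one binary problem per class (that class vs. the
        # rest), i.e. k dichotomizers for k classes.
        return list(classes)

    def predict_ovo(dichotomizers, x):
        # dichotomizers: {(a, b): net}; each net has two output nodes, and
        # the larger activation casts a vote for class a or class b.
        votes = Counter()
        for (a, b), net in dichotomizers.items():
            out = net.activate(x)
            votes[a if out[0] >= out[1] else b] += 1
        return votes.most_common(1)[0][0]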
Context 3
... The NEAT algorithm that generates ANNs for binary classification is referred to by us as binary-NEAT (B-NEAT). The left box in Figure 4.2 shows an example model of a binary-NEAT classifier. ...
Context 4
... the number of generations for evolving f_i is G/k. In the training stage, we first design a random code matrix M using Algorithm 2 (or design an optimized Minimal ECOC using Algorithm 3). Then we train l dichotomizers using NEAT (the champions of the binary problems' populations in Figure 4.2) to constitute the set of classifiers F. In the decoding stage, we determine the final prediction according to the Hamming distance. ...
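A minimal sketch of the encoding and decoding stages under these definitions, assuming each champion f_j exposes a hypothetical predict(x) method that returns one bit; the validity constraints that Algorithms 2 and 3 impose on M (e.g., avoiding constant or duplicate columns) are omitted:

    import random

    def random_code_matrix(k, l, seed=0):
        # Random k-by-l binary code matrix M: row c is the codeword of
        # class c; column j defines the binary relabelling solved by
        # dichotomizer f_j.
        rng = random.Random(seed)
        return [[rng.randint(0, 1) for _ in range(l)] for _ in range(k)]

    def decode(M, bits):
        # Predict the class whose codeword is closest to the l dichotomizer
        # outputs in Hamming distance.
        def hamming(row):
            return sum(r != b for r, b in zip(row, bits))
        return min(range(len(M)), key=lambda c: hamming(M[c]))

    def predict_ecoc(M, dichotomizers, x):
        bits = [f.predict(x) for f in dichotomizers]  # one bit per f_j
        return decode(M, bits)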
Context 5
... plot a line graph (Figure 6.3) and a box plot (Figure 6.4) to present the relationship between testing accuracy (y-axis, dependent variable) and the number of dichotomizers used in ECOC-NEAT (x-axis, independent variable) on the Digit dataset. As shown in Figure 6.3, testing accuracy increases as the number of base classifiers grows large. ...
Context 6
... we describe in the "Average Classifiers' Training Accuracy" paragraph of Subsection 6.2.1, for a large ECOC-NEAT, the limited number of generations for each dichotomizer does not compromise its high accuracy (see Table 6.1 and Figure 6.4). A similar conclusion can be drawn from the results on Satellite (see Table 6.2) and Ecoli (see Table 6.3). ...
Context 7
... plot the training curves of the optimized Minimal ECOC-NEAT (Figure 6.13) and the random 10-bit ECOC-NEAT (Figure 6.14) on Digit. Both graphs show the trends of training accuracy (orange line) and ACTA (blue line) across generations. ...
Context 8
... the number of generations for each classifier (g) is inversely proportional to the number of classifiers (l), based on Equation 6.2. Therefore, to match the new x-axis, each training curve needs to be stretched horizontally to l times its original length (e.g., the training curve of the 10-bit ECOC-NEAT in Figure 6.15 is extended to 10 times its size in Figure 6.14). ...
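For concreteness, assuming Equation 6.2 divides a fixed total budget G evenly, i.e. g = G/l (consistent with the G/k split quoted in Context 4 above; the numbers below are purely illustrative):

    # Fixed total evolution budget G shared evenly over l dichotomizers:
    G, l = 2000, 10   # illustrative values, not taken from the thesis
    g = G // l        # g = 200 generations per dichotomizer
    # Plotted against a total-generations x-axis, each classifier's curve
    # is therefore stretched horizontally by a factor of l.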