Figure 6
Four few-shot learning baseline methods for the proposed data descriptors. The image sequence represents the encoded frames. (a) The LSTM method is applied to the ANN; the corresponding SNN version replaces the LSTM layer with a fully connected layer. (b) The Siamese Net compares the spiking convolutional feature representations of the two samples and uses a fully connected regression head to classify their difference. (c) The Nearest Neighbor method uses the Euclidean distance between samples as the classification criterion. (d) The MAML method optimizes the learning process from the gradient perspective so that the classifier generalizes better.
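As a rough illustration of the gradient-based meta-learning in panel (d), the sketch below shows a second-order MAML inner/outer loop in PyTorch on a tiny dense classifier with random stand-in episodes; the layer sizes, learning rates, and the helper `functional_forward` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Tiny dense classifier whose initialization is meta-learned (illustrative, not the paper's SNN).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.1

def functional_forward(x, params):
    # Forward pass written against an explicit parameter list so adapted weights can be used.
    h = torch.relu(x @ params[0].t() + params[1])
    return h @ params[2].t() + params[3]

for episode in range(100):
    # Random stand-in data for one 5-way episode (support and query sets).
    xs, ys = torch.randn(25, 64), torch.randint(0, 5, (25,))
    xq, yq = torch.randn(25, 64), torch.randint(0, 5, (25,))

    params = list(model.parameters())
    # Inner loop: one gradient step on the support set, keeping the graph for the outer update.
    grads = torch.autograd.grad(loss_fn(functional_forward(xs, params), ys),
                                params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]

    # Outer loop: the query loss of the adapted weights updates the original initialization.
    meta_opt.zero_grad()
    loss_fn(functional_forward(xq, adapted), yq).backward()
    meta_opt.step()
```

The outer update back-propagates through the inner step, which is what pushes the initialization toward weights that adapt well from only a few samples.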


Source publication
Preprint
Full-text available
Few-shot learning (learning with a few samples) is one of the most important capacities of the human brain. However, current artificial intelligence systems have difficulty achieving this ability, as do biologically plausible spiking neural networks (SNNs). Datasets for traditional few-shot learning domains provide limited amounts of tempo...

Contexts in source publication

Context 1
... a classical pattern classification method, the nearest neighbor (NN) method [23, 24] can evaluate the separability of samples to a certain extent and provides a benchmark for other algorithms to compare against. As shown in Figure 6 (c), the NN method compares the input sample with every sample in the training set and finds the one closest to the input under the given distance measurement function. The category of the input sample is then decided by that nearest neighbor. ...
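A minimal sketch of this nearest-neighbor baseline, assuming each sample is its encoded frame sequence flattened into a vector and Euclidean distance as the measurement function (the names and shapes here are illustrative):

```python
import numpy as np

def nn_classify(query, support_samples, support_labels):
    """Label the query with the class of its nearest support sample (Euclidean distance)."""
    q = query.reshape(-1)                      # flatten the encoded frame sequence
    dists = [np.linalg.norm(q - s.reshape(-1)) for s in support_samples]
    return support_labels[int(np.argmin(dists))]

# Example: a 5-way 1-shot episode with random stand-in data.
rng = np.random.default_rng(0)
support = [rng.random((20, 32, 32)) for _ in range(5)]   # 5 encoded sequences, 20 frames each
labels = list(range(5))
query = support[2] + 0.01 * rng.random((20, 32, 32))
print(nn_classify(query, support, labels))               # -> 2
```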
Context 2
... the sample pairs belong to the same category, they are labeled 1; otherwise, they are labeled 0. The network compares the two samples to determine whether they belong to the same category. As shown in Figure 6 (b), the two samples share the first half of the network, and the difference between their two feature maps is fed into the subsequent fully connected layer. During the test phase, each sample in the query set is compared with the samples of the support set one by one, and the category with the largest probability value is output as the classification result. ...
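A hedged PyTorch sketch of this pairwise comparison: a shared encoder (a plain convolutional one here, standing in for the spiking convolution described above) maps both samples to feature vectors, their difference feeds a fully connected head that regresses a same-class score, and at test time the query is scored against every support sample. All layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SiameseComparator(nn.Module):
    def __init__(self, in_channels=1, feat_dim=64):
        super().__init__()
        # Shared encoder: both samples of a pair pass through the same weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, feat_dim),
        )
        # Fully connected head: regresses a "same category" probability
        # from the difference of the two feature vectors.
        self.head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, a, b):
        return self.head(self.encoder(a) - self.encoder(b)).squeeze(-1)

# Test phase: compare the query with each support sample, pick the best-matching class.
model = SiameseComparator()
support = torch.rand(5, 1, 32, 32)                       # one sample per class (5-way 1-shot)
query = torch.rand(1, 1, 32, 32).expand(5, -1, -1, -1)   # broadcast the query over the support set
predicted_class = model(query, support).argmax().item()
```

Training such a comparator would minimize a binary cross-entropy loss against the 0/1 pair labels described above.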

Similar publications

Article
Full-text available
As one of the important artificial intelligence fields, brain-like computing attempts to give machines a higher level of intelligence by studying and simulating the cognitive principles of the human brain. A spiking neural network (SNN) is one of the research directions of brain-like computing, characterized by better biological plausibility and stronger computing...
Article
Full-text available
Few-shot learning (learning with a few samples) is one of the most important cognitive abilities of the human brain. However, current artificial intelligence systems have difficulty achieving this ability. Similar challenges also exist for biologically plausible spiking neural networks (SNNs). Datasets for traditional few-shot learning dom...
Article
Full-text available
In this paper, an impulsive control algorithm with time delay is proposed to alleviate the issue of interrelated state variables and the adverse effect of input-voltage or load perturbations on the output of the highly nonlinear DC converter. The impulsive control is introduced with respect to the state information of the control system, while the...