Detailed structure of a Transformer Encoder layer.

Source publication
Preprint
Being unconscious and spontaneous, micro-expressions are useful for inferring a person's true emotions even when an attempt is made to conceal them. Due to their short duration and low intensity, the recognition of micro-expressions is a difficult task in affective computing. Early work based on handcrafted spatio-temporal features showed some promise...

Context in source publication

Context 1
... The encoder contains L_T transformer layers; here we use L_T = 12, adopting this value from the ViT-Base model of Dosovitskiy et al. [27] (the pre-trained encoder used in our experiments). Each layer comprises two blocks, a Multi-head Self-attention Mechanism (MSM) and a Position-wise Fully connected Feed-Forward Network (FFN), as shown in Fig. 6. Layer Normalisation (LN) is applied before each block and a residual connection after each block [50], [51]. The output of the transformer layer can be written as ...
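
The equation truncated above follows from the block description. Assuming the standard pre-norm formulation used in ViT [27] (LN before each block, a residual connection after each block), the layer update likely reads

  z'_l = MSM(LN(z_{l-1})) + z_{l-1}
  z_l  = FFN(LN(z'_l)) + z'_l,        l = 1, ..., L_T,

where z_0 is the input token sequence and z_{L_T} is the encoder output.

For concreteness, below is a minimal PyTorch sketch of one such layer. It is an illustration under assumptions, not the authors' code; the hyper-parameters (dim = 768, num_heads = 12, mlp_dim = 3072) follow ViT-Base but are not stated in the excerpt.

import torch
import torch.nn as nn

class PreNormEncoderLayer(nn.Module):
    """One Transformer encoder layer: pre-norm MSM block + pre-norm FFN block."""
    def __init__(self, dim: int = 768, num_heads: int = 12, mlp_dim: int = 3072):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)   # LN applied before the MSM block
        self.msm = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)   # LN applied before the FFN block
        self.ffn = nn.Sequential(      # position-wise feed-forward network
            nn.Linear(dim, mlp_dim),
            nn.GELU(),
            nn.Linear(mlp_dim, dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z'_l = MSM(LN(z_{l-1})) + z_{l-1}
        h = self.ln1(z)
        attn_out, _ = self.msm(h, h, h, need_weights=False)
        z = z + attn_out
        # z_l = FFN(LN(z'_l)) + z'_l
        z = z + self.ffn(self.ln2(z))
        return z

# A full encoder stacks L_T = 12 identical layers, as in ViT-Base:
encoder = nn.Sequential(*[PreNormEncoderLayer() for _ in range(12)])
tokens = torch.randn(1, 197, 768)  # e.g. 196 patch tokens + one class token
out = encoder(tokens)              # shape: (1, 197, 768)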

Similar publications

Article
Being spontaneous, micro-expressions are useful for inferring a person's true emotions even when an attempt is made to conceal them. Due to their short duration and low intensity, the recognition of micro-expressions is a difficult task in affective computing. Early work based on handcrafted spatio-temporal features showed some promise...