FIG 1 - uploaded by Ji Woong Yu
Schematic of (a) indistinguishable particles and (b) distinguishable particles. A permutation operation (π) over the particles affects nothing in (a) but changes the configuration in (b). (c) Three types of permutation symmetry under a permutation operation. Given the model function f, f is permutation invariant if the output does not change under the action of π (i.e., f(x) = f(πx)). f is permutation equivariant if the output permutes in the same way as the input (i.e., πf(x) = f(πx)). The remaining cases bear no relation to permutation symmetry, and the outcome is a composite of several influences from the input.
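The three cases in panel (c) can be checked numerically. The following sketch uses three hypothetical toy functions (not from the source publication): a sum (invariant), an elementwise square (equivariant), and a position-weighted product (neither).

```python
import numpy as np

def f_inv(x):
    # Summation discards ordering, so any permutation yields the same output.
    return np.sum(x)

def f_equi(x):
    # An elementwise map commutes with permutations of the input.
    return x ** 2

def f_none(x):
    # Position-dependent weights break the permutation symmetry.
    w = np.arange(1, len(x) + 1)
    return w * x

x = np.array([3.0, 1.0, 2.0])
pi = np.array([2, 0, 1])  # a permutation of the particle indices

print(f_inv(x) == f_inv(x[pi]))                      # True: f(x) = f(πx)
print(np.array_equal(f_equi(x)[pi], f_equi(x[pi])))  # True: πf(x) = f(πx)
print(np.array_equal(f_none(x)[pi], f_none(x[pi])))  # False: neither
```

Here `x[pi]` plays the role of πx, so the two printed comparisons test exactly the invariance and equivariance conditions stated in the caption.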


Source publication
Preprint
The combination of a neural network potential (NNP) with molecular simulations plays an important role in an efficient and thorough understanding of a molecular system's potential energy surface (PES). However, grasping the interplay between input features and their local contributions to the NNP is increasingly elusive due to heavy featurization. In this wo...

Contexts in source publication

Context 1
... of atomic labels i and j, i.e., equivariant under the permutation. Thus, the input should be treated as a set rather than an ordered sequence. However, simple neural networks such as the multilayer perceptron (MLP) mix and entangle the outputs from the previous layer as the input goes deeper into the layers; instead of n identical particles (see Fig. 1(a)), what the network encounters is a group of n colored particles (see Fig. 1(b)). In our case, the model should be equivariant (see Fig. 1(c)) under symmetry operations; otherwise, the complexity of the task increases by ∼ n! and makes learning almost impossible even for a handful of ...
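The point about an MLP entangling its inputs can be illustrated with a small sketch. A dense layer with arbitrary weights is generally not permutation equivariant, whereas a shared per-particle map followed by sum pooling (a Deep-Sets-style construction, assumed here for illustration only and not the architecture of the source preprint) is permutation invariant by design.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense MLP layer mixes particle channels with position-dependent
# weights, so permuting the input particles changes the output:
# the network effectively sees n "colored" particles.
W = rng.normal(size=(4, 4))
def mlp_layer(x):
    return np.tanh(W @ x)

# A shared per-particle map followed by a sum pool treats the input
# as a set: the pooled output cannot depend on particle ordering.
w, b = 0.7, 0.1
def shared_map(xi):
    return np.tanh(w * xi + b)

def invariant_readout(x):
    return sum(shared_map(xi) for xi in x)

x = rng.normal(size=4)
pi = np.array([1, 3, 0, 2])  # a permutation of the four particles

print(np.allclose(mlp_layer(x)[pi], mlp_layer(x[pi])))        # generally False
print(np.isclose(invariant_readout(x), invariant_readout(x[pi])))  # True
```

Baking the symmetry into the architecture this way avoids the ∼ n! blow-up mentioned above: the model never has to learn that all orderings of the same set are equivalent.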
