Fig 1 - uploaded by Qi Cao
(a) Gaussian random noise. (b) Diagonal structure noise. (c) Simulated illumination noise.

Source publication
Preprint
Full-text available
In this paper, we develop the weighted error entropy-regularized tensor learning method for multi-view subspace clustering (WETMSC), which integrates the noise disturbance removal and subspace structure discovery into one unified framework. Unlike most existing methods which focus only on the affinity matrix learning for the subspace discovery by d...

Contexts in source publication

Context 1
... that the noise is independent and identically distributed (i.i.d.), i.e., they treat the entries of the error term independently and ignore their structural information. According to ITL, noise drawn from different distributions should have different entropy, so the i.i.d. assumption is not sufficient to fully describe the noise behavior. As shown in Fig. 1, three images are contaminated with Gaussian noise, yet the spatial distributions of the noise are completely different. The traditional MSE or CIM describes the noise behavior under the i.i.d. assumption and therefore assigns the same value to all three. Clearly, the noise in Fig. 1(a) has higher randomness than that in Fig. 1(b) and (c), so its entropy is larger. In addition, Fig. 1(b) and (c) contain non-i.i.d. structured noise; hence it is unrealistic to describe them accurately with a single noise-distribution criterion. In particular, Fig. 1(c) simulates varying illumination in face images, a typical and common form of noise. The weak ability of existing methods to describe complex noise in real scenes therefore needs to be improved. To address these challenges, we utilize the weighted error entropy with the independent and piecewise identically distributed (i.p.i.d.) model ...
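The entropy contrast described in this context can be illustrated with a quick histogram-based Shannon entropy estimate. This is only a rough sketch, not the paper's weighted error entropy; the image size, bin count, and diagonal band width are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_entropy(x, bins=64):
    """Shannon entropy (in nats) of a histogram estimate over the entries of x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

n = 64
# (a) i.i.d. Gaussian noise spread over the whole image: high randomness.
iid = rng.normal(0.0, 1.0, (n, n))

# (b) diagonal structured noise: Gaussian noise confined to a band around
# the main diagonal, zeros elsewhere, so one histogram bin dominates.
diag = np.zeros((n, n))
band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 3
diag[band] = rng.normal(0.0, 1.0, band.sum())

print(empirical_entropy(iid), empirical_entropy(diag))
```

Because the structured image is mostly zeros, its entropy estimate is far smaller than that of the i.i.d. image, matching the intuition that Fig. 1(a) is "more random" than Fig. 1(b).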
Context 2
... To demonstrate the robustness of the proposed WETMSC method, we add 5%, 10%, 15%, and 20% Gamma noise (GM), Gaussian noise (GS) (Fig. 1(a)), and Poisson noise (PS) to the UCI-3views dataset to construct 12 noisy datasets. Besides, we also add 5% and 10% mixed noise (GM, GS, PS) to the MSCR-V1 dataset. In addition, we add 50% and 100% simulated illumination noise (SI) with different mean values, diagonal noise, and block noise to the MSCR-V1 dataset. Specifically, for SI noise (Fig. 1(c)), each disjoint d^(v) × 10 region on each view of MSCR-V1 is corrupted with i.i.d. Gaussian noise whose mean and standard deviation increase gradually from left to right at rates of 0.05 and 0.01, respectively. For diagonal noise (Fig. 1(b)), each view of MSCR-V1 is corrupted by Gaussian noise with mean 0 and standard deviation 1 along the diagonal region. For block noise, each view of MSCR-V1 is corrupted by outliers generated from the uniform distribution on the interval [0, 15]. Tables ...
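The corruption protocol above can be sketched as follows. The view dimensions, the starting mean/std of the SI blocks, the half-width of the diagonal band, and the block location are illustrative assumptions; the excerpt only fixes the growth rates, the diagonal noise parameters, and the uniform interval:

```python
import numpy as np

rng = np.random.default_rng(0)
d_v, n = 60, 210            # assumed sizes of one view's feature matrix
X = rng.random((d_v, n))    # stand-in for one view of MSCR-V1

# Simulated illumination (SI) noise, Fig. 1(c): split the view into disjoint
# d_v x 10 column blocks; block k gets i.i.d. Gaussian noise whose mean and
# std grow left to right at rates 0.05 and 0.01 (start values assumed).
X_si = X.copy()
for k, start in enumerate(range(0, n, 10)):
    shape = X_si[:, start:start + 10].shape
    X_si[:, start:start + 10] += rng.normal(0.05 * (k + 1), 0.01 * (k + 1), shape)

# Diagonal noise, Fig. 1(b): N(0, 1) Gaussian noise along the diagonal region
# (band half-width of 3 is an assumption; the text does not specify it).
X_diag = X.copy()
for i in range(d_v):
    j = int(i * n / d_v)
    lo, hi = max(0, j - 3), min(n, j + 4)
    X_diag[i, lo:hi] += rng.normal(0.0, 1.0, hi - lo)

# Block noise: outliers drawn uniformly from [0, 15] on a contiguous block
# (a 20 x 20 corner block is an assumed placement).
X_blk = X.copy()
X_blk[:20, :20] = rng.uniform(0.0, 15.0, (20, 20))
```

Each corrupted copy can then replace the clean view when building the noisy benchmark datasets.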

Citations

... For ECMSC, three parameters are tuned from interval [0. For other compared methods, we set the parameters as in [46]. ...
Article
Full-text available
Multi-view subspace clustering aims to integrate the complementary information contained in different views to facilitate data representation. Currently, low-rank representation (LRR) serves as a benchmark method. However, we observe that these LRR-based methods would suffer from two issues: limited clustering performance and high computational cost, since (1) they usually adopt the nuclear norm with biased estimation to explore the low-rank structures; (2) the singular value decomposition of large-scale matrices is inevitably involved. Moreover, LRR may not achieve low-rank properties in both intra-views and inter-views simultaneously. To address the above issues, this paper proposes the Bi-nuclear tensor Schatten-p norm minimization for multi-view subspace clustering (BTMSC). Specifically, BTMSC constructs a third-order tensor from the view dimension to explore the high-order correlation and the subspace structures of multi-view features. The Bi-Nuclear Quasi-Norm (BiN) factorization form of the Schatten-p norm is utilized to factorize the third-order tensor as the product of two small-scale third-order tensors, which not only captures the low-rank property of the third-order tensor but also improves the computational efficiency. Finally, an efficient alternating optimization algorithm is designed to solve the BTMSC model. Extensive experiments with ten datasets of texts and images illustrate the performance superiority of the proposed BTMSC method over state-of-the-art methods.
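The factorization idea in this abstract, stacking view-wise representations into a third-order tensor and replacing expensive full SVDs with two small factors, can be sketched generically. This is not the actual BTMSC optimization; the sample count, view count, and target rank r are assumed, and the per-slice factors are initialized here from a truncated SVD purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, V, r = 200, 3, 10   # samples, views, assumed target rank

# Stack V view-wise self-representation matrices (n x n each) into a
# third-order tensor along the view dimension.
Z = np.stack([rng.random((n, n)) for _ in range(V)], axis=2)   # n x n x V

# Factorized surrogate: each frontal slice Z[:, :, v] ~ A_v @ B_v with thin
# factors A_v (n x r) and B_v (r x n), so subsequent updates operate on
# small factors instead of decomposing the full n x n slice each iteration.
A = np.empty((n, r, V))
B = np.empty((r, n, V))
for v in range(V):
    U, s, Vt = np.linalg.svd(Z[:, :, v], full_matrices=False)
    A[:, :, v] = U[:, :r] * np.sqrt(s[:r])
    B[:, :, v] = np.sqrt(s[:r])[:, None] * Vt[:r, :]

# Reconstruction error of one slice under the rank-r factorization.
err = np.linalg.norm(Z[:, :, 0] - A[:, :, 0] @ B[:, :, 0])
```

The cost argument is that once the factors exist, updating A and B touches only n × r arrays, avoiding repeated large-scale SVDs, which is the efficiency benefit the abstract attributes to the BiN factorization.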