Figure - available from: Applied Intelligence
Four subjects from the Extended Yale B dataset. Panels a, b, c, and d each show six face images of one subject captured under different lighting conditions.

Source publication
Article
Full-text available
The kernel subspace clustering algorithm aims to tackle the nonlinear subspace model, and block diagonal representation subspace clustering has a promising capability for pursuing the k-block diagonal matrix. The low-rankness and adaptivity of kernel subspace clustering can therefore boost clustering performance, so an adaptive low...
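For context, the block diagonal pursuit mentioned here follows the block diagonal representation (BDR) line of work. A standard formulation (a generic sketch, not the ALKBDR objective itself, which is truncated above) is

```latex
% Standard BDR-style objective (generic sketch, not ALKBDR):
\min_{B}\; \tfrac{1}{2}\lVert X - XB \rVert_F^2 + \lambda \lVert B \rVert_{\overline{k}}
\quad \text{s.t.}\quad \mathrm{diag}(B) = 0,\; B \ge 0,\; B = B^{\top},
```

where the k-block diagonal regularizer $\lVert B \rVert_{\overline{k}}$ is the sum of the k smallest eigenvalues of the Laplacian of B; driving it to zero encourages the affinity to split into k connected blocks, one per cluster.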

Similar publications

Article
Full-text available
In this paper, strong convergence results for $\alpha$-inverse strongly monotone operators under new algorithms in the framework of Hilbert spaces are discussed. Our algorithms combine the inertial Mann forward-backward method with the CQ-shrinking projection method and a viscosity algorithm. Our methods lead to an acceleration of modifie...
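For orientation, the ingredients named here fit the following generic template for the monotone inclusion $0 \in (A + B)x$, with $A$ being $\alpha$-inverse strongly monotone and $B$ maximally monotone; the parameters $\theta_n$, $\beta_n$, $\lambda$ are assumptions for the sketch, not the paper's exact choices:

```latex
\begin{aligned}
w_n     &= x_n + \theta_n\,(x_n - x_{n-1}),                &&\text{inertial extrapolation},\\
y_n     &= J_{\lambda B}\!\bigl(w_n - \lambda A w_n\bigr), &&J_{\lambda B} = (I + \lambda B)^{-1},\ \lambda \in (0, 2\alpha),\\
x_{n+1} &= (1 - \beta_n)\,w_n + \beta_n\,y_n,              &&\beta_n \in (0, 1)\ \text{(Mann averaging)}.
\end{aligned}
```

CQ/shrinking projection and viscosity variants replace the final averaging step with a projection onto intersecting half-spaces or a contraction-driven convex combination, respectively.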

Citations

... To overcome the drawback that existing linear subspace clustering methods cannot deal with nonlinear data, some kernel self-expression methods [26][27][28][29][30][31] extended linear subspace clustering to nonlinear subspace clustering by employing the "kernel strategy", under which linear subspace clustering can be carried out in an implicit feature space. Two typical methods are kernelized SSC (KSSC) [30] and kernelized LRR (KLRR) [14]. ...
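The "kernel strategy" rests on the fact that the self-expression residual enters only through inner products, so it can be evaluated via the Gram matrix $K = \Phi(X)^{\top}\Phi(X)$ without ever forming the feature map $\Phi(X)$:

```latex
\tfrac{1}{2}\lVert \Phi(X) - \Phi(X)C \rVert_F^2
 \;=\; \tfrac{1}{2}\,\mathrm{tr}\!\bigl(K - 2KC + C^{\top} K C\bigr),
```

so KSSC pairs this data term with the sparsity penalty $\lVert C \rVert_1$, while KLRR pairs it with the nuclear norm $\lVert C \rVert_*$.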
Article
Full-text available
Subspace clustering methods based on the low-rank and sparse model are effective strategies for high-dimensional data clustering. However, most existing low-rank and sparse methods with self-expression can only deal effectively with linearly structured data and cannot handle data with complex nonlinear structure. Although kernel subspace clustering methods can efficiently deal with nonlinearly structured data, some similarity information between samples may be lost when the original data are reconstructed in the kernel space. Moreover, these kernel subspace clustering methods may not obtain an affinity matrix with an optimal block diagonal structure. In this paper, we propose a novel subspace clustering method termed kernel block diagonal representation subspace clustering with similarity preservation (KBDSP). KBDSP makes three contributions: (1) an affinity matrix with block diagonal structure is generated by introducing a block diagonal representation term; (2) a similarity-preserving regularizer is constructed and embedded into our model by minimizing the discrepancy between the inner products of the original data and the inner products of the reconstructed data in the kernel space, which better preserves the similarity information between the original data; (3) the KBDSP model is built by integrating the block diagonal representation term and the similarity-preserving regularizer into the kernel self-expression framework. The proposed model is optimized efficiently with the alternating direction method of multipliers (ADMM). Experimental results on nine datasets demonstrate the effectiveness of the proposed method.
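Reading the three contributions together, a plausible form of the KBDSP objective (the exact weights and constraints may differ from the paper) is

```latex
\min_{C}\; \tfrac{1}{2}\,\mathrm{tr}\!\bigl(K - 2KC + C^{\top} K C\bigr)
 \;+\; \lambda_1 \lVert C \rVert_{\overline{k}}
 \;+\; \lambda_2 \bigl\lVert K - C^{\top} K C \bigr\rVert_F^2,
```

where the first term is kernel self-expression, the second is the block diagonal representation term, and the third matches the inner products $K$ of the original data against the inner products $C^{\top}KC$ of the reconstructed data $\Phi(X)C$.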
... Subspace clustering algorithms can be roughly divided into five categories, namely algebraic methods [5][6][7], statistical methods [8,9], matrix factorization methods [10,11], iterative methods [12,13], and spectral clustering methods [14][15][16][17][18][19][20]. Due to the development of sparse representation, subspace clustering has attracted extensive attention in the past few years. ...
Article
Full-text available
Subspace clustering is significant and widely used in computer vision and pattern recognition. Traditional self-expressive subspace clustering methods usually separate similarity measurement and data clustering into two steps, so they fail to fully consider the interdependence between them. Moreover, since some graph parameters need to be defined in advance, it is difficult to select the optimal graph parameters, which leads to the loss of local smoothness. Furthermore, classic methods are optimal for independent and identically distributed Gaussian noise but sensitive to outliers. To solve the above problems effectively, one-step subspace clustering based on adaptive graph regularization and correntropy induced metric (OSCA) is proposed in this paper. Specifically, OSCA applies a subspace structured norm to measure the uncertainty of the two steps of similarity measurement and data clustering, integrating these two independent steps into a unified framework. Meanwhile, according to the local connectivity of the data, an adaptive optimal neighborhood is assigned to each data point to learn the coefficient matrix, so that both the global and local structure of the data are considered. In addition, correntropy, which is insensitive to outliers, is exploited to calculate the reconstruction error so as to handle complex noise better. Finally, the HQ-ADMM algorithm, an efficient iterative algorithm, is proposed to optimize the model. Experimental results on ten datasets of four types show that the proposed method can significantly improve clustering performance.
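For reference, correntropy and the correntropy induced metric (CIM) between $x, y \in \mathbb{R}^n$ are usually defined with a Gaussian kernel $g_\sigma(t) = \exp\bigl(-t^2/(2\sigma^2)\bigr)$:

```latex
V_\sigma(x, y) \;=\; \frac{1}{n}\sum_{i=1}^{n} g_\sigma(x_i - y_i),
\qquad
\mathrm{CIM}(x, y) \;=\; \sqrt{\,g_\sigma(0) - V_\sigma(x, y)\,}.
```

Because each term is bounded by $g_\sigma(0) = 1$, a single grossly corrupted entry can raise the loss only by a bounded amount, unlike squared error, which is what makes the metric insensitive to outliers.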
Article
Full-text available
Ongoing research on multi-view data is showing competitive behavior in the machine learning field. Multi-view clustering has gained widespread acceptance for managing multi-view data and improves clustering efficiency. The large dimensionality of data from various views has recently drawn a lot of interest from researchers. How to efficiently learn an appropriate lower-dimensional subspace that preserves the valuable information from the diverse views is a challenging and considerable issue. To address this issue, we propose a novel clustering approach for multi-view data based on low-rank representation. We account for the importance of each view by assigning it a weight control factor, and we combine the consensus representation with the degree of disagreement among the low-rank matrices. A single objective function unifies all factors. Furthermore, we give an efficient solution for updating the variables and optimizing the objective function through the augmented Lagrange multiplier strategy. Real-world datasets are utilized in this study to demonstrate the efficiency of the introduced technique, and it is compared with preceding algorithms to demonstrate its superiority.
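One plausible reading of this weighted consensus model (the symbols and weighting below are assumptions for illustration, not the paper's exact objective) is

```latex
\min_{\{Z_v, E_v\},\, Z^{*}}\;
 \sum_{v=1}^{V} w_v \Bigl( \lVert Z_v \rVert_{*} + \lambda \lVert E_v \rVert_{2,1} \Bigr)
 \;+\; \beta \sum_{v=1}^{V} \bigl\lVert Z_v - Z^{*} \bigr\rVert_F^2
\quad \text{s.t.}\quad X_v = X_v Z_v + E_v,
```

where $w_v$ is the weight control factor of view $v$, $Z^{*}$ is the consensus representation, and the Frobenius term penalizes the degree of disagreement among the low-rank matrices; ALM-style updates then alternate over $Z_v$, $E_v$, and $Z^{*}$.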
... Therefore, it usually uses Ky Fan's theorem to relax the rank-(n − k) constraint on the Laplacian and solve the relaxed problem. Liu et al. [107] presented an adaptive low-rank kernel block diagonal representation (ALKBDR) subspace clustering method, which can be written as ...
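(The ALKBDR objective itself is elided above.) The relaxation mentioned here is standard: for an affinity $B$ with Laplacian $L_B$, a k-block diagonal structure corresponds to $\mathrm{rank}(L_B) = n - k$, and Ky Fan's theorem gives the convex surrogate used by this family of methods:

```latex
\lVert B \rVert_{\overline{k}}
 \;=\; \sum_{i=n-k+1}^{n} \lambda_i(L_B)
 \;=\; \min_{W}\; \langle L_B, W \rangle
\quad \text{s.t.}\quad 0 \preceq W \preceq I,\ \mathrm{tr}(W) = k,
```

where $L_B = \mathrm{Diag}(B\mathbf{1}) - (B + B^{\top})/2$ and $\lambda_i(L_B)$ are its eigenvalues in decreasing order. The regularizer vanishes exactly when the k smallest eigenvalues are zero, i.e., when the graph of $B$ has at least k connected components.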
Article
Full-text available
With the rapid development of science and technology, high-dimensional data have been widely used in various fields. Due to their complex characteristics, high-dimensional data are usually distributed in the union of several low-dimensional subspaces. Over the past several decades, subspace clustering (SC) methods have been widely studied because they can recover the underlying subspaces of high-dimensional data and perform fast clustering with the help of the data's self-expressiveness property. SC methods aim to construct an affinity matrix from the self-representation coefficients of high-dimensional data and then obtain the clustering results using the spectral clustering method. The key is how to design a self-expressiveness model that can reveal the real subspace structure of the data. In this survey, we focus on the development of SC methods over the past two decades and present a new classification criterion that divides them into three categories based on the purpose of clustering, i.e., low-rank sparse SC, local structure preserving SC, and kernel SC. We further divide them into subcategories according to the strategy of constructing the representation coefficients. In addition, the applications of SC methods in face recognition, motion segmentation, handwritten digit recognition, and speech emotion recognition are introduced. Finally, we discuss several interesting and meaningful future research directions.
... Inspired by the fact that high-dimensional data may be well represented as the union of several low-dimensional subspaces, subspace clustering has become a very popular method. The task of subspace clustering [25] is to identify the latent subspaces and then assign data points to clusters, which naturally results in two steps: (1) representation learning and (2) spectral clustering. The first step aims to learn an affinity matrix with block diagonal structure from representations obtained with different regularizers, such as the l0-norm [3], the l1-norm [16], the trace Lasso [27], the nuclear norm [24], and the Laplace norm [9]. ...
... The basic idea of these methods is to transform the clustering problem into a graph segmentation one, and finally realize fast clustering in different subspaces. It mainly includes two steps: (1) learn the representation matrix with different regularizers such as sparseness [16,44], low-rankness [14,24,50], or block diagonal representation [18,25,26]; (2) construct a nonnegative, symmetric affinity matrix from the representation matrix of the first step and then perform spectral clustering on it to obtain the final clusters of all data points. The success of the spectral clustering algorithm usually depends on the quality of the learned similarity matrix. ...
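A minimal sketch of this two-step pipeline follows. It uses an assumed ridge-regularized (Frobenius norm) self-expression so that step 1 has a closed form; the function name and sklearn usage are illustrative, not any specific paper's method:

```python
# Two-step subspace clustering sketch: (1) learn a representation matrix C
# by ridge-regularised self-expression, (2) build a nonnegative symmetric
# affinity and run spectral clustering on it. Real methods swap in sparse,
# low-rank, or block diagonal regularisers for step 1.
import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster(X: np.ndarray, n_clusters: int, lam: float = 0.1):
    """X: (d, n) data matrix, one sample per column."""
    n = X.shape[1]
    G = X.T @ X                                   # (n, n) Gram matrix
    # Step 1: C = argmin ||X - XC||_F^2 + lam*||C||_F^2 = (G + lam*I)^{-1} G
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)                      # forbid self-representation
    # Step 2: nonnegative symmetric affinity, then spectral clustering.
    W = (np.abs(C) + np.abs(C.T)) / 2
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(W)
```

As the excerpt notes, the quality of W is what ultimately determines clustering accuracy, which is why so much work goes into the regularizer in step 1.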
Article
Full-text available
Graph learning methods have been widely used for multi-view clustering. However, such methods face the following challenges: (1) they usually perform a simple fusion of fixed similarity graph matrices, ignoring the essential structure of the graphs; (2) they are sensitive to noise and outliers because they usually learn the similarity matrix from the raw features. To solve these problems, we propose a novel multi-view subspace clustering method named Frobenius norm-regularized robust graph learning (RGL), which inherits desirable advantages (noise robustness and local information preservation) from subspace clustering and manifold learning. Specifically, RGL uses a Frobenius norm constraint and adjacency similarity learning to simultaneously explore the global information and the local similarity of the views. Furthermore, the l2,1 norm is imposed on the error matrix to remove the disturbance of noise and outliers. An effective iterative algorithm is designed to solve the RGL model by the alternating direction method of multipliers. Extensive experiments on nine benchmark databases show the clear advantage of the proposed method over fifteen state-of-the-art clustering methods.
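For reference, the l2,1 norm on the error matrix $E \in \mathbb{R}^{d \times n}$ is the sum of the l2 norms of its columns:

```latex
\lVert E \rVert_{2,1} \;=\; \sum_{j=1}^{n} \lVert E_{:,j} \rVert_2
 \;=\; \sum_{j=1}^{n} \sqrt{\sum_{i=1}^{d} E_{ij}^{2}},
```

which encourages whole columns of $E$ to vanish, so corruption is absorbed sample by sample rather than entry by entry; this is why it is the usual choice for removing outlier samples.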
... Recently, many works have been devoted to studying deep clustering algorithms, which extract the latent features of samples through unsupervised deep learning and then transform the samples into a low-dimensional space for clustering. Unlike general deep clustering algorithms, deep subspace clustering [6][7][8][9] focuses on acquiring similarity information across the whole sample set in the low-dimensional space rather than extracting features from a single sample. The concept of a self-expression layer was proposed in subspace clustering, which approximately replaces each original sample with a linear combination of the other samples in the subspace. ...
Article
Full-text available
Subspace clustering algorithms for image datasets apply a self-expression coefficient matrix to obtain the correlations between samples and then perform clustering. However, such algorithms proposed in recent years do not use the cluster labels in the subspace to guide the deep network and do not yield an end-to-end framework with trainable feature extraction and clustering. In this paper, we propose a self-supervised subspace clustering model with a deep end-to-end structure, called the Deep Subspace Image Clustering Network with Self-expression and Self-supervision (DSCNSS). The model embeds a self-supervised module into subspace clustering. During network training, alternating iterative optimization is applied to realize the mutual promotion of the self-supervised module and the subspace clustering module. Additionally, we design a new self-supervised loss function to further improve the overall performance of the model. To verify the performance of the proposed method, we conducted experiments on standard image datasets such as Extended Yale B, COIL20, COIL100, and ORL. The experimental results show that the proposed method outperforms existing traditional subspace clustering and deep clustering algorithms.
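A minimal sketch of the self-expression layer idea in PyTorch (an assumed architecture for illustration; not the DSCNSS code):

```python
# Self-expression layer sketch: latent codes Z (one row per sample) are
# reconstructed as C @ Z with a trainable n-by-n coefficient matrix C
# whose diagonal is masked out, so no sample may represent itself.
import torch
import torch.nn as nn

class SelfExpression(nn.Module):
    def __init__(self, n_samples: int):
        super().__init__()
        # Coefficient matrix C, initialised near zero.
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        c = self.C - torch.diag(torch.diag(self.C))  # zero the diagonal
        return c @ z                                 # (n, d) reconstruction

def self_expression_loss(z, z_rec, c, lam: float = 1.0):
    # Reconstruction error plus a Frobenius regulariser on C; deep subspace
    # methods then build the spectral affinity from |C| + |C|^T.
    return ((z - z_rec) ** 2).sum() + lam * (c ** 2).sum()
```

This layer sits between the encoder and decoder of an autoencoder, and the learned C plays the same role as the coefficient matrix in classical self-expression models.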
... Besides, a technique founded on a finite mixture of exponential power (MoEP) distributions was proposed in [19] to handle complex noise contamination in data. Furthermore, the recent work of [25] introduced an adaptive kernel into LRR to boost clustering accuracy. In addition, a coupled low-rank representation (CLR) strategy was presented in [6] to learn accurate clustering from data using the k-block diagonal regularizer [26]. ...
Article
Full-text available
Traditional clustering methods neglect data quality and perform clustering directly on the original data. Their performance can therefore easily deteriorate, since real-world data usually contain noisy samples in high-dimensional space. To resolve this problem, a new method is proposed that builds on the approach of low-rank representation. The proposed approach first learns a low-rank coefficient matrix from the data by exploiting the data's self-expressiveness property. Then, a regularization term is introduced to ensure that the representation coefficients of two samples that are similar in the original high-dimensional space remain close, thereby maintaining the samples' neighborhood structure in the low-dimensional space. As a result, the proposed method obtains a clustering structure directly through the low-rank coefficient matrix to guarantee optimal clustering performance. A wide range of experiments shows that the proposed method is superior to the compared state-of-the-art methods.
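This builds on the classic low-rank representation (LRR) program. One standard way to write the neighborhood-preserving idea described above is a graph Laplacian penalty (a sketch of the general pattern, not necessarily the paper's exact term):

```latex
\min_{Z, E}\; \lVert Z \rVert_{*} + \lambda \lVert E \rVert_{2,1}
 + \beta\, \mathrm{tr}\!\bigl(Z L Z^{\top}\bigr)
\quad \text{s.t.}\quad X = XZ + E,
```

where $L$ is the Laplacian of a k-nearest-neighbor graph on the original data, so $\mathrm{tr}(ZLZ^{\top}) = \tfrac{1}{2}\sum_{i,j} W_{ij}\lVert z_i - z_j \rVert_2^2$ pulls the coefficient vectors of samples that are similar in the original space toward each other.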
... In our future work, there are two possible directions to further improve the performance of the CRAA. The first is to generalize the CRAA to kernel space with the so-called kernel trick [39][40][41], since the CRAA does not make use of the nonlinear information hidden in the data. The second is to use the recently proposed tensor SVD technique [42] and generalize the CRAA to tensor space, since the CRAA does not consider the high-order correlations in the multi-view data. ...
Article
Full-text available
Recently, the popularity of multi-view clustering (MVC) has increased, and many MVC methods have been developed. However, the affinity matrix learned by an MVC method is only block diagonal if the data contain no noise or outliers, whereas real data always do. As a result, the affinity matrix is neither clean nor robust enough to be reliable for subspace clustering, which affects clustering performance. To compensate for these shortcomings, in this paper we propose a novel clean and robust affinity matrix (CRAA) learning method for MVC. Specifically, the global structure of the data is first obtained by constructing a representation space shared by all views. Next, borrowing the idea of robust principal component analysis (RPCA), the affinity matrix is divided into two parts, a cleaner and more robust affinity matrix and a noisy matrix. The two-step procedure is then integrated into a unified optimization framework in which a cleaner and more robust affinity matrix is learned. Finally, based on the augmented Lagrangian multiplier (ALM) method, an efficient optimization procedure for obtaining the CRAA is developed. The main idea of learning a cleaner and more robust affinity matrix can also be generalized to other MVC methods. Experimental results on eight benchmark datasets show that the clustering performance of the CRAA is better than that of some state-of-the-art clustering methods in terms of NMI, ACC, F-score, Recall, and ARI.
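The RPCA-style split at the heart of the method, in its generic form (the full CRAA objective wraps multi-view consensus terms around it), decomposes the learned representation into a clean low-rank part and a sparse noise part:

```latex
\min_{A, E}\; \lVert A \rVert_{*} + \lambda \lVert E \rVert_{1}
\quad \text{s.t.}\quad Z = A + E,
```

with $Z$ the representation shared by all views, $A$ the cleaner and more robust affinity that is actually passed to spectral clustering, and $E$ absorbing noise and outliers.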