Figure - available from: Cluster Computing
Wavelet decomposition of human facial image

Source publication
Article
Full-text available
In this paper, a face recognition fusion algorithm, namely WT-LLE-LSSVM, based on wavelet transform (WT) and local linear embedding (LLE) is proposed. Firstly, the face image is pre-processed and decomposed by the wavelet transform to obtain four components of the face image; then, the LLE algorithm is carried out to extract the features from the four com...
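The decomposition step described above can be sketched with a one-level 2D Haar transform. This is a minimal numpy-only illustration (the paper does not specify its wavelet basis): it produces the four subbands (approximation plus horizontal, vertical, and diagonal detail) that the LLE stage would then consume.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet decomposition into four subbands.

    Returns (LL, LH, HL, HH); LL is the approximation component, the
    other three are the detail components.
    """
    img = img.astype(float)
    # rows: average / difference of adjacent pixel pairs
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # columns: repeat the pairing on both row-filtered halves
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

face = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a face image
LL, LH, HL, HH = haar_dwt2(face)
print(LL.shape)  # (4, 4): each subband is half the size in each dimension
```

Each subband would then be flattened and passed to LLE for feature extraction before the weighted fusion step.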

Similar publications

Article
Full-text available
Background: Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about the patient's health. Objectives: The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. Material and methods: The design phase of the classification model comprises the following stag...
Article
Full-text available
Graph-based methods are developed to efficiently extract data information. In particular, these methods are adopted for high-dimensional data classification by exploiting information residing on weighted graphs. In this paper, we propose a new hyperspectral texture classifier based on graph-based wavelet transform. This recent graph transform allow...
Article
Full-text available
High blood pressure early screening remains a challenge due to the lack of symptoms associated with it. Accordingly, noninvasive methods based on photoplethysmography (PPG) or clinical data analysis and the training of machine learning techniques for hypertension detection have been proposed in the literature. Nevertheless, several challenges arise...
Article
Full-text available
Landslide displacement prediction is considered an essential component of early warning systems. Conventional forecast methods require enormous amounts of monitoring data, which limits their application. To conduct accurate displacement prediction with limited data, a novel method is proposed and applied by integrating three comput...
Article
Full-text available
Epilepsy is a common neurological disorder characterized by the recurrence of seizures, which can significantly impact the lives of patients. Electroencephalography (EEG) can provide important physiological information on human brain activity which can be useful to diagnose epilepsy. However, manual analysis and visual inspection of many EEG signal...

Citations

... Automatic image classification is an important task in many areas, such as face recognition [1,15,34], content-based image retrieval [2,3,29] and computer-aided medical image classification [4,28,30]. The performance of advanced automatic image classification models relies on good features extracted from images. ...
Article
Full-text available
Integrating deep learning with traditional machine learning methods is an intriguing research direction. For example, PCANet and LDANet adopt Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (LDA), respectively, to learn convolutional kernels. It is not reasonable to adopt LDA to learn filter kernels in every convolutional layer, because local features of images from different classes may be similar, such as background areas. Therefore, it is meaningful to adopt LDA to learn filter kernels only when all the patches carry information from the whole image. However, to our knowledge, no existing work studies how to combine PCA and LDA to learn convolutional kernels for the best performance. In this paper, we propose the convolutional coverage theory. Furthermore, we propose the PLDANet model, which adopts PCA and LDA in different convolutional layers based on the coverage theory. The experimental study has shown the effectiveness of the proposed PLDANet model.
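The PCA filter-learning step that PCANet-style models build on can be sketched as follows. This is a numpy-only toy with an assumed patch size and filter count, not the authors' implementation: it takes the top principal components of mean-removed image patches as convolutional kernels.

```python
import numpy as np

def pca_filters(images, patch=5, n_filters=4):
    """Learn convolutional kernels as the top principal components of
    mean-removed image patches (the PCANet idea, numpy-only sketch)."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())        # remove the patch mean
    X = np.asarray(patches)                          # (n_patches, patch*patch)
    # eigenvectors of the patch covariance, largest eigenvalues first
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_filters]]
    return top.T.reshape(n_filters, patch, patch)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((3, 12, 12))  # toy stand-ins for training images
filters = pca_filters(imgs)
print(filters.shape)  # (4, 5, 5): four 5x5 kernels
```

An LDA variant would replace the covariance eigendecomposition with a class-aware scatter criterion, which is exactly the choice the coverage theory above tries to settle per layer.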
... Zhou et al. [5] and Song et al. [6] designed target recognition algorithms based on an improved least squares SVM (LSSVM). ...
Article
Full-text available
Deep learning (DL) is a hot topic in machine vision. Large datasets are necessary for efficient image recognition; otherwise, overfitting easily occurs. However, most real-world samples are limited and unbalanced. To diminish the negative impact of small unbalanced samples on image recognition, the Deep Convolutional Generative Adversarial Network (DC-GAN) was improved to simulate the data distribution, and the improved network was used to generate a highly diverse, balanced dataset of fire images. Then, the number of output-layer nodes was fine-tuned for training on the target dataset by the layer-freezing method. Training on small unbalanced samples was realized using an exponentially decaying learning rate, L2 regularization, and the Adam optimization algorithm. Simulation results showed that the proposed algorithm converged faster by fixing the convolutional-layer parameters of the pre-trained model and fine-tuning the fully connected layer through transfer learning. Moreover, 99% of fire images were correctly recognized, without inducing the problem of small-sample overfitting. The proposed algorithm provides a desirable tool for outdoor fire recognition.
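The exponentially decaying learning rate mentioned above follows the standard schedule lr0 · decay_rate^(step / decay_steps). A minimal sketch with illustrative values, since the paper's exact hyperparameters are not given:

```python
def exp_decay_lr(step, lr0=1e-3, decay_rate=0.96, decay_steps=1000):
    """Exponentially decaying learning-rate schedule.

    lr0, decay_rate and decay_steps are illustrative defaults, not the
    values used in the paper.
    """
    return lr0 * decay_rate ** (step / decay_steps)

for s in (0, 1000, 5000):
    print(s, exp_decay_lr(s))  # the rate shrinks smoothly as training proceeds
```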
... The experimental results on the Yale Face, ORL, AR, and LF datasets [4] show a clear advantage of combining the firefly algorithm with the neural network, which significantly optimizes the feature selection and speeds up the convergence rate. Article [8] proposes an algorithm that combines three different techniques: wavelet transformation, local linear embedding, and support vector machines. The proposed algorithm breaks the face image into four components using the wavelet transformation, then uses local linear embedding to analyze the key features of the four components, after which a weighted fusion is performed for face recognition. ...
Preprint
Full-text available
Face recognition is widely used for biometric user authentication, identifying a user based on his or her facial features. The system is in high demand, as it is used by many businesses and is employed in many devices such as smartphones and surveillance cameras. However, one frequent problem still observed in this user-verification method is its accuracy rate. Numerous approaches and algorithms have been tried to improve this flaw of the system. This research develops one such algorithm that combines two different approaches. Using concepts from linear algebra and computational geometry, the research examines the integration of Principal Component Analysis with Delaunay Triangulation; the method triangulates a set of face landmark points and obtains eigenfaces of the provided images. It compares the algorithm with traditional PCA and discusses the inclusion of different face landmark points to deliver an effective recognition rate.
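The eigenface half of the PCA + Delaunay method can be sketched as plain PCA on flattened face images. This is a numpy-only toy on random data; the landmark triangulation is a separate geometric step not shown here.

```python
import numpy as np

def eigenfaces(faces, k=2):
    """Compute the top-k eigenfaces of a stack of face images.

    Sketch of the PCA step only: images are flattened, centered on the
    mean face, and the principal axes are taken from an SVD.
    """
    X = faces.reshape(len(faces), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # rows of Vt are the principal axes of the data, i.e. the eigenfaces
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

rng = np.random.default_rng(1)
faces = rng.standard_normal((10, 8, 8))             # ten toy 8x8 "faces"
mean, basis = eigenfaces(faces, k=2)
weights = (faces.reshape(10, -1) - mean) @ basis.T  # per-face PCA features
print(weights.shape)  # (10, 2)
```

Recognition then compares these low-dimensional weight vectors, optionally augmented with geometric features derived from the triangulated landmarks.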
... For example, although STFT can successfully separate the source information in underdetermined blind-source separation, the window function used obscures the time-frequency (TF) representation and is confined by the Heisenberg uncertainty principle, which further restricts the sparsity of a signal [26][27][28]. When a wavelet transform performs underdetermined blind-source separation, different wavelet bases and numbers of decomposition layers change the detail and approximation signals, thus influencing the separation of signals and making it difficult to select the wavelet base and the number of decomposition layers [29][30][31][32][33]. Empirical mode decomposition can cause modal aliasing when dealing with underdetermined blind-source separation, making the separation of source signals difficult [34,35]. ...
Article
Full-text available
To reduce the consumption of receiving devices, the devices at the receiving end undergo low-element treatment (the number of devices at the receiving end is less than that at the transmitting end). The underdetermined blind-source separation system is a classic low-element model at the receiving end. Blind signal extraction in an underdetermined system remains an ill-posed problem, as it is difficult to extract all the source signals. To use fewer devices at the receiving end without information loss, this paper proposes an image restoration method for underdetermined blind-source separation based on an out-of-order elimination algorithm. Firstly, a chaotic system is used to perform hidden transmission of the source signals, so that the source signals can hardly be observed and confidentiality is guaranteed. Secondly, empirical mode decomposition is used to decompose and complete the missing observed signals, and the fast independent component analysis (FastICA) algorithm is used to obtain part of the source signals. Finally, all the source signals are successfully separated using the out-of-order elimination algorithm and the FastICA algorithm. The results show that the performance of the underdetermined blind-separation algorithm is related to the configuration of the transceiver antennas. With a 3 × 4 antenna configuration, the proposed algorithm is superior to the comparison algorithm in signal recovery, and its separation performance is better for a lower degree of missing array elements. The end result is that the algorithms discussed in this paper can effectively and completely extract all the source signals.
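The FastICA stage of the pipeline can be sketched for the determined case: a numpy-only symmetric FastICA with a tanh nonlinearity on a toy two-source mixture. The paper's EMD completion and out-of-order elimination steps are not shown, and the toy sources and mixing matrix below are illustrative assumptions.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with a tanh nonlinearity (numpy-only sketch).

    X has shape (n_sources, n_samples) and is assumed to be a
    determined mixture of independent sources.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # whiten: decorrelate the observations and rescale to unit variance
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X
    n = X.shape[0]
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)
        gp = 1.0 - g ** 2
        # fixed-point update: E[g(Wx)x^T] - E[g'(Wx)] W
        W_new = g @ Xw.T / Xw.shape[1] - np.diag(gp.mean(axis=1)) @ W
        # symmetric decorrelation keeps the rows orthonormal
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Xw

t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.cos(3 * t))])  # sine + square sources
A = np.array([[1.0, 0.5], [0.7, 1.2]])              # toy mixing matrix
recovered = fastica(A @ S)
print(recovered.shape)  # (2, 2000)
```

The recovered components match the sources up to permutation and sign, which is the usual ICA ambiguity the out-of-order elimination step has to resolve.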
Article
Face image recognition technology plays an important role in the biometric recognition field. Among existing face recognition methods, those based on subspace learning have attracted wide attention due to their favorable properties, such as computational convenience and effectiveness for identification. However, existing subspace-learning methods fail when sample-specific corruptions and outliers are present. To solve this problem, we build a novel model for face image recognition named truncated nuclear norm on low rank discriminant embedding (TNNL). TNNL can mitigate the negative impact of noise and enhance the discriminability of features. Furthermore, we propose two iterative algorithms to extract robust low-dimensional image features. To verify the effectiveness and robustness of TNNL, we conduct experiments on two benchmark face image databases for low-dimensional feature extraction. The experimental results show that TNNL outperforms the existing methods.
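The truncated nuclear norm that gives TNNL its name is the sum of all but the r largest singular values of a matrix; minimizing it suppresses the noise-dominated tail while leaving the dominant top-r subspace untouched. A minimal numpy sketch of the quantity itself (the full TNNL optimization is not shown):

```python
import numpy as np

def truncated_nuclear_norm(M, r):
    """Sum of all but the r largest singular values of M.

    With r = 0 this is the ordinary nuclear norm; larger r excludes
    more of the dominant subspace from the penalty.
    """
    s = np.linalg.svd(M, compute_uv=False)  # singular values, descending
    return s[r:].sum()

M = np.diag([5.0, 3.0, 1.0, 0.5])  # toy matrix with known singular values
print(truncated_nuclear_norm(M, 2))  # 1.5 = 1.0 + 0.5
```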