Figure - available from: Remote Sensing
Indian Pines image: (a) three-band color composite, (b) reference data, (c) class names. Classification maps (Indian Pines) obtained by (d) the SVM method (48.63%), (e) the extended morphological profiles (EMP) method (61.42%), (f) the edge-preserving filtering (EPF) method (63.76%), (g) the image fusion and recursive filtering (IFRF) method (68.97%), (h) the joint sparse representation (JSR) method (68.57%), (i) the superpixel-based classification via multiple kernels (SCMK) method (59.32%), (j) the MASR method (69.62%), and (k) the multi-scale feature extraction (MSFE) method (73.92%).

Source publication
Article
Full-text available
Spectral features cannot effectively reflect the differences among the ground objects and distinguish their boundaries in hyperspectral image (HSI) classification. Multi-scale feature extraction can solve this problem and improve the accuracy of HSI classification. The Gaussian pyramid can effectively decompose HSI into multi-scale structures, and...
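The Gaussian pyramid decomposition this abstract refers to can be sketched in plain NumPy. This is only an illustrative sketch (the function names, parameter values, and the synthetic cube are ours, not the paper's): each level blurs every band with a separable Gaussian and then downsamples by a factor of two.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur2d(band, sigma=1.0):
    """Separable Gaussian blur of a single band (reflect padding)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    # blur along rows, then along columns
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="reflect"), k, mode="valid"),
        1, band)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="reflect"), k, mode="valid"),
        0, tmp)

def gaussian_pyramid(cube, levels=3, sigma=1.0):
    """Decompose an HSI cube (H, W, B) into a list of progressively
    smoothed and downsampled cubes -- one entry per scale."""
    pyramid = [cube]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        blurred = np.stack([blur2d(prev[..., b], sigma)
                            for b in range(prev.shape[-1])], axis=-1)
        pyramid.append(blurred[::2, ::2, :])   # keep every other pixel
    return pyramid

hsi = np.random.rand(64, 64, 10)               # synthetic stand-in for an HSI
pyr = gaussian_pyramid(hsi, levels=3)
print([p.shape for p in pyr])                  # → [(64, 64, 10), (32, 32, 10), (16, 16, 10)]
```

Features extracted at each level can then be upsampled back to the original grid and fused, which is the general pattern behind multi-scale HSI classification frameworks.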

Citations

... To overcome the constraint of complexity, feature extraction is considered a crucial step in classifying hyperspectral images [8]. Numerous studies have revealed that multi-feature classification can significantly enhance classification efficiency [9]. A multi-feature extraction method centered on the Gaussian pyramid was presented, because features at various scales provide complementary but related information for categorization [10]. An integrated methodology that merges spectral and spatial data at various scales was postulated, and two techniques for building integrated concepts were developed. ...
Article
Today, biometrics is most often employed for a variety of mundane activities, such as mobile authentication and border crossing. Biometrics is subject to strict accuracy and efficiency standards in high-security settings. Multi-biometric techniques that combine data from many biometric elements have been shown to reduce error rates and ameliorate the inherent flaws of separate biometric systems. Thus, to reduce the false acceptance rate in the multi-biometric system, the proposed approach uses a weighted multi-feature extraction technique. In this multi-feature extraction process, the image is initially segmented into multiple parts, and each part is then treated separately for noise reduction and cancellation. In particular, the HSI (hyperspectral image) is decomposed into multiple Gaussian pyramid levels to extract multi-scale features, and noise is eliminated with an averaging filter. The extracted features are then given weights and grouped into a cluster for each extracted feature. This method reduces the error rate and improves the efficiency of the system.
... In multiscale models, the interaction of discriminant information from different scales or areas during feature or decision fusion yields more discriminant information, achieving a greater-than-the-sum-of-its-parts effect. We categorize multiscale information interaction methods into two types: 1) equally treated approaches (e.g., CDCNN [30], DRCNN [17]), where multiscale features or decisions are treated equally and are concatenated and projected into the fused space, added, or pooled; and 2) weighting approaches (e.g., the multiscale feature extraction model (MSFE) [37]), where the importance of each scale is learnable and multiscale features or decisions are weight-pooled using different trainable weights for different scales. ...
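The two interaction styles above can be contrasted in a few lines. Below is a hedged NumPy sketch of the "weighting" variant only, with fixed logits standing in for the trainable per-scale parameters (all names and values are ours, not from the cited models):

```python
import numpy as np

def softmax(w):
    """Turn raw scale logits into positive weights that sum to 1."""
    e = np.exp(w - w.max())
    return e / e.sum()

def weighted_scale_fusion(features, logits):
    """Weighting approach: pool per-scale feature maps (already resized to a
    common grid) with per-scale weights. In a real model the logits would be
    trainable; here they are fixed for illustration."""
    w = softmax(np.asarray(logits, dtype=float))
    return sum(wi * f for wi, f in zip(w, features))

# three scales of feature maps, already upsampled to a common 16x16x8 grid
feats = [np.random.rand(16, 16, 8) for _ in range(3)]
fused = weighted_scale_fusion(feats, logits=[0.2, 1.0, -0.5])
print(fused.shape)   # → (16, 16, 8)
```

With equal logits this reduces to plain average pooling, i.e. the "equally treated" case; the learnable weights are what let a weighting model emphasize the most informative scale.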
Article
In hyperspectral image (HSI) classification, objects corresponding to pixels of different classes exhibit varying size characteristics, which causes a challenge for effective pixelwise feature extraction and classification. In this article, we propose a novel multiscale model, called multiarea target attention (MATA). The proposed MATA uses an architecture that includes a shared feature extractor (FE) and classifier to capture multiscale spectral–spatial information effectively and efficiently. The FE uses a multiscale target attention module (MSTAM) to extract spectral–spatial information from target pixels and their similar pixels across multiscale areas, while $L_{2}$-normalization is used to address discrepancies between features of different scales. The classifier adopts a classwise decision weighting strategy to account for the varying sizes of different classes and the different contributions of semantic features at each scale to each class. Experimental results on five public HSI datasets demonstrate that the proposed MATA outperforms existing state-of-the-art single- and multiscale models, confirming its effectiveness and efficiency in HSI classification. Code is available at https://github.com/huanliu233/MATA.
... A multi-scale strategy is used to observe continuous images at different scales, which helps in understanding the image content (15,16). By applying a Gaussian kernel for multi-scale changes (17), the multi-resolution strategy proposed in this study can enhance the robustness of the algorithm and improve registration efficiency. ...
Article
Objective: Today, cerebrovascular disease has become an important health hazard. It is therefore necessary to perform a more accurate and less time-consuming registration of preoperative three-dimensional (3D) images with intraoperative two-dimensional (2D) projection images, which is very important for conducting cerebrovascular disease interventions. The 2D–3D registration method proposed in this study is designed to solve the problems of long registration time and large registration errors between 3D computed tomography angiography (CTA) images and 2D digital subtraction angiography (DSA) images. Methods: To make a more comprehensive and proactive diagnosis, treatment, and surgery plan for patients with cerebrovascular diseases, we propose a weighted similarity measure function, the normalized mutual information-gradient difference (NMG), which can evaluate the 2D–3D registration results. Then, using a multi-resolution fusion optimization strategy, the multi-resolution fused regular step gradient descent optimization (MR-RSGD) method is presented to attain the optimal registration result during the optimization process. Results: In this study, we adopt two datasets of brain vessels for validation and obtain similarity metric values of 0.0037 and 0.0003, respectively. Using the registration method proposed in this study, the experiment took 56.55 s and 50.8070 s, respectively, for the two sets of data. The results show that the registration methods proposed in this study are both better than the Normalized Mutual (NM) and Normalized Mutual Information (NMI) measures. Conclusion: The experimental results show that, in the 2D–3D registration process, a similarity metric function containing both image gray information and spatial information evaluates the registration results more accurately. To improve the efficiency of the registration process, an algorithm with a gradient optimization strategy can be chosen. Our method has great potential to be applied in practical interventional treatment for intuitive 3D navigation.
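The NMG measure itself is specific to that paper, but its mutual-information core is standard. A minimal NumPy sketch of (normalized) mutual information estimated from a joint gray-level histogram follows; function names and the bin count are our choices, and the gradient-difference weighting of NMG is not included:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equal-sized images, estimated
    from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B): 2.0 for identical images,
    approaching 1.0 for independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return float((h(pxy.sum(axis=1)) + h(pxy.sum(axis=0))) / h(pxy))

img = np.random.rand(128, 128)
print(round(normalized_mutual_information(img, img), 4))   # → 2.0 (image vs. itself)
print(normalized_mutual_information(img, np.random.rand(128, 128)))
```

An intensity-based 2D–3D registration loop would maximize such a measure between the DSA image and digitally reconstructed radiographs of the CTA volume, e.g. with the gradient-descent optimizer the abstract describes.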
... Tu et al. [47] noted that rich features could effectively reflect the differences between land types and distinguish their boundaries to improve the accuracy of image classification. This study ensured the diversity of features when compared to the extraction of land types using spectral and phenological information alone. ...
Article
The quick and precise assessment of rice distribution by remote sensing technology is important for agricultural development. However, mountain rice is limited by the complex terrain, and its distribution is fragmented. Therefore, it is necessary to fully use the abundant spatial, temporal, and spectral information of remote sensing imagery. This study extracted 22 classification features from Sentinel-2 imagery (spectral features, texture features, terrain features, and a custom spectral-spatial feature). A feature selection method based on the optimal extraction period of features (OPFSM) was constructed, and a multitemporal feature combination (MC) was generated based on the separability of different vegetation types in different periods. Finally, the extraction accuracy of MC for mountain rice was explored using Random Forest (RF), CatBoost, and ExtraTrees (ET) machine learning algorithms. The results show that MC improved the overall accuracy (OA) by 3–6% when compared to the feature combinations in each rice growth stage, and by 7–14% when compared to the original images. MC based on the ET classifier (MC-ET) performed the best for rice extraction, with the OA of 86%, Kappa coefficient of 0.81, and F1 score of 0.95 for rice. The study demonstrated that OPFSM could be used as a reference for selecting multitemporal features, and the MC-ET classification scheme has high application potential for mountain rice extraction.
... The CNNs used the end-to-end learning method, where each kernel convolution focuses only on a sub-region (feature information), which makes the image lose its global context and leads to poor segmentation performance in many regions. To overcome these disadvantages, some studies tried new solutions [16], including the self-attention mechanism [17], image pyramids [18], and multiscale fusion [19]. However, all of these studies have some limitations on medical images, in particular in extracting global context features. ...
... Initially, the HSI classification was based only on information provided by the spectral dimension, but those spectral features are not enough to represent and distinguish the different objects or elements present in a hyperspectral image (HSI). In [28], a multi-scale feature extraction classification framework is introduced, capable of improving the accuracy of HSI classification due to the decomposition of the HSI into multi-scale structures through a Gaussian pyramid model, with the advantage of preserving detailed structures at the edge regions of the image. ...
... Both the [28] and [29] approaches address the HSI classification problem through an altered representation of the HSI data cube, incorporating a stage dedicated to feature extraction followed by a stage for feature classification. Recent advances in deep learning challenge this approach by providing processing architectures that no longer require handcrafted feature extraction methods, relying instead on the training step to extract the information that structures the weights of a neural network. ...
Article
Obtaining relevant classification results for hyperspectral images depends on the quality of the data and on the selection of the samples and descriptors for the training and testing phases. We propose a hyperspectral image classification machine learning framework based on image processing techniques for denoising and enhancement, and a parallel approach for the feature extraction step. This parallel approach extracts features by employing the wavelet transform in the spectral domain, and by using Local Binary Patterns to capture the texture-like information linked to the geometry of the scene in the spatial domain. The spectral and spatial features are concatenated for a Support Vector Machine-based supervised classifier. For the experimental validation, we propose a controlled sampling approach that ensures the independence of the samples selected for the training and testing data sets, offering unbiased performance results. We argue that a random selection applied to the hyperspectral dataset to separate the samples for the learning and testing phases can cause overlap between the two datasets, leading to biased classification results. The proposed approach, with the controlled sampling strategy, tested on three public datasets (Indian Pines, Salinas, and Pavia University), provides good performance results.
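The Local Binary Pattern half of that parallel pipeline is straightforward to sketch. Below is a hedged NumPy version of the basic 3x3 LBP and its histogram descriptor; the paper may well use a different radius or a uniform-pattern variant, and the function names are ours:

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 local binary pattern: each interior pixel gets an 8-bit
    code by thresholding its eight neighbours against the centre value."""
    c = img[1:-1, 1:-1]
    # clockwise neighbours starting at the top-left
    shifts = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
              img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
              img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(shifts):
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalized LBP code histogram -- a texture descriptor for one band."""
    codes = lbp_8neighbour(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

band = np.random.rand(64, 64)
feat = lbp_histogram(band)
print(feat.shape)        # → (256,)
```

In the framework described above, such a spatial descriptor would be concatenated with spectral-domain wavelet features before being fed to the SVM.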
... Feature extraction is also a vital step for all computerized systems. Feature merging and selection procedures have received much attention over the last couple of years in computer vision (CV), and various associated techniques have been presented, improving system recognition accuracy [15][16][17]. The synthesis of several features gives improved performance compared to a single feature type. ...
Article
The characterization of aircraft in remote sensing satellite imagery has many military and civil applications. For civil purposes, such as disaster- and emergency-related aircraft searches, airport scrutiny and aircraft identification from satellite images are very important. This study presents an automated methodology based on handcrafted and deep convolutional neural network (DCNN) features. The presented aircraft classification technique consists of the following steps. The handcrafted features obtained from a local binary pattern (LBP) and the DCNN features are fused by feature fusion techniques; the DCNN features are extracted from AlexNet and Inception V3. We then adopt a feature selection technique called principal component analysis (PCA), which removes redundant and irrelevant information and improves classification performance. Well-known supervised methodologies then categorize these selected features, and we chose the best classifier based on its highest accuracy. The proposed technique is executed on the multi-type aircraft remote sensing images (MTARSI) dataset, and the overall highest accuracy achieved by our proposed method is 96.8%, with the linear support vector machine (SVM) classifier.
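PCA as used here, reducing fused feature vectors to their most informative directions, can be sketched with an SVD. This is a generic illustration under our own naming and dimensions, not the authors' exact configuration:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors (rows of X) onto the top principal
    components, i.e. the directions of largest variance."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # economy SVD of the centred data: rows of Vt are the components
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    explained = (S[:n_components] ** 2) / (S ** 2).sum()
    return Xc @ components.T, components, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))       # 200 fused feature vectors, 50-D
Z, comps, ratio = pca_reduce(X, n_components=10)
print(Z.shape, comps.shape)          # → (200, 10) (10, 50)
```

The `explained` ratio is what one would inspect to decide how many components retain enough of the fused-feature variance before handing `Z` to a classifier.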
... For FA, the conditional distribution of the observed variables given the latent variable has a diagonal rather than an isotropic covariance matrix. In addition to these classical unsupervised FE methods, there are several novel unsupervised FE methods in the literature, such as orthogonal total variation component analysis (OTVCA) [54], edge-preserving filtering [125], Gaussian pyramid based multi-scale feature extraction (MSFE) [126], and sparse and smooth low-rank analysis (SSLRA) [47]. For supervised FE methods, the new features should contain the most discriminative information based on the labeled samples. ...
... Accurate feature extraction [24,25] can better retain target information and eliminate interference information. By observing the resulting image after the background suppression, it can be found that the small target is closer to the "ellipse shape" and has strong texture characteristics in both the horizontal and vertical directions, and the Gabor filter just has the ability to extract the "ellipse shape" characteristics. ...
Article
How to accurately detect small targets in a complex maritime environment has been a bottleneck problem. Strong wind-wave backlight conditions (SWWBC) are the most common situation in distress-target detection. To solve this problem, the main contribution of this paper is a small target detection method suited to SWWBC. First, to suppress the gray value of the background, it is observed that minimum points with the lowest gray values tend to gather in the interior of a small target, and that, as the distance from such an extreme point increases, the gray value of the pixels increases to a similar extent in all directions. Therefore, an inverse Gaussian difference (IGD) preprocessing method, matched to the distribution of the target pixel values, is proposed to suppress the uniform sea-wave and sky background intensity. Secondly, since a small target tends toward an "ellipse shape" in both the horizontal and vertical directions, a multi-scale, multi-directional Gabor filter is applied to filter out interference without this shape. Combined with an inter-scale difference (IsD) operation and an iterative normalization operator applied to the results of the same direction at different scales, this further suppresses noise interference, highlights the significance of the target, and fuses the processing results to enrich the target information. Then, according to the different texture feature distributions of target and noise in the multi-scale feature fusion results, a cross-correlation (CC) algorithm is proposed to eliminate noise. Finally, based on the dispersion of the number of extreme points and the intensity significance of the small target compared with sea-wave and sky noise, a new peak-significance re-measurement method is proposed to highlight the target intensity, combined with a binarization method to achieve accurate target segmentation. To evaluate the performance of the proposed method, it is compared with current state-of-the-art maritime target detection technologies. Experimental results on multiple image-sequence sets confirm that the proposed method has higher accuracy, a lower false alarm rate, lower complexity, and higher stability.
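The multi-scale, multi-directional Gabor bank at the heart of that pipeline can be illustrated directly. The sketch below builds real-valued Gabor kernels in NumPy; the scales, orientations, and wavelength are placeholder values of our choosing, not the paper's settings:

```python
import numpy as np

def gabor_kernel(sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulated
    by a cosine wave of wavelength `lam`, oriented at angle `theta`."""
    half = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

def gabor_bank(sigmas=(2.0, 4.0), n_orientations=4, lam=8.0):
    """Multi-scale, multi-directional bank: one kernel per
    (scale, orientation) pair."""
    return [gabor_kernel(s, np.pi * k / n_orientations, lam)
            for s in sigmas for k in range(n_orientations)]

bank = gabor_bank()
print(len(bank), bank[0].shape)      # → 8 (13, 13)
```

Convolving the preprocessed image with each kernel and fusing the responses across scales is the step the abstract pairs with the inter-scale difference operation; an "ellipse-shaped" target responds strongly in both the horizontal and vertical orientations.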
... The spatial features extracted in the above research are generally single-scale features. Because it is very difficult for a single-scale feature to accurately express the differences between object categories and to distinguish object boundaries well, the idea of multi-scale feature extraction has been widely used in the field of hyperspectral image processing [26][27][28][29]. The core idea of multi-scale feature extraction is to abstract image information at different scales. ...
Article
To solve the problem that traditional hyperspectral image classification methods cannot effectively distinguish object boundaries with a single-scale feature, which leads to low classification accuracy, this paper introduces the idea of guided filtering into hyperspectral image classification and proposes a multi-scale guided feature extraction and classification (MGFEC) algorithm for hyperspectral images. Firstly, principal component analysis is used to reduce the dimensionality of the hyperspectral image data. Then, a guided filtering algorithm is used to extract the multi-scale spatial structure of the hyperspectral image by setting filtering windows of different sizes, so as to retain more edge details. Finally, the extracted multi-scale features are input into a support vector machine classifier. Several practical hyperspectral image datasets were used in the experiments, and the method was compared with other spectral feature extraction algorithms. The experimental results show that the multi-scale features extracted by the proposed MGFEC algorithm are more accurate than those extracted using spectral information alone, which leads to an improvement in the final classification accuracy. This fully shows that the proposed method is not only effective, but also suitable for processing different hyperspectral image data.
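A minimal single-channel guided filter (in He et al.'s box-filter formulation) shows how varying the window radius yields the multi-scale spatial features such an algorithm relies on. This is an illustrative sketch under our own assumptions: each band is used as its own guide, and the radii and eps are arbitrary demo values:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, with edge-clipped windows,
    computed via an integral image."""
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    I = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    for i in range(H):
        y0, y1 = max(i - r, 0), min(i + r + 1, H)
        for j in range(W):
            x0, x1 = max(j - r, 0), min(j + r + 1, W)
            s = I[y1, x1] - I[y0, x1] - I[y1, x0] + I[y0, x0]
            out[i, j] = s / ((y1 - y0) * (x1 - x0))
    return out

def guided_filter(guide, src, r, eps):
    """Single-channel guided filter: edge-preserving smoothing of `src`
    steered by the structure of `guide`; r sets the spatial scale,
    eps the edge sensitivity."""
    mean_g, mean_s = box_filter(guide, r), box_filter(src, r)
    corr_gs = box_filter(guide * src, r)
    var_g = box_filter(guide * guide, r) - mean_g ** 2
    a = (corr_gs - mean_g * mean_s) / (var_g + eps)   # local linear model
    b = mean_s - a * mean_g
    return box_filter(a, r) * guide + box_filter(b, r)

band = np.random.rand(32, 32)   # stand-in for one principal component band
# multi-scale spatial features: the same band filtered with growing windows
features = [guided_filter(band, band, r, eps=1e-3) for r in (1, 2, 4)]
print([f.shape for f in features])   # → [(32, 32), (32, 32), (32, 32)]
```

Stacking these per-scale outputs for each retained principal component gives the multi-scale feature cube that would then be fed to the SVM classifier.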