Fig 2 - uploaded by Yakoub Bazi
Flowchart of the proposed optimal SVM classification system. (a) Training phase. (b) Classification phase.


Source publication
Article
Full-text available
Recent remote sensing literature has shown that support vector machine (SVM) methods generally outperform traditional statistical and neural methods in classification problems involving hyperspectral images. However, there are still open issues that, if suitably addressed, could allow further improvement of their performances in terms of classifica...

Similar publications

Article
Full-text available
The paper presents an algorithm of visual simultaneous localization and mapping (vSLAM) for a small-size humanoid robot. The algorithm includes the procedures of image feature detection, good feature selection, image depth calculation, and feature state estimation. To ensure robust feature detection and tracking, the procedure is improved by utilizi...
Conference Paper
Full-text available
Current Intrusion Detection Systems (IDS) examine all data features to detect intrusion or misuse patterns. Some of the features may be redundant or contribute little (if anything) to the detection process. The purpose of this study is to identify important input features in building an IDS that is computationally efficient and effective. This pape...
Article
Full-text available
The advancement of technology allows video acquisition devices to have a better performance, thereby increasing the number of applications that can effectively utilize digital video. Compared to still images, video sequences provide more information about how objects and scenarios change over time. Tracking humans is of interest for a variety of ap...
Conference Paper
Full-text available
One of the most common problems in existing detection techniques is the high curse of dimensionality, due to multidimensional features of the network attack data. This paper investigates the performances of genetic algorithm (GA) with support vector machine (SVM) classification method for feature selection, the forward feature selection algorithm (...
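The GA-with-SVM feature selection described in the abstract above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the population size, selection scheme, mutation rate, and synthetic dataset are all assumptions, and cross-validated SVM accuracy stands in for whatever fitness function the authors used.

```python
# Minimal GA-driven feature selection for an SVM classifier (sketch).
# Binary masks over features evolve via truncation selection, uniform
# crossover, and bit-flip mutation; fitness is cross-validated accuracy
# of an RBF SVM trained on the selected feature subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

def fitness(mask):
    if not mask.any():                # an empty subset is invalid
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5          # random initial masks
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]   # keep the top 6 masks
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 6, size=2)]
        cross = rng.random(X.shape[1]) < 0.5      # uniform crossover
        child = np.where(cross, a, b)
        flip = rng.random(X.shape[1]) < 0.05      # bit-flip mutation
        children.append(child ^ flip)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```

The fitness call dominates the runtime, which is why such wrappers are usually paired with small populations or cheap surrogate scores.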
Conference Paper
Full-text available
A new feature selection method for reliable tracking is presented. In this paper, it is assumed that features are tracked by template matching where small regions around the features are defined as templates. The proposed method selects features based on the upper bound of the average template matching error. This selection criterion is directly re...

Citations

... The rich spectral information endows hyperspectral images with strong ground-feature differentiation ability and supports a wide range of applications in environmental monitoring, urban planning, military reconnaissance, crop yield estimation, and other fields. In recent years, many methods have been applied to hyperspectral image classification tasks, including threshold-based segmentation, support vector machine (SVM) (Bazi et al., 2006), random forest (RF) (Yu et al., 2019), and multinomial logistic regression (Li et al., 2010). These methods have certain limitations, as they can only extract shallow features and do not exploit deep feature information. ...
Article
Full-text available
Hyperspectral images contain dozens or even hundreds of spectral bands, which carry rich spectral information and help distinguish different ground objects. Hyperspectral images have a wide range of applications in urban planning, environmental monitoring, and other fields. The semantic segmentation of hyperspectral images is one of the current research hotspots. The difficulty lies in the rich spectral information and strong correlation of hyperspectral images. Traditional semantic segmentation methods cannot fully extract this information, which affects classification accuracy. This article utilizes an encoder-decoder structure to simultaneously extract deep and shallow image features. A REGCS convolution module was constructed using the idea of group convolution to extract spectral and spatial features of images. We compared various classification algorithms on the Salinas Valley and MUUFL datasets. The experimental results show that, compared with other classification models, the RESSU model achieves stable and excellent results in hyperspectral image classification experiments. In the classification experiment on the Salinas Valley dataset, single-class classification accuracy reached over 92%. In the effectiveness analysis experiment, we calculated the parameter counts of different models to verify the performance of our method, and ultimately achieved good results.
... The SBBR algorithm is analogous to recursive feature elimination (RFE), a technique earlier presented with support vector machines or random forests. In RFE, the feature with the lowest ranking score is eliminated, iteratively removing insignificant features until only the most relevant ones remain (e.g., [86][87][88][89]). This SBBR approach allows us to pinpoint the bands that most strongly influence the prediction of our target classes. ...
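A minimal sketch of the RFE procedure described above, using scikit-learn's `RFE` with a linear SVM as the ranking estimator: the feature with the lowest-magnitude weight is dropped each round until the requested number remains. The synthetic dataset and the target subset size are illustrative assumptions, not values from the cited works.

```python
# Recursive feature elimination (RFE) with a linear SVM ranker (sketch).
# At each step the estimator is refit and the lowest-ranked feature
# (smallest |coef_|) is eliminated, until 4 features remain.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=150, n_features=12,
                           n_informative=4, random_state=0)
selector = RFE(LinearSVC(max_iter=5000),
               n_features_to_select=4, step=1)
selector.fit(X, y)
print("kept features:", selector.support_.nonzero()[0])
print("elimination ranks:", selector.ranking_)   # rank 1 = retained
```

Any estimator exposing `coef_` or `feature_importances_` (e.g., a random forest) can replace the SVM as the ranker, which is the parallel the paragraph above draws.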
Article
Full-text available
Early and accurate disease diagnosis is pivotal for effective phytosanitary management strategies in agriculture. Hyperspectral sensing has emerged as a promising tool for early disease detection, yet challenges remain in effectively harnessing its potential. This study compares parametric spectral Vegetation Indices (VIs) and a nonparametric Gaussian Process Classification based on an Automated Spectral Band Analysis Tool (GPC-BAT) for diagnosing plant bacterial diseases using hyperspectral data. The study conducted experiments on tomato plants in controlled conditions and kiwi plants in field settings to assess the performance of VIs and GPC-BAT. In the tomato experiment, the modeling processes were applied to classify the spectral data measured on the healthy class of plants (sprayed with water only) and discriminate them from the data captured on plants inoculated with the two bacterial suspensions (10⁸ CFU mL⁻¹). In the kiwi experiment, the standard modeling results of the spectral data collected on nonsymptomatic plants were compared to the ones obtained using symptomatic plants’ spectral data. VIs, known for their simplicity in extracting biophysical information, successfully distinguished healthy and diseased tissues in both plant species. The overall accuracy achieved was 63% and 71% for tomato and kiwi, respectively. Limitations were observed, particularly in differentiating specific disease infections accurately. On the other hand, GPC-BAT, after feature reduction, showcased enhanced accuracy in identifying healthy and diseased tissues. The overall accuracy ranged from 70% to 75% in the tomato and kiwi case studies. Despite its effectiveness, the model faced challenges in accurately predicting certain disease infections, especially in the early stages. Comparative analysis revealed commonalities and differences in the spectral bands identified by both approaches, with overlaps in critical regions across plant species.
Notably, these spectral regions corresponded to the absorption regions of various photosynthetic pigments and structural components affected by bacterial infections in plant leaves. The study underscores the potential of hyperspectral sensing in disease diagnosis and highlights the strengths and limitations of VIs and GPC-BAT. The identified spectral features hold biological significance, suggesting correlations between bacterial infections and alterations in plant pigments and structural components. Future research avenues could focus on refining these approaches for improved accuracy in diagnosing diverse plant–pathogen interactions, thereby aiding disease diagnosis. Specifically, efforts could be directed towards adapting these methodologies for early detection, even before symptom manifestation, to better manage agricultural diseases.
... To overcome the drawbacks of traditional meteorological recognition based on physical sensors, satellite cloud maps, and manual observation, manual feature engineering based on specific image-filtering algorithms began to be applied to the field of weather image recognition. Yan et al. [15] introduced three sets of features, namely gradient magnitude histogram, HSV color histogram, and road information, and used the Real AdaBoost algorithm to classify them; Bazi et al. [16] used a customized method to extract feature vectors such as sky, shadow, and saturation, and recognized them by dictionary learning and kernel-classifier classification. However, these methods are too complex, have weak generalization ability, and can only be applied to some specific scenarios [17]. ...
... Therefore, fine-tuning this network can extract richer meteorological data features to address the issue of relatively weak feature extraction capabilities. Subsequently, through extensive experiments with other machine learning classification algorithms, we found that approximating features with the Nystroem method and then fusing them with the LinearSVC algorithm [16], [23] for classification can improve accuracy and model robustness. This helps address the shortcomings of simple classification layers and low generalization. ...
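The Nystroem-plus-LinearSVC pairing mentioned above can be sketched with scikit-learn, whose `Nystroem` transformer approximates an RBF kernel feature map so that a fast linear SVM behaves like an approximate kernel classifier. The synthetic 64-dimensional features below stand in for the backbone network's outputs; the component count and dataset are assumptions.

```python
# Nystroem kernel approximation feeding a linear SVM (sketch).
# Nystroem maps inputs into an explicit ~100-dimensional feature space
# approximating the RBF kernel, so LinearSVC trains in linear time yet
# behaves like an approximate kernel SVM.
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=64, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    LinearSVC(max_iter=5000),
)
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```

Raising `n_components` tightens the kernel approximation at the cost of training time, which is the accuracy/robustness trade-off the excerpt alludes to.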
Article
Full-text available
Meteorological identification and observation are crucial in production activities closely related to meteorology, such as agricultural production. Currently, some machine learning methods applied in meteorological identification exhibit low richness of semantic features and weak transferability in pre-trained models, leading to insufficient feature extraction capabilities. Additionally, these models often have relatively simple classification layers, and they tend to train as a holistic model. To address the aforementioned shortcomings, this paper proposes a mechanism of fused training, constructing a meteorological identification model based on the enhanced fusion of EVA02 and Linear Support Vector Classification (LinearSVC). The model fine-tunes the pre-trained EVA02 backbone network, fully stimulating the qualitative transformation of transfer learning’s high-level semantic abstractions and meteorological data representations. This enhances feature extraction capabilities. Additionally, the model integrates the Nystroem method with the LinearSVC algorithm for classification, further improving classification accuracy and the robustness of the model on small datasets. Through simulation experiments, the model achieves F1-scores on the public datasets MWD and WEAPD that surpass the current state-of-the-art methods by 0.75% and 9.89%, respectively, demonstrating the effectiveness of the proposed method.
... The increasing dimensions of hyperspectral data volumes pose methodological challenges that may compromise classification accuracy when the number of available features and training samples is out of balance. To overcome this issue, several classification approaches based on handcrafted features have been proposed in the literature. Bazi and Melgani (2006) presented a Support Vector Machine (SVM) classification system for hyperspectral imagery based on a genetic optimization framework. This system detects the best discriminative features without requiring the user to set their number a priori, and the estimation of the best SVM parameters is carried out in a fully automatic fashion. ...
... Feature selection for optimizing predictor variables can improve distribution model performance [58]. Therefore, prior to model development we applied recursive feature elimination (RFE), a backward feature selection method used to reduce and optimize predictor variables for each ecosystem type [59]. We implemented random forest tree functions ('rfFuncs') in the R caret package to rank predictors important to ecosystem distribution models. ...
Article
Full-text available
Climate change shifts ecosystems, altering their compositions and instigating transitions, making climate change the predominant driver of ecosystem instability. Land management agencies experience these climatic effects on ecosystems they administer yet lack applied information to inform mitigation. We address this gap, explaining ecosystem shifts by building relationships between the historical locations of 22 ecosystems (c. 2000) and abiotic data (1970–2000; bioclimate, terrain) within the southwestern United States using ‘ensemble’ machine learning models. These relationships identify the conditions required for establishing and maintaining southwestern ecosystems (i.e., ecosystem suitability). We projected these historical relationships to mid-century (2041-2060) and end-of-century (2081-2100) periods using CMIP6 generation BCC-CSM2-MR and GFDL-ESM4 climate models with SSP3-7.0 and SSP5-8.5 emission scenarios. This procedure reveals how ecosystems shift, as suitability typically increases in area (~50% (~40% SD)), elevation (12-15%) and northing (4-6%) by mid-century. We illustrate where and when ecosystems shift, by mapping suitability predictions temporally and within 52,565 properties (e.g., Federal, State, Tribal). All properties had >50% changes in suitability for >1 ecosystem within them, irrespective of size (>16.7 km²). We integrated 9 climate models to quantify predictive uncertainty and exemplify its relevance. Agencies must manage ecosystem shifts transcending jurisdictions. Effective mitigation requires collective action heretofore rarely instituted. Our procedure supplies the climatic context to inform their decisions.
... In research works [23,24], a robust model was developed using SVM, integrated with biophysical factors, for the quantitative assessment and monitoring of soil salinity in the Yanqi Basin study area, Xinjiang, China. The performance of SVM showed the highest accuracy when compared to ANN. ...
Article
Full-text available
Soil salinization is a leading cause of soil and land degradation, necessitating early detection for efficient soil management. This study presents an integrated approach combining Remote Sensing and Geographic Information Systems (GIS) to identify salt affected soils, employing the support vector machine (SVM). The research focuses on the town of Ballari in Karnataka, India, an area highly susceptible to soil salinization with severe consequences. To evaluate, monitor, and implement remedial measures, Ballari was selected as the study area. Data inputs for the SVM model were extracted from nine raster layers derived from the 2011 Landsat 9 imagery and DEM SRTM data. These layers include the Digital Elevation Model (DEM), Topographic Roughness Index (TRI), Topographic Position Index (TPI), Aspect, Slope, Normalized Differential Salinity Index (NDSI), Normalized Differential Vegetation Index (NDVI), Normalized Differential Moisture Index (NDMI), and Normalized Differential Built-up Index (NDBI). Topographical parameters, such as slope, aspect, and other metrics derived from DEM, were found to be instrumental in identifying salt-affected soil due to their ability to indicate land surface texture. Spectral indices NDSI and NDVI, computed using Red and NIR bands, along with the SWIR band, were identified as highly effective in delineating salt-affected soils. Following the layer stacking of these nine layers to form a multiband composite image, the data set was divided into a 70:30 ratio for training and testing, respectively. The model demonstrated an overall accuracy of 89.59% and a Kappa coefficient of 0.84, underlining the efficacy of this approach in predicting soil salinity.
... In the early stage of research on HSI classification, spectral information played the leading role. Most methods focused on exploiting the discrepancies among original spectral signatures in HSI to distinguish pixels into different categories, including k-nearest neighbor (KNN) [10], support vector machines (SVM) [11], logistic regression [12], and so on. However, the original spectral features in HSI obey a complex, high-dimensional nonlinear distribution that traditional machine-learning-based methods cannot handle well. ...
Preprint
Full-text available
"Finding fresh water in the ocean of data" is a challenge that all deep learning domains struggle with, especially in the area of hyperspectral image analysis. As hyperspectral remote sensing technology advances by leaps and bounds, increasing amounts of hyperspectral images (HSIs) become available. In fact, however, these unlabeled HSIs cannot be used to drive a supervised learning task, due to the extremely expensive labeling costs and some unknown regions. Although learning-based methods have achieved remarkable performance due to their superior ability to represent features, they come at a cost: these methods are complex, inflexible, and difficult to adapt via transfer learning. In this paper, we propose the "Instructional Mask AutoEncoder" (IMAE), a simple and powerful self-supervised learner for HSI classification that uses a transformer-based mask autoencoder to extract the general features of HSIs through a self-reconstruction pretext task. Moreover, we utilize metric learning to train an instructor that directs the model's focus to the regions of the input of human interest, so as to alleviate the defects of transformer-based models such as local attention distraction, lack of inductive bias, and tremendous training-data requirements. In downstream forward propagation, instead of global average pooling, we employ a learnable aggregation to put the tokens to full use. The obtained results illustrate that our method effectively accelerates the convergence rate and improves performance on the downstream task.
... It is a data-driven approach that learns from a large number of training samples of known categories to build a classifier model, which is then applied to new unknown samples for classification [20,21]. Common classifiers include decision tree (DT) [22], maximum-likelihood estimation (MLE) [23], artificial neural network (ANN) [24], support vector machine (SVM) [25], and random forest (RF) [26]. Machine learning classification has higher adaptivity and scalability. ...
Article
Full-text available
Sentinel-2 serves as a crucial data source for monitoring forest cover change. In this study, a sub-pixel mapping of forest cover is performed on Sentinel-2 images, downscaling the spatial resolution of the positioned results to 2.5 m, enabling sub-pixel-level forest cover monitoring. A novel sub-pixel mapping with edge-matching correction is proposed on the basis of the Sentinel-2 images, combining edge-matching technology to extract the forest boundary of Jilin-1 images at sub-meter level as spatial constraint information for sub-pixel mapping. This approach enables accurate mapping of forest cover, surpassing traditional pixel-level monitoring in terms of accuracy and robustness. The corrected mapping method allows more spatial detail to be restored at forest boundaries, monitoring forest changes at a smaller scale, which is highly similar to actual forest boundaries on the surface. The overall accuracy of the modified sub-pixel mapping method reaches 93.15%, an improvement of 1.96% over the conventional Sub-pixel Spatial Attraction Model (SPSAM). Additionally, the kappa coefficient improved by 0.15 to reach 0.892 during the correction. In summary, this study introduces a new method of forest cover monitoring, enhancing the accuracy and efficiency of acquiring forest resource information. This approach provides a fresh perspective in the field of forest cover monitoring, especially for monitoring small deforestation and forest degradation activities.
... Based on the preliminary selection of wavebands with higher correlation coefficients by the PCC, we use the SVM-RFECV algorithm to further screen out the wavebands that contribute most to the modeling as the best variables for establishing the inversion model. Support Vector Machine Recursive Feature Elimination (SVM-RFE) is an algorithm that uses support vector machines for feature selection in high-dimensional data; it was first used in the field of molecular biology (Guyon et al., 2002) and later also in remote sensing (Bazi & Melgani, 2006). Recursive Feature Elimination (RFE) is a feature selection method using feature-ranking techniques; RFE performs a backward sequential reduction from the full feature set, eliminating the most irrelevant features one by one and finally obtaining the optimal feature subset. ...
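A minimal sketch of SVM-RFE with cross-validation, in the spirit of the SVM-RFECV waveband screening described above, using scikit-learn's `RFECV`: features are eliminated recursively and the subset size is chosen by cross-validated score rather than fixed in advance. The synthetic classification data stands in for the preselected spectral wavebands, and all parameters are illustrative.

```python
# SVM-RFE with cross-validated subset-size selection (sketch).
# RFECV eliminates the lowest-weighted feature per step and picks the
# feature count that maximizes the 5-fold cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=15,
                           n_informative=5, random_state=1)
selector = RFECV(LinearSVC(max_iter=5000), step=1, cv=5)
selector.fit(X, y)
print("optimal number of bands:", selector.n_features_)
print("kept band indices:", selector.support_.nonzero()[0])
```

Unlike plain RFE, no target subset size is supplied: the cross-validation loop decides it, which mirrors the "further screen out the wavebands that contribute most" step above.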
Article
Full-text available
Soil contamination with heavy metals is a relatively serious issue in China. Traditional soil heavy metal survey methods cannot meet the demand for rapid, real-time surveys of soil heavy metals over large areas. We chose a typical mining area in Henan Province as the study area, collected 124 soil samples in the field, and obtained their soil hyperspectral data indoors using a spectrometer. After applying different spectral transformations to the soil spectral curves, Pearson correlation coefficients (PCC) between them and the heavy metals Cd, Cr, Cu, and Ni were calculated; after correlation evaluation, the best spectral transformation for each heavy metal was determined and preselected characteristic wavebands were obtained. Then support vector machine recursive feature elimination with cross-validation (SVM-RFECV) was used to select among the preselected feature wavebands to obtain the final modeling wavebands, and the Adaptive Boosting (AdaBoost), Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Partial Least Squares (PLS) methods were used to establish the inversion model. The results showed that PCC-SVM-RFECV can effectively select characteristic wavebands with high contributions to modeling from high-dimensional data. Spectral transformation methods can improve the correlation of spectra with heavy metals. The location and number of characteristic wavebands differed among the four heavy metals. The accuracy of AdaBoost was significantly better than that of GBDT, RF, and PLS (e.g., for Ni: R²_AdaBoost = 0.735, R²_GBDT = 0.679, R²_RF = 0.596, R²_PLS = 0.510). This study can provide a technical reference for the use of hyperspectral inversion models for large-scale monitoring of soil heavy metal content.
... Hyperspectral images (HSI) consist of hundreds of continuous spectral bands and are rich in spectral and spatial information. Early hyperspectral image classification models often utilized traditional machine-learning methods, such as Support Vector Machine (SVM) [5], Multiclass Logistic Regression (MLR) [6], and K-Nearest Neighbor (KNN) [7], and some dimensionality reduction methods based on spectral features, such as Principal Component Analysis (PCA) [8], Independent Component Analysis (ICA) [9], and Linear Discriminant Analysis (LDA) [10]. However, these methods ignore the connection between neighboring pixels and do not make use of the spatial information of the image, so the classification is not effective. ...
Article
Full-text available
Marine oil spills can cause serious damage to marine ecosystems and biological species, and the pollution is difficult to repair in the short term. Accurate oil type identification and oil thickness quantification are of great significance for marine oil spill emergency response and damage assessment. In recent years, hyperspectral remote sensing technology has become an effective means to monitor marine oil spills. The spectral and spatial features of oil spill images at different levels are different. To accurately identify oil spill types and quantify oil film thickness, and perform better extraction of spectral and spatial features, a multilevel spatial and spectral feature extraction network is proposed in this study. First, the graph convolutional neural network and graph attentional neural network models were used to extract spectral and spatial features in non-Euclidean space, respectively, and then the designed modules based on 2D expansion convolution, depth convolution, and point convolution were applied to extract feature information in Euclidean space; after that, a multilevel feature fusion method was developed to fuse the obtained spatial and spectral features in Euclidean space in a complementary way to obtain multilevel features. Finally, the multilevel features were fused at the feature level to obtain the oil spill information. The experimental results show that compared with CGCNN, SSRN, and A2S2KResNet algorithms, the accuracy of oil type identification and oil film thickness classification of the proposed method in this paper is improved by 12.82%, 0.06%, and 0.08% and 2.23%, 0.69%, and 0.47%, respectively, which proves that the method in this paper can effectively extract oil spill information and identify different oil spill types and different oil film thicknesses.