Figure - uploaded by José M Peña-Barragán
10-fold cross-validation results for the various methods applied to all the studied crops: Correct Classification Rate (CCR) and Minimum Sensitivity (MS).


Source publication
Article
Full-text available
The strategic management of agricultural lands involves crop field monitoring each year. Crop discrimination via remote sensing is a complex task, especially if different crops have a similar spectral response and cropping pattern. In such cases, crop identification could be improved by combining object-based image analysis and advanced machine learning methods...

Contexts in source publication

Context 1
... correct classification rate and the minimum sensitivity attained by each method are shown in Table 2. To verify the statistical significance of the results, we performed Friedman's statistical test [44], a nonparametric test for comparing the effects of two factors. ...
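As a hedged, illustrative sketch (the per-fold scores below are invented, not the article's values), Friedman's test can be run in Python with scipy.stats.friedmanchisquare, where each argument holds one method's scores over the same 10 cross-validation folds:

from scipy.stats import friedmanchisquare

# Hypothetical per-fold CCR values for four classifiers over the same 10 folds.
mlp = [0.86, 0.85, 0.87, 0.84, 0.88, 0.85, 0.86, 0.87, 0.84, 0.86]
svm = [0.88, 0.87, 0.89, 0.86, 0.90, 0.87, 0.88, 0.89, 0.86, 0.88]
lr  = [0.85, 0.84, 0.86, 0.83, 0.87, 0.84, 0.85, 0.86, 0.83, 0.85]
c45 = [0.79, 0.78, 0.80, 0.77, 0.81, 0.78, 0.79, 0.80, 0.77, 0.79]

stat, p_value = friedmanchisquare(mlp, svm, lr, c45)
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
# A small p-value indicates that at least one method ranks consistently
# differently from the others across the folds.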
Context 2
... the standard classification methods, MLP, SVM and LR obtained similar CCR results for spectral and textural features, with overall accuracies between 85% and 87%, notably higher than the 79% accuracy of C4.5 (Table 2). The classifiers based only on textural features attained the worst results, reporting CCR values between 47% (C4.5) and 66% (SVM). ...
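To make the two reported metrics concrete, the following scikit-learn sketch (with synthetic data standing in for the per-object spectral/textural features) computes the Correct Classification Rate (overall accuracy) and the Minimum Sensitivity (the lowest per-class recall) under 10-fold cross-validation; the classifier settings are assumptions, not the article's configuration:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Synthetic stand-in for object features and crop labels.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
ccr_folds, ms_folds = [], []
for train_idx, test_idx in skf.split(X, y):
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    ccr_folds.append(accuracy_score(y[test_idx], y_pred))                    # CCR
    ms_folds.append(recall_score(y[test_idx], y_pred, average=None).min())   # MS

print(f"CCR = {np.mean(ccr_folds):.3f}, MS = {np.mean(ms_folds):.3f}")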
Context 3
... investigations have concluded that the combination of two or more classifiers can provide better classification accuracy than a single flat classifier [45]. The combination of SVM + SVM, with spectral features, attained the highest overall accuracy (89%) for the classification of all of the crops, although, as previously stated, the standard SVM classification reported an accuracy nearly as high (88%) (Table 2). The standard MLP and C4.5 classifications were also slightly improved when SVM was integrated into the model, increasing MLP accuracy from 85% to 88% and C4.5 accuracy from 79% to 84%. ...
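One generic way to combine two classifiers in this spirit is stacking, where the predictions of a first-level model become inputs to a second-level model; the scikit-learn sketch below (synthetic data, an MLP feeding an SVM) is only an illustration of the idea and not necessarily the combination scheme used in the source article:

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Cross-validated predictions of the first-level MLP feed a second-level SVM.
stack = StackingClassifier(
    estimators=[("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0))],
    final_estimator=SVC(kernel="rbf", gamma="scale"),
    cv=5,
)
scores = cross_val_score(stack, X, y, cv=10, scoring="accuracy")
print(f"Stacked CCR over 10 folds: {scores.mean():.3f}")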

Similar publications

Chapter
Sentiment analysis involves determining the opinions, feelings, and subjectivity of text. Twitter is a social networking service where millions of people share their thoughts. Twitter sentiment analysis on fan engagement focuses on how fans in different sports industries actively engage on social media. Fans are identified based on their opinion...
Conference Paper
This paper shows the importance of Artificial Intelligence (AI) techniques as a practical engineering tool for predicting and estimating the gas flow rate through chokes. Studying single gas flow through wellhead chokes is vital to the oil industry, not only to ensure accurate estimation of the gas flow rate but also to keep equipment protected...

Citations

... common ones include the Euclidean distance between particles in space, the Hamming distance between words, etc.) In a regression setting (regression being a machine learning technique commonly used to obtain continuous outputs, as opposed to the discrete outputs of classification), the average (or maximum or minimum) of the KNN is typically used to determine the value of the variable being regressed [76]. How KNN works is explained below: (1) selecting the optimal value of K; (2) calculating distances; (3) finding the nearest neighbors; (4) voting for classification or averaging for regression. ...
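A compact NumPy sketch of those four steps (the training points, the value k=3 and the Euclidean metric are illustrative assumptions):

import numpy as np

def knn_predict(X_train, y_train, x_query, k=5, task="classification"):
    # (2) Calculate the distance from the query point to every training point.
    dists = np.linalg.norm(X_train - x_query, axis=1)          # Euclidean distance
    # (3) Find the K nearest neighbors.
    nn_labels = y_train[np.argsort(dists)[:k]]
    if task == "classification":
        # (4a) Majority vote among the neighbors' labels.
        values, counts = np.unique(nn_labels, return_counts=True)
        return values[np.argmax(counts)]
    # (4b) Regression: average (could also be min or max) of the neighbors' values.
    return nn_labels.mean()

# (1) Selecting the optimal K is normally done by cross-validation; k=3 is arbitrary here.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 1.05]), k=3))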
Article
Full-text available
Leaf water content (LWC) is a vital indicator of crop growth and development. While visible and near-infrared (VIS–NIR) spectroscopy makes it possible to estimate crop leaf moisture, spectral preprocessing and multiband spectral indices are of great significance in the quantitative analysis of LWC. In this work, the fractional order derivative (FOD) was used for leaf spectral processing, and multiband spectral indices were constructed based on a band-optimization algorithm. Eventually, an integrated index, namely the multiband spectral index (MBSI) and moisture index (MI), is proposed to estimate the LWC of spring wheat around Fu-Kang City, Xinjiang, China. The MBSIs for LWC were calculated from two types of spectral data: raw reflectance (RR) and FOD-processed spectra. The LWC was estimated by combining machine learning methods (K-nearest neighbor, KNN; support vector machine, SVM; and artificial neural network, ANN). The results showed that fractional derivative pretreatment of the spectral data enhances the information implied in the spectrum (the maximum correlation coefficient appeared with the 0.8-order derivative) and increases the number of sensitive bands, especially in the near-infrared region (700–1100 nm). The correlations between LWC and the two-band index RVI(1156, 1628 nm) and the three-band indices (3BI-3(766, 478, 1042 nm), 3BI-4(1129, 1175, 471 nm), 3BI-5(814, 929, 525 nm), 3BI-6(1156, 1214, 802 nm), 3BI-7(929, 851, 446 nm)) based on FOD were higher than those of the moisture indices and single-band spectra, with correlation coefficients r of −0.71**, 0.74**, 0.73**, −0.72**, 0.75** and −0.76**. The prediction accuracy of the two-band spectral index DVI(698, 1274 nm) was higher than that of the moisture spectral index, with R² of 0.81 and 0.79 for calibration and validation, respectively. Owing to the large number of spectral indices, the correlation coefficient method was used to select characteristic spectral indices from the full set of three-band indices. Among the twenty-seven models, the FWBI-3BI 0.8-order model showed the best predictive ability (R² of 0.86, RMSE of 2.11%, and RPD of 2.65). These findings confirm that combining spectral index optimization with machine learning is a highly effective method for inverting the leaf water content of spring wheat.
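A hedged sketch of the index-plus-regression idea (the reflectance values, the linear link to LWC and the SVR settings are invented for illustration; only the two band positions, 1156 and 1628 nm, come from the abstract):

import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r1156 = rng.uniform(0.2, 0.6, 200)                 # pseudo reflectance at 1156 nm
r1628 = rng.uniform(0.1, 0.5, 200)                 # pseudo reflectance at 1628 nm
rvi = r1156 / r1628                                # two-band ratio vegetation index
lwc = 40 + 8 * rvi + rng.normal(0, 1.5, 200)       # pseudo leaf water content (%)

X_train, X_test, y_train, y_test = train_test_split(
    rvi.reshape(-1, 1), lwc, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)
print(f"Validation R2 = {r2_score(y_test, model.predict(X_test)):.2f}")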
... The OBIA is a knowledge-driven methodology that attempts to imitate human perception to represent real-world features by merging a set of similar pixels into meaningful image objects through an image segmentation process [32]. Some commonly used models, such as Support Vector Machine (SVM), decision tree (DT) [33], K-nearest Neighbors (KNN) [34], artificial neural network (ANN) [35,36] and random forest (RF), have been integrated with OBIA for image classification, natural hazards mapping, and risk assessment. For example, Rodriguez-Galiano et al. (2019) [37] used Sentinel-1 SAR imagery to track floods. ...
Article
Full-text available
This study presents flood extent extraction and mapping from Sentinel images. We propose an algorithm for extracting flooded areas with object-based image analysis (OBIA) using Sentinel-1A and Sentinel-2A images to map and assess the flood extent from the beginning of the event to one week afterwards. Multi-scale parameters were used in the OBIA image segmentation. First, we identified the flooded regions by applying the proposed algorithm to the Sentinel-1A data. Then, to evaluate the effects of the flood on each land-use/land-cover (LULC) class, Sentinel-2A images were classified using OBIA after the event. We also applied a threshold method for comparison with the proposed OBIA algorithm to determine its efficiency in computing parameters for change detection and flood extent mapping. The findings revealed the best segmentation performance, with an Object Fitness Index (OFI) of 0.92, when a scale parameter of 60 was applied. The results also show that 2099.4 km² of the study area was flooded at the beginning of the flood. Furthermore, we found that the most flooded LULC classes were agricultural land and orchards, with 695.28 km² (32.4%) and 708.63 km² (33.7%), respectively, while about 33.9% of the remaining flooded area occurred in other classes (i.e., fish farm, built-up, bare land and water bodies). The resulting objects at each scale parameter were evaluated using the Object Pureness Index (OPI), Object Matching Index (OMI), and OFI. Finally, the Overall Accuracy (OA), assessed against field data collected with the Global Positioning System (GPS), was 93%, 90%, and 89% for the LULC map, the flood map (i.e., using our proposed algorithm), and the threshold method, respectively.
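The threshold method used for comparison can be illustrated with a minimal NumPy sketch; the −15 dB cut-off and the 10 m pixel size are assumptions, not values from the study (open water is typically dark in Sentinel-1 backscatter, so pixels below the threshold are flagged as flooded):

import numpy as np

rng = np.random.default_rng(1)
sigma0_db = rng.normal(-10.0, 4.0, size=(1000, 1000))   # synthetic sigma0 image (dB)

THRESHOLD_DB = -15.0                 # assumed cut-off; calibrated per scene in practice
PIXEL_AREA_KM2 = (10 * 10) / 1e6     # assumed 10 m x 10 m pixel, in km^2

flood_mask = sigma0_db < THRESHOLD_DB
flooded_area_km2 = flood_mask.sum() * PIXEL_AREA_KM2
print(f"Flooded area: {flooded_area_km2:.1f} km^2")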
... The study area was divided into a north test site and a south test site to investigate the ability of machine learning models to classify crops when the algorithms are extrapolated to regions that were not included in the learning process [70]. Dividing the study area and performing 18 test scenarios (Table 6), which were cross-validated against each other, allows the assessment of environmental influences on the temporal, spatial, and phenological behavior of vegetation, with consequences for the quantity of training data [40,71]. The multi-temporal training and testing datasets were resampled based on geographical area (north and south) and time intervals (day, week, and month) and then used for training and testing the model (Section 2.4). ...
Article
Full-text available
Machine learning models are used to identify crops in satellite data, which achieve high classification accuracy but do not necessarily have a high degree of transferability to new regions. This paper investigates the use of machine learning models for crop classification using Sentinel-2 imagery. It proposes a new testing methodology that systematically analyzes the quality of the spatial transfer of trained models. In this study, the classification results of Random Forest (RF), eXtreme Gradient Boosting (XGBoost), Stochastic Gradient Descent (SGD), Multilayer Perceptron (MLP), Support Vector Machines (SVM), and a Majority Voting of all models and their spatial transferability are assessed. The proposed testing methodology comprises 18 test scenarios to investigate phenological, temporal, spatial, and quantitative (quantitative regarding available training data) influences. Results show that the model accuracies tend to decrease with increasing time due to the differences in phenological phases in different regions, with a combined F1-score of 82% (XGBoost) when trained on a single day, 72% (XGBoost) when trained on the half-season, and 61% when trained over the entire growing season (Majority Voting).
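A minimal sketch of such a spatial-transfer test: train on one region and evaluate on another that was never seen during training. The data are synthetic and a random forest stands in for the paper's model ensemble:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Pretend crop features/labels for two geographically separate test sites.
X_north, y_north = make_classification(n_samples=500, n_features=12, n_informative=8,
                                        n_classes=5, n_clusters_per_class=1, random_state=1)
X_south, y_south = make_classification(n_samples=500, n_features=12, n_informative=8,
                                        n_classes=5, n_clusters_per_class=1, random_state=2)

# Train only on the north site, then evaluate on the unseen south site.
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_north, y_north)
f1_transfer = f1_score(y_south, clf.predict(X_south), average="weighted")
print(f"Transferred weighted F1 = {f1_transfer:.2f}")
# In practice the within-region reference score would come from cross-validation
# on the training region, not from resubstitution.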
... In particular, we use an L1-regularized support vector machine (SVM) and a k-nearest neighbor (kNN)-based training-sample selection strategy to learn classifiers for each target image voxel from adjacent voxels in the atlas, based on image intensity and texture features. Peña et al. [11] combined object-based image analysis and advanced machine learning methods to improve crop identification, evaluating decision tree, logistic regression (LR), SVM, and multilayer perceptron (MLP) neural network methods to map nine major summer crops from ASTER satellite images captured on two different dates. Arganda-Carreras et al. (2016) [12] introduced trainable Weka segmentation, which can be customized to use user-designed image features or classifiers and provides an unsupervised segmentation learning scheme (clustering) that uses a limited number of manual annotations to train the classifier and automatically segments the remaining data. ...
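A rough sketch of the two ingredients named in the excerpt, with invented data and an assumed neighbourhood size; the L1 penalty in LinearSVC induces sparse weights, so it doubles as a feature selector:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import LinearSVC

# Synthetic intensity/texture features for atlas voxels and one target-image voxel.
X_atlas, y_atlas = make_classification(n_samples=2000, n_features=15,
                                       n_informative=8, random_state=0)
x_target = X_atlas[0] + 0.05

# kNN-based training-sample selection: keep only the atlas voxels closest to the
# target voxel in feature space (k = 200 is an assumed value).
nn = NearestNeighbors(n_neighbors=200).fit(X_atlas)
_, idx = nn.kneighbors(x_target.reshape(1, -1))
X_sel, y_sel = X_atlas[idx[0]], y_atlas[idx[0]]
if len(np.unique(y_sel)) < 2:        # fall back if the neighbourhood is single-class
    X_sel, y_sel = X_atlas, y_atlas

# L1-regularized linear SVM trained on the selected samples.
svm_l1 = LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000).fit(X_sel, y_sel)
print("non-zero feature weights:", int(np.sum(svm_l1.coef_ != 0)))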
... The softmax layer increases the computation but does not increase the complexity, so it is not included in this comparison. The computational complexity of the traditional ViT model is shown in equation (11). ...
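Equation (11) itself is not reproduced in the excerpt. For orientation only (this is the commonly quoted cost of a standard Vision Transformer's global multi-head self-attention, not necessarily the cited paper's exact expression), for an h × w token grid with embedding dimension C the complexity is

\Omega(\mathrm{MSA}) = 4\,h w C^{2} + 2\,(h w)^{2} C,

i.e., quadratic in the number of tokens hw, which is why a final softmax layer adds computation without changing the overall complexity class.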
Article
Full-text available
Nasopharyngeal carcinoma is a malignant tumor that arises in the epithelium and mucosal glands of the nasopharynx, and its pathological type is mostly poorly differentiated squamous cell carcinoma. Since the nasopharynx is located deep in the head and neck, early diagnosis and timely treatment are critical to patient survival. However, nasopharyngeal carcinoma tumors are small and vary widely in shape, so delineating tumor contours is a challenge even for experienced doctors. In addition, because of the special location of nasopharyngeal carcinoma, complex treatments such as radiotherapy or surgical resection are often required, so accurate pathological diagnosis is also very important for the selection of treatment options. Current deep learning segmentation models, however, face the problems of inaccurate segmentation and an unstable segmentation process, mainly limited by the accuracy of datasets, fuzzy boundaries, and complex lines. To address these two challenges, this article proposes a hybrid model, WET-UNet, based on the UNet network as a powerful alternative for nasopharyngeal carcinoma image segmentation. On the one hand, the wavelet transform is integrated into UNet to enhance lesion boundary information: low-frequency components adjust the encoder at low frequencies and optimize the subsequent computation of the Transformer, improving the accuracy and robustness of image segmentation. On the other hand, the attention mechanism retains the most valuable pixels in the image, captures long-range dependencies, and enables the network to learn more representative features, improving the recognition ability of the model. Comparative experiments show that our network structure outperforms other models for nasopharyngeal carcinoma image segmentation, and we demonstrate the effectiveness of adding the two modules to help tumor segmentation. The dataset comprises 5000 images, split 8:2 between training and validation. In the experiments, an accuracy of 85.2% and a precision of 84.9% show that the proposed model performs well in nasopharyngeal carcinoma image segmentation.
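As a rough, hedged sketch of the wavelet ingredient only (not the WET-UNet implementation), the low-frequency component that a wavelet-augmented encoder would typically inject can be obtained with a single-level 2-D Haar transform via PyWavelets; the input image here is synthetic:

import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256)).astype(np.float32)

# Single-level 2-D discrete wavelet transform: cA is the low-frequency (LL)
# approximation, cH/cV/cD carry the high-frequency detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
print(cA.shape)   # (128, 128): half resolution, matching a 2x-downsampled encoder stage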
... Numerous vegetation indices are used to increase the mapping accuracy, but the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) are commonly used in crop mapping (Hao et al., 2015;Peña et al., 2014). Potatoes and their products are the primary agricultural commodities of the Island. ...
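For reference, both indices are normalized band differences; note that the name NDWI is used for two different formulations in the literature:

NDVI = (NIR − Red) / (NIR + Red)
NDWI (McFeeters, open water) = (Green − NIR) / (Green + NIR)
NDWI (Gao, vegetation water content) = (NIR − SWIR) / (NIR + SWIR)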
... Considering studies using only the satellite platform and both ML and DL models, only one study was identified. Peña et al. [53] used RS to identify nine summer crops from ASTER satellite imagery, combining OBIA with advanced ML. Woody crops (Almond, Walnut, Vineyard) and herbaceous crops were considered. Evaluating decision tree (DT), logistic regression (LR), SVM, and multilayer perceptron (MLP) methods, MLP and SVM stood out, achieving a high overall accuracy of 88%, surpassing LR (86%) and outperforming DT (79%). ...
... Looking at Figure 10a, satellite is the most used platform in 50% of the studies, followed by UAV in 37% and MAV in 13%. Figure 10b demonstrates consistently high OA results, regardless of whether ML or DL classification models were employed. Some studies even show cases where ML models performed equally well or even better than DL models [53,56]. This observation is noteworthy given the resource-intensive nature of DL models, coupled with their often-complex operational mechanisms (black box models). ...
... In contrast, ML models, such as decision trees (DT) and RF, offer a high level of interpretability, allowing for a clear understanding of the decision-making process [61]. ...
Article
Full-text available
Almond cultivation is of great socio-economic importance worldwide. With the demand for almonds steadily increasing due to their nutritional value and versatility, optimizing the management of almond orchards becomes crucial to promote sustainable agriculture and ensure food security. The present systematic literature review, conducted according to the PRISMA protocol, is devoted to the applications of remote sensing technologies in almond orchards, a relatively new field of research. The study includes 82 articles published between 2010 and 2023 and provides insights into the predominant remote sensing applications, geographical distribution, and platforms and sensors used. The analysis shows that water management has a pivotal focus regarding the remote sensing application of almond crops, with 34 studies dedicated to this subject. This is followed by image classification, which was covered in 14 studies. Other applications studied include tree segmentation and parameter extraction, health monitoring and disease detection, and other types of applications. Geographically, the United States of America (USA), Australia and Spain, the top 3 world almond producers, are also the countries with the most contributions, spanning all the applications covered in the review. Other studies come from Portugal, Iran, Ecuador, Israel, Turkey, Romania, Greece, and Egypt. The USA and Spain lead water management studies, accounting for 23% and 13% of the total, respectively. As far as remote sensing platforms are concerned, satellites are the most widespread, accounting for 46% of the studies analyzed. Unmanned aerial vehicles follow as the second most used platform with 32% of studies, while manned aerial vehicle platforms are the least common with 22%. This up-to-date snapshot of remote sensing applications in almond orchards provides valuable insights for researchers and practitioners, identifying knowledge gaps that may guide future studies and contribute to the sustainability and optimization of almond crop management.
... The application of further classifiers could also increase the prediction accuracy [49]. Note that for more complex classification tasks in precision agriculture, e.g., phenotyping, detecting plant diseases, observing BBCH stages, counting fruit bodies, or similar, other classifiers, e.g., [50,51], or a deep learning approach [52] could be more appropriate. Certainly, with regard to the extent of applicability to various plant types and observation sites, the exploration of unsupervised learning algorithms becomes particularly relevant. ...
Article
Full-text available
Precision agriculture relies on understanding crop growth dynamics and plant responses to short-term changes in abiotic factors. In this technical note, we present and discuss a technical approach for cost-effective, non-invasive, time-lapse crop monitoring that automates the process of deriving further plant parameters, such as biomass, from 3D object information obtained via stereo images in the red, green, and blue (RGB) color space. The novelty of our approach lies in the automated workflow, which includes a reliable automated data pipeline for 3D point cloud reconstruction from dynamic scenes of RGB images with high spatio-temporal resolution. The setup is based on a permanent rigid and calibrated stereo camera installation and was tested over an entire growing season of winter barley at the Global Change Experimental Facility (GCEF) in Bad Lauchstädt, Germany. For this study, radiometrically aligned image pairs were captured several times per day from 3 November 2021 to 28 June 2022. We performed image preselection using a random forest (RF) classifier with a prediction accuracy of 94.2% to eliminate unsuitable, e.g., shadowed, images in advance and obtained 3D object information for 86 records of the time series using the 4D processing option of the Agisoft Metashape software package, achieving mean standard deviations (STDs) of 17.3–30.4 mm. Finally, we determined vegetation heights by calculating cloud-to-cloud (C2C) distances between a reference point cloud, computed at the beginning of the time-lapse observation, and the respective point clouds measured in succession with an absolute error of 24.9–35.6 mm in depth direction. The calculated growth rates derived from RGB stereo images match the corresponding reference measurements, demonstrating the adequacy of our method in monitoring geometric plant traits, such as vegetation heights and growth spurts during the stand development using automated workflows.
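A minimal sketch of the cloud-to-cloud (C2C) step with synthetic point clouds (the point densities and the 0.45 m canopy offset are invented): for every point in a later epoch, the distance to its nearest neighbour in the reference cloud approximates the vegetation height above the season-start surface:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
reference = rng.uniform(0, 5, size=(20000, 3))
reference[:, 2] = rng.normal(0.0, 0.01, 20000)        # near-flat ground at season start
later = rng.uniform(0, 5, size=(20000, 3))
later[:, 2] = 0.45 + rng.normal(0.0, 0.02, 20000)     # 0.45 m pseudo canopy surface

tree = cKDTree(reference)                 # nearest-neighbour search structure
c2c, _ = tree.query(later, k=1)           # C2C distance per point of the later cloud
print(f"mean C2C distance (~ vegetation height): {c2c.mean():.3f} m")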
... Thanks to remote sensing imagery, a wide variety of space-based sources of information are now available, empowering crop monitoring to be carried out on broader scales [2]. As such, significant efforts have been made in the remote sensing community to adequately utilize these sources for crop characterization and classification throughout various studies [13][14][15][16]. ...
Article
Full-text available
With the rapid advancements in SAR systems aiming for operational capabilities, crop characterization using Compact-Polarimetric (CP) Synthetic Aperture Radar (CP-SAR) data has gained considerable attention. This study thoroughly assesses the potential usefulness of C-band SAR data in CP mode using the RADARSAT Constellation Mission (RCM) for crop monitoring. The research unfolds across two separate phases: (1) extensive crop scattering characterization and (2) crop classification. In the first part, we introduce three descriptors: the compact-polarimetric SAR signature (CPS), the differential compact-polarimetric signature (DCPS), and the Geodesic Distance (GD) between signatures, to characterize the scattering pattern of four crop types: Soybean, Hay, Corn, and Cereal. We then derive the μ parameter and employ it in the μ–χ decomposition method. Time-series investigation of the proposed descriptors and the three power components Ps, Pd, and Pv provides valuable insights into the scattering responses exhibited by crops, facilitating a robust assessment and tracking of their growing cycle, thus enabling the potential for improving crop discrimination. In the second part, we employ the μ–χ and m–χ decompositions and wave descriptors to extract a stack of CP features for crop mapping. Combining diverse feature types and leveraging single and multi-date RCM images, classification experiments yield an optimal classification map with an overall accuracy of 89.71%, particularly when utilizing features extracted from multi-date datasets. This study illustrates a substantial effort in crop classification, underscoring the potential of the RCM Circular Polarization Synthetic Aperture Radar (CP-SAR) mission. Furthermore, our findings emphasize the potential of CP-SAR data from the RCM mission in contributing to precision agriculture and sustainable crop management practices.
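A hedged sketch of how such power components are commonly derived from compact-pol channels (synthetic data; the Stokes-vector and m–χ expressions below follow widely published forms, and the sign conventions depend on the transmitted circular polarization, so they may differ from the study's exact processing chain):

import numpy as np

rng = np.random.default_rng(0)
rh = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))   # pseudo RH channel
rv = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))   # pseudo RV channel

# Stokes parameters of the received wave (circular transmit, linear H/V receive).
s0 = np.mean(np.abs(rh) ** 2 + np.abs(rv) ** 2)
s1 = np.mean(np.abs(rh) ** 2 - np.abs(rv) ** 2)
s2 = np.mean(2.0 * np.real(rh * np.conj(rv)))
s3 = np.mean(-2.0 * np.imag(rh * np.conj(rv)))

m = np.sqrt(s1**2 + s2**2 + s3**2) / s0        # degree of polarization
sin2chi = -s3 / (m * s0)                       # ellipticity term (sign convention!)

p_double = m * s0 * (1.0 + sin2chi) / 2.0      # even-bounce power (Pd)
p_surface = m * s0 * (1.0 - sin2chi) / 2.0     # odd-bounce power (Ps)
p_volume = s0 * (1.0 - m)                      # depolarized / volume power (Pv)
print(p_surface, p_double, p_volume)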
... For example, comprehensive current agricultural maps of arid and semi-arid regions such as the Mediterranean are essential for water planning and irrigation management (EL-Magd and Tanton 2003; Xie et al. 2007; Qader, Dash, and Atkinson 2018; Zhong et al. 2011). However, crop-type mapping using traditional methods generally requires routine ground visits to small fields with limited access, which may result in inconsistent outputs owing to the lack of standardized reporting protocols (Peña et al. 2014). Additionally, this field-based procedure is expensive, time-consuming (Maponya, van Niekerk, and Mashimbye 2020; Peña et al. 2014) and biased (Gilbertson, Kemp, and van Niekerk 2017). ...
... Remote sensing provides an alternative method for crop monitoring which is more reliable and cost-effective (Gilbertson, Kemp, and van Niekerk 2017; Peña et al. 2014; Peña-Barragán et al. 2008) and offers freely available Earth Observation satellite data over large areas (Maponya, van Niekerk, and Mashimbye 2020) at different spectral, spatial and temporal resolutions. Nevertheless, spatial and temporal resolutions are generally complementary characteristics of satellite data (Ranghetti et al. 2020). ...
Article
Olives are a crucial economic crop in Mediterranean countries. Detailed spatial information on the distribution and condition of crops at regional and national scales is essential to ensure the continuity of crop quality and yield efficiency. However, most earlier studies on olive tree mapping focused mainly on small parcels using single-sensor, very high resolution (VHR) data, which is time-consuming, expensive and cannot feasibly be scaled up to a larger area. Therefore, we evaluated the performance of Sentinel-1 and Sentinel-2 data fusion for the regional mapping of olive trees for the first time, using the Izmir Province of Türkiye, an ancient olive-growing region, as a case study. Three different monthly composite images reflecting the different phenological stages of olive trees were selected to separate olive trees from other land cover types. Seven land-cover classes, including olives, were mapped separately using a random forest classifier for each year between 2017 and 2021. The results were assessed using the k-fold cross-validation method, and the final olive tree map of Izmir was produced by combining the olive tree distribution over two consecutive years. District-level areas covered by olive trees were calculated and validated using official statistics from the Turkish Statistical Institute (TUIK). The k-fold cross-validation accuracy varied from 94% to 95% between 2017 and 2021, and the final olive map achieved 98% overall accuracy with 93% producer accuracy for the olive class. The district-level olive area was strongly related to the TUIK statistics (R² = 0.60, NRMSE = 0.64). This study used Sentinel data and Google Earth Engine (GEE) to produce a regional-scale olive distribution map that can be scaled up to the entire country and replicated elsewhere. This map can, therefore, be used as a foundation for other scientific studies on olive trees, particularly for the development of effective management practices.
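A minimal sketch of that district-level validation step; the areas are invented, and NRMSE is normalized here by the mean of the reference values (the study's normalization may differ):

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical olive area per district: Sentinel-derived map vs. TUIK statistics (ha).
mapped = np.array([1200.0, 860.0, 450.0, 2300.0, 150.0, 980.0])
tuik   = np.array([1100.0, 900.0, 500.0, 2100.0, 180.0, 1020.0])

r2 = r2_score(tuik, mapped)
rmse = np.sqrt(mean_squared_error(tuik, mapped))
nrmse = rmse / tuik.mean()
print(f"R2 = {r2:.2f}, NRMSE = {nrmse:.2f}")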
... In addition, this approach makes use of object-based classification, which involves image segmentation using spatial, spectral, and size information. This approach more closely mimics real-world features and yields more accurate categorization results for high-resolution imagery (Blaschke et al. 2014; Li and Shao 2014; Peña et al. 2014). ...
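A compressed Python sketch of the object-based idea, with SLIC superpixels standing in for a full multiresolution segmentation and a synthetic three-band image (all parameter values are assumptions):

import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((200, 200, 3))                       # pseudo 3-band scene

# Group pixels into image objects using spatial + spectral similarity.
segments = slic(image, n_segments=150, compactness=10, start_label=1)

# Per-object mean spectral features, the usual inputs to an OBIA classifier.
object_ids = np.unique(segments)
features = np.array([image[segments == oid].mean(axis=0) for oid in object_ids])
print(features.shape)   # (n_objects, n_bands)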
Article
Full-text available
Urbanization, changes in land use and land cover (LULC), and an increase in population collectively have significant impacts on urban catchments. However, the vast majority of LULC studies have been conducted using readily available satellite imagery, which often presents limitations due to its coarse spatial resolution. Such imagery fails to accurately depict the surface characteristics and the diverse spectrum of LULC classes contained within a single pixel. This study focused on the highly urbanized Dry Creek catchment in Adelaide, South Australia, and aimed to determine the impact of urbanization on spatiotemporal changes in LULC and its implications for the land surface condition of the catchment. Very high spatial resolution imagery was utilized to examine changes in LULC over the past four decades. Support Vector Machine-based machine-learning image classification was utilized to classify and identify the changes in LULC over the study area. The classification accuracy showed strong agreement, with a kappa value greater than 0.8. The findings of this analysis showed that extensive urban development, which expanded the built-up area by 34 km², was responsible for the decline in grass cover by 43.1 km² over the last 40 years (1979–2019). Moreover, built-up areas, plantation, and water features, in contrast to grass cover, demonstrated an increasing trend during the study period. The overall urban expansion over the study period was 136.6%. Urbanization intensified impervious area coverage, increasing the runoff coefficient, equivalent impervious area, and curve number by 60.6%, 60.6%, and 7.9%, respectively, while decreasing the retention capacity by 38.6%. These modifications suggest potential variability in catchment surface runoff, prompting the need for further research to understand the surface runoff changes brought about by the changes in LULC resulting from urbanization. The findings of this study can be used for land use planning and flood management.
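The catchment-scale parameters mentioned here are typically area-weighted composites of per-class values; the short sketch below uses invented class areas, runoff coefficients and curve numbers (not the study's figures) to show the arithmetic:

# Hypothetical LULC class areas (km^2) with per-class runoff coefficients and curve numbers.
areas = {"built-up": 60.0, "grass": 30.0, "plantation": 8.0, "water": 2.0}
runoff_coeff = {"built-up": 0.85, "grass": 0.25, "plantation": 0.30, "water": 1.00}
curve_number = {"built-up": 92, "grass": 69, "plantation": 73, "water": 100}

total_area = sum(areas.values())
composite_c = sum(areas[k] * runoff_coeff[k] for k in areas) / total_area
composite_cn = sum(areas[k] * curve_number[k] for k in areas) / total_area
equivalent_impervious_area = composite_c * total_area   # 'fully impervious' equivalent, km^2

print(f"composite C = {composite_c:.2f}, composite CN = {composite_cn:.1f}, "
      f"EIA = {equivalent_impervious_area:.1f} km^2")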