Figure - available from: SN Computer Science
Row 1: input colour query image; Row 2: grayscale version of the colour image in Row 1; Row 3: structure component of the image in Row 2; Row 4: microstructure component of the image in Row 2; Row 5: texture component of the image in Row 2

Source publication
Article
Full-text available
This paper introduces a novel method for satellite image retrieval based on an adaptive Gaussian-Markov random field (AGMRF) model with a Bayes-based back-propagation neural network (BBPNN). The proposed method segregates the input query image into structure, microstructure, and texture components and estimates the parameters on each compo...

Citations

... Because the Markov random field model is defined through conditional probabilities, it exploits the linear dependency among pixels in imagery. In (Krishnamachari and Chellappa 1997; Salzenstein and Collet 2006; Chen and Strobl 2013; Rezende et al. 2014; Poornachandran et al. 2022), RS imageries are assumed to follow a Gaussian-Markov random field. This paper therefore proposes a Gaussian-Markov random field model that adapts itself to the nature of the imagery, where "nature" means texture, semi-structure, or structure. ...
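The linear neighbour dependency described in the snippet above can be sketched as the conditional mean of a first-order Gaussian-Markov random field. The four-pixel neighbourhood and the two-parameter layout below are illustrative assumptions, not the adaptive parameter set estimated in the paper:

```python
import numpy as np

def gmrf_conditional_mean(image, k, l, theta):
    """Conditional mean of pixel X(k, l) in a first-order Gaussian-Markov
    random field: a linear combination of its four nearest neighbours,
    with interaction parameters theta = (t_horizontal, t_vertical)."""
    th, tv = theta
    return (th * (image[k, l - 1] + image[k, l + 1]) +
            tv * (image[k - 1, l] + image[k + 1, l]))

# On a constant image with parameters summing to 1, the conditional
# mean of an interior pixel equals the constant itself.
img = np.full((5, 5), 3.0)
print(gmrf_conditional_mean(img, 2, 2, (0.25, 0.25)))  # 3.0
```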
... The sampled window is tested for similarity to the test-window (the topmost-left corner of Y) by deploying Student's t test. If it is similar, the values of the estimated parameters are halved, regarded as prior information (Seetharaman 2012; Poornachandran et al. 2022), and taken into account when estimating the actual parameters of the model for the test-window. Otherwise, another window is sampled and the t test for similarity is repeated. ...
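The window-sampling step described above can be sketched as follows; as a simplifying assumption, the sample mean and variance stand in for the model parameters (the paper estimates the actual AGMRF parameters differently):

```python
import numpy as np
from scipy import stats

def find_prior_window(image, win=8, alpha=0.05, max_tries=50, rng=None):
    """Sample windows from the image and compare each to the test-window
    (top-left corner) with a two-sample Student's t-test.  When a window
    is statistically similar (p-value above alpha), halve its estimated
    parameters and return them as prior information for the test-window."""
    rng = np.random.default_rng(rng)
    test = image[:win, :win].ravel()
    h, w = image.shape
    for _ in range(max_tries):
        r = rng.integers(0, h - win + 1)
        c = rng.integers(0, w - win + 1)
        sample = image[r:r + win, c:c + win].ravel()
        t_stat, p_val = stats.ttest_ind(test, sample, equal_var=False)
        if p_val > alpha:                       # similar: accept as prior
            return sample.mean() / 2.0, sample.var() / 2.0
    return None                                 # no similar window found

# Usage: on homogeneous noise, some sampled window is soon accepted.
img = np.random.default_rng(1).normal(0.0, 1.0, (64, 64))
prior = find_prior_window(img, rng=0)
```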
Article
Full-text available
This paper introduces a novel method for satellite colour imagery retrieval based on an adaptive Gaussian–Markov random field (AGMRF) model with a Bayes-driven deep convolutional neural network (AGMRF–BDCNN). The given input imagery is segregated into structure, microstructure, and texture components; the AGMRF-driven features and statistical features are extracted from the segregated components and formulated as a feature vector of the query imagery. Cosine direction and Bhattacharyya distance measures are deployed to match this feature vector against the feature vectors in the database. If the query imagery features match the database features, the corresponding reference imageries are marked, indexed, and retrieved. Three benchmark data sets, SceneSat, PatternNet, and UC Merced, have been used to validate the proposed AGMRF–BDCNN method: on SceneSat it achieves an ANMRR of 0.2319 and an mAP of 0.7156; on UC Merced, an ANMRR of 0.2316 and an mAP of 0.7816; on PatternNet, an ANMRR of 0.2405 and an mAP of 0.6979. The obtained results are comparable to state-of-the-art methods.
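The two matching measures named in the abstract can be sketched as below. This is a minimal illustration under standard definitions; the paper's exact feature normalisation and matching thresholds are not specified here:

```python
import numpy as np

def cosine_direction(q, r):
    """Cosine of the angle between query and reference feature vectors
    (1.0 for identical directions, 0.0 for orthogonal ones)."""
    return float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two feature vectors, treated as
    histograms: L1-normalise, take the Bhattacharyya coefficient, and
    return its negative log (0.0 for identical distributions)."""
    p = np.asarray(p, dtype=float) 
    q = np.asarray(q, dtype=float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))    # Bhattacharyya coefficient
    return float(-np.log(bc + eps))

query = np.array([0.2, 0.5, 0.3])
ref   = np.array([0.1, 0.6, 0.3])
print(cosine_direction(query, ref))        # close to 1 for similar vectors
print(bhattacharyya_distance(query, ref))  # close to 0 for similar vectors
```

A reference imagery would be marked as a match when the cosine direction is high and the Bhattacharyya distance is low; combining the two measures is a design choice the abstract attributes to the method without detailing the decision rule.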