Comparison between the measurements of general IQA and FR-PIQA. Top: the measurement scheme of general IQA; bottom: that of FR-PIQA.

Source publication
Article
Since the introduction of pansharpening, quality assessment has played a pivotal role in related remote sensing research to ensure the overall system's reliability. Full-resolution quality assessment remains a debated research topic for practical applications. However, full-resolution (FR) assessment faces challenges due to the absence of a reference compared to re...

Context in source publication

Context 1
... it comes to measuring the effectiveness of QA methods, as Fig. 2 shows, general IQA compares subjective (human-rated) data with predicted quality scores to gauge the effectiveness of IQA techniques. This enables an accurate measurement of how well IQA techniques perform relative to human visual perception. However, for the task of PIQA, it is challenging to obtain a ...
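In practice, this subjective-versus-predicted comparison is quantified with rank and linear correlation coefficients such as SROCC and PLCC. A minimal sketch, assuming hypothetical score arrays (real studies use mean opinion scores collected from a user study):

```python
# Quantifying IQA effectiveness by correlating predicted quality scores
# with subjective (human-rated) scores. The arrays are hypothetical.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([4.2, 3.1, 2.5, 4.8, 1.9])        # human mean opinion scores
predicted = np.array([4.0, 3.3, 2.2, 4.6, 2.1])  # scores from an IQA model

srocc, _ = spearmanr(mos, predicted)  # monotonic (rank) agreement
plcc, _ = pearsonr(mos, predicted)    # linear agreement
print(f"SROCC={srocc:.3f}, PLCC={plcc:.3f}")
```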

Similar publications

Article
Super-resolution enhances the spatial resolution of remote sensing images, yielding clearer data for diverse satellite applications. However, existing methods often lose true detail and produce pseudo-detail in reconstructed images due to an insufficient number of ground truth images for supervision. To address this issue, a prediction-to-predictio...

Citations

... Pan-sharpening uses a higher-resolution panchromatic image (or raster band) to fuse with a fully overlapped lower-resolution multiband raster dataset. The result is a multiband raster dataset with the resolution of the panchromatic raster [Guan et al. 2023; Javan et al. 2021; Vivone et al. 2020]. High-resolution PAN (HR-PAN) and low-resolution RGB (LR-RGB) are often provided together in pairs. ...
... [Javan et al. 2021], variational optimization (VO)-based methods apply variational theory with deep image priors [Guan et al. 2023]. ...
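The excerpts above describe the core fusion goal: inject HR-PAN spatial detail into an upsampled lower-resolution multiband raster. A minimal Brovey-style sketch of that idea, assuming bilinear upsampling and a mean-based intensity; this is an illustration, not any of the cited methods:

```python
# Brovey-style pan-sharpening sketch: upsample the multiband raster to the
# PAN grid, then modulate each band by the PAN-to-intensity ratio.
import numpy as np
from scipy.ndimage import zoom

def brovey_pansharpen(lr_ms, hr_pan, ratio=4, eps=1e-8):
    """lr_ms: (bands, h, w) low-res multiband; hr_pan: (h*ratio, w*ratio)."""
    ms_up = np.stack([zoom(b, ratio, order=1) for b in lr_ms])  # to PAN grid
    intensity = ms_up.mean(axis=0)                              # synthetic intensity
    return ms_up * (hr_pan / (intensity + eps))                 # detail injection

fused = brovey_pansharpen(np.random.rand(4, 64, 64), np.random.rand(256, 256))
# fused: (4, 256, 256), a multiband raster at the PAN resolution
```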
Preprint
    Most current NeRF variants for satellites are designed for one specific scene and fall short of generalizing to new geometry. Additionally, the RGB images require pan-sharpening as an independent preprocessing step. This paper introduces psPRF, a Planar Neural Radiance Field designed for paired low-resolution RGB (LR-RGB) and high-resolution panchromatic (HR-PAN) images from satellite sensors with Rational Polynomial Cameras (RPC). To capture the cross-modal prior from both the LR-RGB and HR-PAN images, we adapt the encoder of the Unet-shaped architecture with explicit spectral-to-spatial convolution (SSConv) to enhance the multimodal representation ability. To support the generalization ability of psPRF across scenes, we adopt a projection loss to ensure strong geometric self-supervision. The proposed method is evaluated on multi-scene WorldView-3 LR-RGB and HR-PAN pairs and achieves state-of-the-art performance.
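A minimal PyTorch sketch of the spectral-to-spatial convolution idea named above: a convolution expands channels, and a pixel shuffle then trades channels for spatial resolution so LR-RGB features can be brought to the HR-PAN grid. The channel sizes and scale factor are assumptions, not psPRF's exact SSConv:

```python
import torch
import torch.nn as nn

class SpectralToSpatialConv(nn.Module):
    """Trade spectral channels for spatial resolution via pixel shuffle."""
    def __init__(self, in_ch=3, out_ch=32, scale=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)  # (C*s^2, H, W) -> (C, s*H, s*W)

    def forward(self, lr_rgb):
        return self.shuffle(self.conv(lr_rgb))

feat = SpectralToSpatialConv()(torch.randn(1, 3, 64, 64))
# feat: (1, 32, 256, 256), aligned with a 4x larger HR-PAN grid
```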
    ... Even though many efforts have been made since the problem was originally stated [28], quality assessment of fusion products remains an open problem [29,30]. Objective evaluations at full scale have been carried out in terms of separate measurements of the spectral consistency of the fused HS data to the original HS data [31]. ...
    Article
    The definition and calculation of a spectral index suitable for characterizing vegetated landscapes depend on the number and widths of the bands of the imaging instrument. Here, we point out the advantages of performing the fusion of hyperspectral (HS) satellite data with the multispectral (MS) bands of Sentinel-2 to calculate such vegetation indexes as the normalized area over reflectance curve (NAOC) and the red-edge inflection point (REIP), which benefit from the availability of quasi-continuous pixel spectra. Unfortunately, while MS data may be acquired from satellite platforms with very high spatial resolution, HS data may not. Despite their excellent spectral resolution, satellite imaging spectrometers currently resolve areas no greater than 30 × 30 m², where different thematic classes of landscape may be mixed together to form a single pixel spectrum. A way to resolve mixed pixels is to fuse the HS dataset with a dataset produced by an MS scanner that images the same scene at a finer spatial resolution. The HS dataset is sharpened from 30 m to 10 m by means of the Sentinel-2 bands, all previously brought to 10 m. To do so, the hyper-sharpening protocol, that is, m:n fusion, is exploited in two nested steps: the first brings the 20 m bands of Sentinel-2 to 10 m; the second sharpens all the 30 m HS bands to 10 m by using the Sentinel-2 bands previously hyper-sharpened to 10 m. Results are presented on an agricultural test site in The Netherlands imaged by Sentinel-2 and by the satellite imaging spectrometer recently launched as part of the environmental mapping and analysis program (EnMAP). First, the statistical consistency of the fused HS data with the original MS and HS data is evaluated by means of analysis tools, both existing and developed ad hoc for this specific case. Then, the spatial and radiometric accuracy of REIP and NAOC calculated from the fused HS data are analyzed on the classes of pure and mixed pixels. On pure pixels, the values of REIP and NAOC calculated from fused data are consistent with those calculated from the original HS data. Conversely, mixed pixels are spectrally unmixed by the fusion process to resolve the 10 m scale of the MS data. How the proposed method can be used to track the temporal evolution of vegetation indexes when a single HS image and many MS images are available is the subject of a final discussion.
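For reference, both indexes can be computed directly from a quasi-continuous pixel spectrum. A hedged sketch, assuming common wavelength windows and a discrete-derivative REIP rather than the article's exact formulation:

```python
import numpy as np

def naoc(wl, refl, lo=643.0, hi=795.0):
    """Normalized area over the reflectance curve within [lo, hi] nm."""
    m = (wl >= lo) & (wl <= hi)
    w, r = wl[m], refl[m]
    area_under = np.sum((r[1:] + r[:-1]) / 2 * np.diff(w))  # trapezoid rule
    box = r.max() * (w[-1] - w[0])                          # enclosing rectangle
    return 1.0 - area_under / box                           # area *over* the curve

def reip(wl, refl, lo=680.0, hi=750.0):
    """Red-edge inflection point: wavelength of the max first derivative."""
    m = (wl >= lo) & (wl <= hi)
    return wl[m][np.argmax(np.gradient(refl[m], wl[m]))]

wl = np.arange(400.0, 1001.0, 10.0)      # hypothetical 10 nm sampling grid
refl = np.random.rand(wl.size)           # stand-in pixel spectrum
print(naoc(wl, refl), reip(wl, refl))
```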
    ... Even though many efforts have been made since the problem was originally stated [37], quality assessment of fusion products remains an open problem [38], [39]. Objective evaluations at full scale have been carried out in terms of separate measurements of the spectral consistency of the fused HS data to the original HS data [40]. Full-scale quality evaluations should avoid spectral and spatial consistency indexes that have been found to be sensitive to MS-to-PAN misregistration [41]. ...
    Article
    The paper presents an original method for the spatial resolution enhancement of satellite hyperspectral (HS) data by means of the Sentinel-2 visible-near infrared (VNIR) and shortwave infrared (SWIR) bands at 10 and 20 m spatial resolution. Presently, HS data are available from PRISMA (Italian acronym for hyperspectral precursor of the application mission) and EnMAP (environmental mapping and analysis program): both map the spectral interval of the solar radiation onto 240 and 224 bands, respectively, with 10 nm and 6.5/10 nm widths. A 5 m × 5 m panchromatic (PAN) band is also acquired by PRISMA. When the PAN band is unavailable, or more generally when the higher spatial-resolution sharpening band is not unique, advantage can be taken of the hyper-sharpening protocol. First, the 20 m bands of Sentinel-2 are hyper-sharpened to 10 m by means of the four 10 m VNIR bands of the same instrument. Then, the 10 m hyper-sharpened bands of Sentinel-2 are used to sharpen the 30 m bands of PRISMA to 10 m as well, still according to the hyper-sharpening protocol. Eventually, the 10 m hyper-sharpened bands are pansharpened to 5 m by means of the PAN image, if available. Results show that for PRISMA the nested hyper-sharpening followed by pansharpening is better than plain HS pansharpening, both visually and according to full-scale indexes of spectral and spatial consistency. For EnMAP data, in which the PAN image is missing, the improvement of the fused data with respect to the original EnMAP and Sentinel-2 data has been quantified by means of two novel statistical indexes capable of measuring the spatial and intersensor consistencies between sharpened and sharpening data.
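A compact sketch of the nested m:n hyper-sharpening flow described above: each low-resolution band is paired with its most-correlated sharpening band, and detail is injected through a high-pass ratio. The correlation-based assignment, the injection rule, and all shapes are plausible stand-ins, not the paper's exact protocol:

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hyper_sharpen(lr_bands, hr_bands, ratio, eps=1e-8):
    """m:n fusion: lr_bands (m, h, w), hr_bands (n, h*ratio, w*ratio)."""
    out = []
    for lr in lr_bands:
        up = zoom(lr, ratio, order=1)
        # assign the most-correlated high-resolution sharpening band
        idx = np.argmax([np.corrcoef(up.ravel(), hr.ravel())[0, 1]
                         for hr in hr_bands])
        hr = hr_bands[idx]
        low = uniform_filter(hr, size=ratio)     # low-pass of sharpening band
        out.append(up * (hr / (low + eps)))      # ratio-based detail injection
    return np.stack(out)

# Nested use: Sentinel-2 20 m -> 10 m with its own 10 m VNIR bands, then
# the 30 m HS bands -> 10 m with all hyper-sharpened Sentinel-2 bands.
s2_10 = np.random.rand(4, 300, 300)
s2_20 = np.random.rand(6, 150, 150)
hs_30 = np.random.rand(20, 100, 100)     # a few HS bands for illustration
s2_all_10 = np.concatenate([s2_10, hyper_sharpen(s2_20, s2_10, ratio=2)])
hs_10 = hyper_sharpen(hs_30, s2_all_10, ratio=3)   # (20, 300, 300)
```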
    ... Super-resolution (SR) is the process of restoring a high-resolution (HR) image from a given low-resolution (LR) image. With the development of remote sensing image application technology, remote sensing images are extensively used in hyperspectral applications [1][2][3][4][5][6][7][8], object detection [9,10], change detection [11][12][13], and other fields. However, image SR is an ill-posed problem because a single LR image may be produced by degrading several different HR images. ...
    Article
    Remote sensing images are essential in many fields, such as land cover classification and building extraction. The huge difference between directly acquired remote sensing images and the actual scene, due to the complex degradation process and hardware limitations, seriously affects the performance achieved by the same classification or segmentation model. Therefore, using super-resolution (SR) algorithms to improve image quality is an effective way to achieve better results. However, current SR methods focus only on the similarity of pixel values between SR and high-resolution (HR) images without considering perceptual similarities, which usually leads to over-smoothed and blurred edge details. Moreover, little attention has been paid to human visual habits and machine vision applications for remote sensing images. In this work, we propose the Context-aware Edge-Enhanced Generative Adversarial Network (CEEGAN) SR framework to reconstruct visually pleasing images that can be practically applied in actual scenarios. In the generator of CEEGAN, we build an Edge Feature Enhanced Module (EFEM) to enhance edges by combining edge features with context information. An Edge Restoration Block (ERB) is designed to fuse the multi-scale edge features enhanced by EFEM and reconstruct a refined edge map. Furthermore, we design an Edge Loss function to constrain the similarity of the generated SR and HR images in the edge domain. Experimental results show that our proposed method obtains SR images with better reconstruction performance. Meanwhile, CEEGAN achieves the best results on classification and semantic segmentation datasets for machine vision applications.
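One plausible form of such an edge-domain constraint is an L1 distance between gradient-magnitude maps of the SR and HR images. A sketch assuming a Sobel operator and L1 norm; CEEGAN's exact Edge Loss may differ:

```python
import torch
import torch.nn.functional as F

def edge_loss(sr, hr):
    """sr, hr: (B, C, H, W); L1 distance between Sobel edge maps."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    def edges(img):
        g = img.mean(dim=1, keepdim=True)                 # to grayscale
        gx = F.conv2d(g, kx.view(1, 1, 3, 3), padding=1)
        gy = F.conv2d(g, ky.view(1, 1, 3, 3), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    return F.l1_loss(edges(sr), edges(hr))

loss = edge_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```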
    ... As machine learning and deep learning strategies advance, researchers are increasingly concentrating on extracting deep-level features with AI methods for quality prediction [29][30][31][32] and other remote sensing tasks [33][34][35][36][37][38][39][40][41][42][43]. Researchers often predict quality scores by regression, analyzing and comparing the features of PAN, LR-MS, and HR-MS imagery [30,[44][45][46][47][48]. Such deep learning methods have proven effective. ...
    Article
      A full-resolution quality evaluation model for pansharpened images is significant for remote sensing applications, yet it faces the challenge of lacking a reference image, in contrast to the reduced-resolution approach. To predict image quality accurately, it is necessary to consider the distortion introduced during the pansharpening process. Based on the observation that the quality of pairwise images can be ranked more easily, we propose a Rank Learning Based Full-Resolution Quality Evaluation Method for Pansharpened Images. Our approach begins by synthesizing ranked distortion images in the spatial and spectral domains. Then, we develop a pansharpening distortion-perceiving model. This model employs spatial and spectral Siamese networks to perceive distortions and applies a pair-wise learning strategy to the ranked images. Consequently, we establish a Distortion-Guided Full-Resolution Quality Evaluation framework for pansharpening. This framework integrates the spatial and spectral distortion-perceiving networks and is enhanced with a Dimension Alignment module and a Discrepancy Representation module, enabling effective distortion extraction among High-Resolution Multispectral, Panchromatic, and Low-Resolution Multispectral images. We conducted a series of experiments on a large-scale public pansharpening database. The experimental results demonstrate the effectiveness of our proposed approach.
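The pair-wise learning strategy mentioned above is commonly implemented with a shared-weight (Siamese) scorer trained under a margin ranking loss. A minimal sketch under those assumptions; the tiny CNN and the margin value are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

scorer = nn.Sequential(                      # shared (Siamese) quality branch
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
rank_loss = nn.MarginRankingLoss(margin=0.5)

img_better = torch.rand(4, 3, 64, 64)        # known less-distorted images
img_worse = torch.rand(4, 3, 64, 64)         # known more-distorted images
s1, s2 = scorer(img_better), scorer(img_worse)
target = torch.ones_like(s1)                 # s1 should rank above s2
loss = rank_loss(s1, s2, target)
loss.backward()
```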
      Article
      In this letter, to better complement the advantages of features at different levels and improve the feature extraction ability of the network, a novel multi-level feature interaction transformer network (MFITN) is proposed for pansharpening, aiming to fuse multispectral (MS) and panchromatic (PAN) images. In MFITN, a multi-level feature interaction transformer encoding module is designed to extract and correct global multi-level features by considering the modality difference between the source images. These features are then fused using the proposed multi-level feature mixing (MFM) operation, which enables features to interact and yields richer information. Furthermore, the global features are fed into a CNN-based local decoding module to better reconstruct high-spatial-resolution multispectral (HRMS) images. Additionally, based on the spatial consistency between MS and PAN images, a band compression loss is defined to improve the fidelity of the fused images. Numerous simulated and real experiments demonstrate that the proposed method outperforms state-of-the-art methods. Specifically, it improves the SAM metric by 7.89% and 6.41% over the second-best approach on Pléiades and WorldView-3, respectively.
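One hedged reading of such a band compression loss: compress the fused HRMS bands into a single PAN-like image and penalize its distance to the real PAN image, exploiting the stated MS/PAN spatial consistency. The uniform band weighting and L1 norm are assumptions:

```python
import torch
import torch.nn.functional as F

def band_compression_loss(hrms, pan):
    """hrms: (B, C, H, W) fused image; pan: (B, 1, H, W) panchromatic."""
    synthetic_pan = hrms.mean(dim=1, keepdim=True)  # compress the bands
    return F.l1_loss(synthetic_pan, pan)

loss = band_compression_loss(torch.rand(2, 4, 64, 64), torch.rand(2, 1, 64, 64))
```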
      Article
      Pansharpening involves the fusion of panchromatic (PAN) and multispectral (MS) images to obtain a high-resolution image with enhanced spectral and spatial information. Assessing the quality of the resulting fused image poses a challenge due to the absence of a high-resolution reference image. Numerous methods have been proposed to address this, from assessments at reduced resolution to full-resolution evaluations. Many existing approaches are pixel-based, where quality metrics are applied and averaged over individual pixels. In this article, we introduce a novel object-based method for assessing the quality of pansharpened images at full resolution. Object-based quality assessment methods reflect how different areas of the fused image respond to the fusion process. Our approach revolves around extracting objects from the given image and evaluating the extracted objects. By doing so, the distinct responses of different objects within the fused image to the fusion process are captured. The proposed method leverages a unique object extraction technique known as segmentation by nearest neighbor (SNN) to extract the objects of the MS image. This method extracts objects based on the image's characteristics, without any parameter tuning. The extracted objects are then mapped onto both the PAN and fused images. The proposed spectral index measures the spectral homogeneity of the fused image's objects, and the proposed spatial index measures the spatial content injected from the PAN image into the fused image's objects. Experimental results underscore the robustness and reliability of the proposed method. Additionally, by visualizing distortion values on object-maps, we gain insights into fusion quality across diverse areas within the scene.
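An illustrative sketch of the object-based evaluation loop described above: objects come from a segmentation label map, and per-object spectral and spatial scores are computed for inspection or averaging. The index formulas below (mean spectral angle, PAN-gradient correlation) are stand-ins for the article's proposed indexes, and an arbitrary label map replaces SNN segmentation:

```python
import numpy as np

def per_object_scores(labels, ms_up, fused, pan, eps=1e-8):
    """labels: (H, W) object ids; ms_up, fused: (C, H, W); pan: (H, W)."""
    scores = {}
    for obj in np.unique(labels):
        m = labels == obj
        a, b = ms_up[:, m], fused[:, m]           # (C, n_pix) object spectra
        cos = (a * b).sum(0) / (np.linalg.norm(a, axis=0)
                                * np.linalg.norm(b, axis=0) + eps)
        spectral = np.degrees(np.arccos(np.clip(cos, -1, 1))).mean()
        gp = np.gradient(pan)[0][m]               # PAN detail in the object
        gf = np.gradient(fused.mean(0))[0][m]     # fused detail in the object
        scores[int(obj)] = (spectral, np.corrcoef(gp, gf)[0, 1])
    return scores

labels = np.random.randint(0, 4, (64, 64))
scores = per_object_scores(labels, np.random.rand(4, 64, 64),
                           np.random.rand(4, 64, 64), np.random.rand(64, 64))
```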