Hongliang Guan's research while affiliated with Capital Normal University and other places

What is this page?


This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

ResearchGate generated this page automatically to record this author's body of work. We create such pages to advance our goal of building and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (16)


A depth map fusion algorithm with improved efficiency considering pixel region prediction
  • Article

August 2023 · 17 Reads · 1 Citation

ISPRS Journal of Photogrammetry and Remote Sensing

Xiaoli Liu · Hongliang Guan · [...] · Wenhu Qv

[Figure and table previews: Figure 2, image feature point distribution diagram; Figure 4, improving partial trace loss; classification of dynamic properties of objects in the scene; dynamic object determination strategy table; image sequence of the data set]
A Dynamic Scene Vision SLAM Method Incorporating Object Detection and Object Characterization
  • Article
  • Full-text available

February 2023 · 116 Reads · 9 Citations

Sustainability

Simultaneous localization and mapping (SLAM) based on RGB-D cameras has been widely used for robot localization and navigation in unknown environments. Most current SLAM methods are constrained by static-environment assumptions and perform poorly in real-world dynamic scenarios. To improve the robustness and performance of SLAM systems in dynamic environments, this paper proposes a new RGB-D SLAM method for indoor dynamic scenes based on object detection. The method builds on the ORB-SLAM3 framework. First, we designed an object detection module based on YOLO v5 and used it to improve the tracking module and the localization accuracy of ORB-SLAM3 in dynamic environments. A dense point cloud map building module was also included; it excludes dynamic objects from the environment map to create a static point cloud map with high readability and reusability. Full comparison experiments with the original ORB-SLAM3 and two representative semantic SLAM methods on the TUM RGB-D dataset show that the proposed method runs at more than 30 fps, that its localization accuracy improves on ORB-SLAM3 to varying degrees in all four image sequences, and that the absolute trajectory accuracy improves by up to 91.10%. Its localization accuracy is comparable to that of DS-SLAM, DynaSLAM, and two recent detection-based SLAM algorithms, but it runs faster. The proposed RGB-D SLAM method, which combines a state-of-the-art object detection method with a visual SLAM framework, outperforms the other methods in terms of localization accuracy and map construction in dynamic indoor environments and provides a useful reference for navigation, localization, and 3D reconstruction.
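
The core idea of detection-assisted dynamic SLAM (rejecting features that fall on detected moving objects before tracking) can be illustrated with a short, hedged sketch. This is not the authors' ORB-SLAM3 integration; it only shows the filtering step, assuming bounding boxes and class names come from an external detector such as YOLO v5. The class list and helper name here are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical detections: (x1, y1, x2, y2, class_name) boxes from any object detector.
DYNAMIC_CLASSES = {"person", "chair"}  # classes treated as potentially dynamic (illustrative)

def filter_dynamic_keypoints(gray, detections):
    """Drop ORB keypoints that fall inside bounding boxes of dynamic objects."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(gray, None)
    boxes = [(x1, y1, x2, y2) for (x1, y1, x2, y2, cls) in detections
             if cls in DYNAMIC_CLASSES]
    static_kps = []
    for kp in keypoints:
        x, y = kp.pt
        inside = any(x1 <= x <= x2 and y1 <= y <= y2 for (x1, y1, x2, y2) in boxes)
        if not inside:
            static_kps.append(kp)
    # Compute descriptors only for the retained (presumed static) keypoints.
    static_kps, descriptors = orb.compute(gray, static_kps)
    return static_kps, descriptors

if __name__ == "__main__":
    img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in for an RGB-D frame's grayscale channel
    dets = [(100, 100, 200, 300, "person")]
    kps, desc = filter_dynamic_keypoints(img, dets)
    print(f"{len(kps)} static keypoints retained")
```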


An Efficient and Robust Hybrid SfM Method for Large-Scale Scenes

January 2023 · 88 Reads · 5 Citations

Remote Sensing

The structure from motion (SfM) method has achieved great success in 3D sparse reconstruction, but it still faces serious challenges in large-scale scenes. Existing hybrid SfM methods usually do not fully consider the compactness between images and the connectivity between subclusters, resulting in a loose spatial distribution of images within subclusters, unbalanced connectivity between subclusters, and poor robustness in the merging stage. In this paper, an efficient and robust hybrid SfM method is proposed. First, a multifactor joint scene-partition measure and a pre-assignment balanced image expansion algorithm among subclusters are constructed, which effectively resolve the loose spatial distribution of images within subclusters and improve the degree of connection among subclusters. Second, the global SfM method GlobalACSfM is used to complete the local sparse reconstruction of the subclusters under a parallel cluster framework. Then, a decentralized dynamic merging rule that considers the connectivity of subclusters is proposed to achieve robust merging among subclusters. Finally, public datasets and oblique photography datasets are used for experimental verification. The results show that the proposed method outperforms state-of-the-art methods in terms of accuracy and robustness and shows good feasibility and practical potential.
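
As a rough illustration of the divide-and-conquer idea behind hybrid SfM (not the paper's multifactor partition measure or its balanced expansion algorithm), the sketch below partitions a toy image match graph into subclusters with an off-the-shelf community detection routine; each subcluster would then be reconstructed independently and merged. The edge weights standing in for feature-match counts are made up.

```python
import networkx as nx
from networkx.algorithms import community

# Toy match graph: nodes are image IDs, edge weights are feature-match counts (fabricated here).
matches = {
    ("img0", "img1"): 250, ("img1", "img2"): 180, ("img0", "img2"): 90,
    ("img3", "img4"): 300, ("img4", "img5"): 220, ("img3", "img5"): 110,
    ("img2", "img3"): 15,  # weak link between the two natural clusters
}

G = nx.Graph()
for (a, b), w in matches.items():
    G.add_edge(a, b, weight=w)

# Partition the match graph into subclusters; each subcluster would be
# reconstructed locally before a merging stage stitches them together.
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(clusters):
    print(f"subcluster {i}: {sorted(c)}")
```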


[Figure and table previews: Figure 1, refocusing principle schematic; number of matching pairs of image feature points]

A Light Field Full-Focus Image Feature Point Matching Method with an Improved ORB Algorithm

December 2022 · 34 Reads · 2 Citations

Sensors

Most traditional image feature point extraction and matching methods are based on a series of light properties of images, and these properties easily conflict with the distinguishability of image features. Traditional imaging focuses only on a fixed depth in the target scene, so subjects at other depths are often blurred. As a result, traditional feature point extraction and matching methods suffer from low accuracy and poor robustness. In this paper, a light field camera is therefore used as the sensor to acquire image data, and a full-focus image is generated with the help of the rich depth information inherent in the raw light field image. The traditional ORB feature point extraction and matching algorithm is enhanced with the goal of increasing the number and accuracy of the feature points extracted from light field full-focus images. The results show that the improved ORB algorithm extracts most of the features in the target scene, covers the edge regions of the image to a greater extent, and produces feature points that are evenly distributed over the full-focus image. Moreover, the extracted feature points do not cluster heavily in any single part of the image, eliminating the aggregation phenomenon that occurs with the traditional ORB algorithm.
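
Even feature distribution of the kind described above is commonly obtained by detecting features per image cell rather than globally. The sketch below shows that generic grid-based variant of ORB extraction with OpenCV, not the paper's specific improvement; a synthetic image stands in for a light field full-focus image, and the grid size and per-cell budget are arbitrary.

```python
import cv2
import numpy as np

def grid_orb(gray, rows=4, cols=4, per_cell=50):
    """Detect ORB keypoints cell by cell so features spread evenly over the image."""
    h, w = gray.shape
    detector = cv2.ORB_create(nfeatures=per_cell)
    keypoints = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            for kp in detector.detect(gray[y0:y1, x0:x1], None):
                # Shift cell-local coordinates back into full-image coordinates.
                keypoints.append(cv2.KeyPoint(kp.pt[0] + x0, kp.pt[1] + y0, kp.size))
    # Descriptors are computed once on the full image for all retained keypoints.
    keypoints, descriptors = cv2.ORB_create().compute(gray, keypoints)
    return keypoints, descriptors

if __name__ == "__main__":
    img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in for a full-focus image
    kps, desc = grid_orb(img)
    print(len(kps), "keypoints")
```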


[Table previews: number of pixels in some foreground images under different focus calculation methods; foreground score values of the focus images]

Saliency Detection of Light Field Images by Fusing Focus Degree and GrabCut

September 2022 · 16 Reads · 3 Citations

Sensors

In light field image saliency detection, the way cues are computed introduces redundant cues, which inevitably leads to inaccurate boundary segmentation and block-effect artifacts in the detection results. To tackle this issue, we propose a salient object detection (SOD) method for light field images that fuses focus degree with GrabCut. The method improves spatial-domain light field focus computation by applying a secondary blurring step to the focus images, effectively suppressing the focus information of out-of-focus areas in the different focus images. To avoid the redundant focus cues generated by multiple foreground images, we use the single best foreground image to generate the focus cue. In addition, to fuse the various light field cues in complex scenes, the GrabCut algorithm is combined with the focus cue to guide the generation of color cues, which enables automatic saliency segmentation of the image foreground. Extensive experiments on a light field dataset demonstrate that our algorithm effectively separates the salient target area from the background and produces clear outlines of salient objects. Compared with the traditional GrabCut algorithm, the focus degree replaces manual interaction for initializing GrabCut, achieving automatic saliency segmentation.
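
Focus-driven GrabCut initialization can be sketched in a few lines: a per-pixel focus or saliency score seeds the GrabCut mask instead of a user-drawn rectangle. The code below is a generic illustration of mask-initialized GrabCut with OpenCV, not the authors' cue fusion; the focus map, thresholds, and function name are all synthetic assumptions.

```python
import cv2
import numpy as np

def saliency_grabcut(image_bgr, focus_map, fg_thresh=0.7, bg_thresh=0.2):
    """Initialise GrabCut from a per-pixel focus/saliency score instead of a user rectangle."""
    mask = np.full(focus_map.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[focus_map > fg_thresh] = cv2.GC_PR_FGD   # likely salient (in focus)
    mask[focus_map < bg_thresh] = cv2.GC_BGD      # confident background (out of focus)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

if __name__ == "__main__":
    img = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)  # stand-in image
    focus = np.zeros((120, 160), dtype=np.float32)
    focus[40:80, 50:110] = 0.9  # pretend the in-focus (salient) region is known
    print(saliency_grabcut(img, focus).sum(), "foreground pixels")
```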



Optimal Bands Combination Selection for Extracting Garlic Planting Area with Multi-Temporal Sentinel-2 Imagery

August 2021 · 78 Reads · 11 Citations

Sensors

Garlic is one of the main economic crops in China. Accurate and timely extraction of the garlic planting area is critical for adjusting the agricultural planting structure and implementing rural policy. Crop extraction methods based on remote sensing usually use spectral-temporal features, but for garlic extraction most methods simply combine all multi-temporal images; there has been little research on the role of each band in each multi-temporal image or on the optimal band combination. To systematically explore the potential of the multi-temporal method for garlic extraction, we obtained a series of Sentinel-2 images covering the whole garlic growth cycle. The importance of each band in all of these images was ranked with the random forest (RF) method. Based on the importance score of each band, eight multi-temporal combination schemes were designed. The RF classifier was employed to extract the garlic planting area, and the accuracy of the eight schemes was compared. The results show that (1) Scheme VI (the top 39 bands by importance score) achieved the best accuracy of 98.65%, which is 6% higher than the best mono-temporal result (February, wintering period), and (2) the red-edge and shortwave-infrared bands played an essential role in accurate garlic extraction. This study offers guidance on selecting the remote sensing data source, bands, and phenological stages for accurately extracting the garlic planting area, an approach that could be transferred to other sites with larger areas and similar agricultural structures.
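
The band-ranking workflow (random forest importance scores followed by retraining on the top-ranked bands) can be sketched with scikit-learn as below. The data are purely synthetic stand-ins for a stacked Sentinel-2 time series, and the band counts and top-k cut-off are illustrative, not the paper's Scheme VI.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: rows are pixels, columns are bands stacked across acquisition dates.
rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 60          # e.g. several dates x several Sentinel-2 bands (illustrative)
X = rng.normal(size=(n_pixels, n_bands))
y = (X[:, 5] + 0.5 * X[:, 23] + rng.normal(scale=0.5, size=n_pixels) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Rank bands by importance and keep the top-k for a reduced classifier.
order = np.argsort(rf.feature_importances_)[::-1]
top_k = order[:20]
print("most important bands (indices):", top_k[:10])

rf_reduced = RandomForestClassifier(n_estimators=200, random_state=0)
rf_reduced.fit(X[:, top_k], y)
print("training accuracy on selected bands:", rf_reduced.score(X[:, top_k], y))
```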


Land cover change detection with VHR satellite imagery based on multi-scale SLIC-CNN and SCAE features

December 2020 · 321 Reads · 15 Citations

IEEE Access

Change detection with very high resolution (VHR) satellite images is of great application value for evaluating and monitoring land use changes. However, the intrinsic complexity of satellite images introduces additional difficulties for change detection tasks. In this study, a new change detection method is proposed that combines a multi-scale simple linear iterative clustering-convolutional neural network (SLIC-CNN) with stacked convolutional auto-encoder (SCAE) features to improve change detection with VHR satellite images. First, multi-scale SLIC-based image segmentation is performed on the multi-temporal images to generate segment objects while preserving their edge information as much as possible. Second, the convolutional layers of a CNN architecture are used to generate a change map, and an SCAE feature-based classification procedure is then performed to generate "from-to" change information. Finally, a Bayesian information criterion is used to optimize the change detection results. The experiments reveal that the multi-scale SLIC image segmentation algorithm affects the integrity of change regions, the CNN features affect the consistency of change regions, and the SCAE features influence the performance of the support vector machine (SVM) classifiers; features extracted from these architectures enhance the ability to extract information about ground objects. Comparison results also show the superiority of the proposed method over other change detection methods.
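
The multi-scale SLIC segmentation step that opens the pipeline described above can be reproduced generically with scikit-image. The sketch below segments one synthetic image at several superpixel counts; it covers only the segmentation front end, not the CNN/SCAE feature extraction or the Bayesian optimization, and the scales are arbitrary.

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic RGB image standing in for one date of a VHR scene.
image = np.random.rand(256, 256, 3)

# Multi-scale SLIC: segment the same image with several superpixel counts so objects
# are represented at coarse and fine scales before per-object change analysis.
scales = [100, 400, 1600]
segment_maps = {n: slic(image, n_segments=n, compactness=10, start_label=1) for n in scales}

for n, seg in segment_maps.items():
    print(f"{n} requested segments -> {seg.max()} superpixels")
```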


Object-based change detection for VHR remote sensing images based on a Trisiamese-LSTM

August 2020 · 248 Reads · 17 Citations

Change detection has been a research hotspot in remote sensing for decades. However, the increasing use of very high-resolution (VHR) remote sensing images has introduced additional difficulties because of the complex details these images contain. In this paper, we propose a novel deep learning architecture for change detection composed of a Trisiamese subnetwork and a long short-term memory (LSTM) subnetwork; it fully utilizes spatial, spectral, and multiphase information and improves change detection for VHR remote sensing images. Multi-scale simple linear iterative clustering (SLIC)-based image segmentation is first performed on the multitemporal images at different image scales to obtain edge-preserving objects. A Trisiamese subnetwork with six inputs extracts abundant spectral-spatial feature representations; the LSTM subnetwork then uses the extracted image features to effectively analyse the multiphase information in the bitemporal images. The proposed method has the following advantages: 1) it fully utilizes the significant spatial information to improve the detection task; 2) it combines the advantages of convolutional architectures for image feature representation and recurrent neural network (RNN) architectures for sequential data representation, unlike most algorithms, which use only one of the two or merely apply image differencing or stacking operations. Controlled experiments reveal that the multiphase information extracted by the LSTM subnetwork is important for improving the accuracy of the change detection results, and the influence of the Trisiamese subnetwork on change detection is even more significant than that of the LSTM subnetwork. Comparisons with other state-of-the-art change detection methods indicate that in areas with clear surface features and limited interference the proposed method obtains more competitive results, and in regions where the changed objects occur in complex patterns it also performs well.
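
To make the convolution-plus-recurrence idea concrete, the sketch below implements a much smaller two-branch (Siamese) analogue in PyTorch: a shared CNN encodes each date's patch and an LSTM reads the two encodings as a short temporal sequence. This is an assumption-laden toy, not the paper's six-input Trisiamese architecture, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinySiameseLSTMChangeNet(nn.Module):
    """Toy sketch: a shared CNN encodes each date's patch, an LSTM reads the two
    encodings as a 2-step sequence, and a linear head predicts change / no-change."""

    def __init__(self, in_ch=3, feat_dim=64, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, patch_t1, patch_t2):
        f1 = self.encoder(patch_t1).flatten(1)      # (B, feat_dim), shared weights
        f2 = self.encoder(patch_t2).flatten(1)
        seq = torch.stack([f1, f2], dim=1)          # (B, 2, feat_dim) temporal sequence
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])                # logits: change vs. no-change

if __name__ == "__main__":
    net = TinySiameseLSTMChangeNet()
    x1 = torch.randn(4, 3, 32, 32)
    x2 = torch.randn(4, 3, 32, 32)
    print(net(x1, x2).shape)  # torch.Size([4, 2])
```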



Citations (12)


... Zhang et al. [19] augmented the ORB-SLAM2 system with a YOLOv5-based object detection and recognition module, achieving real-time and rapid detection of dynamic features. Guan et al. [20] incorporated a YOLOv5 target detection module into the tracking module of ORB-SLAM3 and generated static environment point cloud maps using RGB-D cameras. Wang et al. [21] proposed YPD-SLAM, a system based on Yolo-FastestV2 target detection and CAPE plane extraction, capable of running on the CPU while maintaining relatively high detection accuracy. ...

Reference:

GY-SLAM: A Dense Semantic SLAM System for Plant Factory Transport Robots
A Dynamic Scene Vision SLAM Method Incorporating Object Detection and Object Characterization

Sustainability

... Real-time manipulation is defined as the ability to alter a view of space through tools and observe the changes and the effects instantly [17], [18], [23], [24]. ...

An Efficient and Robust Hybrid SfM Method for Large-Scale Scenes
Remote Sensing

... The significance of this research lies in the fact that the accuracy and feasibility of three-dimensional measurements directly depend on the precise matching of marker points [4]. If we cannot ensure the accurate matching of marker points, it will have adverse effects on three-dimensional reconstruction and target self-calibration processes, subsequently affecting the overall effectiveness of three-dimensional measurements. Therefore, our research aims to fill this crucial research gap to enhance the feasibility and precision of the entire three-dimensional measurement process. ...

A Light Field Full-Focus Image Feature Point Matching Method with an Improved ORB Algorithm

Sensors

... The wide applicability of salient region extraction techniques makes it a hot topic of research. A saliency detection technique that combines out-of-focus region suppression with grab cut is discussed in [1]. This technique aims to eliminate redundant pixels from the focal stack images from further processing in the saliency extraction pipeline. ...

Saliency Detection of Light Field Images by Fusing Focus Degree and GrabCut

Sensors

... For example, Wu Shuang and others obtained Sentinel-2 remote sensing images covering the entire growth cycle of garlic. They made progress in garlic identification by utilizing different combinations of multiple temporal phases [22]. Additionally, some studies used convolutional neural networks to create garlic land classification models based on the growth stages. ...

Optimal Bands Combination Selection for Extracting Garlic Planting Area with Multi-Temporal Sentinel-2 Imagery

Sensors

... This model incorporates a pyramid of two input images and employs advanced up-sampling techniques to enhance the granularity of change detection, further advancing the state-of-the-art in the field. Jing et al. [35] proposed a novel approach which combined multi-scale Simple Linear Iterative Clustering-Convolutional Neural Network (SLIC-CNN) and Stacked Convolutional Auto-Encoder (SCAE) features. Basavaraju et al. [36] introduced the Urban Change Detection Network (UCDNet) model for urban change detection. ...

Land cover change detection with VHR satellite imagery based on multi-scale SLIC-CNN and SCAE features

IEEE Access

... • Recurrent Neural Networks (RNNs): designed to detect efficiently sequential relationships in data (text, videos or time series). It can be trained to detect changes in images, such as in [16] and [17], since it is performed between successive images. • Convolution Neural Networks (CNNs): powerful NN with robust feature extraction. ...

Object-based change detection for VHR remote sensing images based on a Trisiamese-LSTM
  • Citing Article
  • August 2020

International Journal of Remote Sensing

... It is widely used for image segmentation due to its simplicity of computation and flexibility concerning luminance and contrast [36]. The OTSU method has been shown to have a high accuracy for waterline extraction and has also achieved satisfactory results in other studies [10,37]. It is a simple unsupervised classification that can be well scaled up to larger studies on tidal flats on coastal beaches, as large amounts of training data are not required. ...

Extracting tidal creek features in a heterogeneous background using Sentinel-2 imagery: a case study in the Yellow River Delta, China
  • Citing Article
  • May 2020

International Journal of Remote Sensing

... Different scholars have extensively studied the spectral characteristics of soil in complex large-scale areas [19,20]. Condit et al. conducted a study in which they carefully selected 160 soil samples from 36 states across the United States [21]. They determined the spectral characteristics of the near ultraviolet and visible bands and subsequently established an empirical regression equation to describe the reflectance characteristics of the soil. ...

Soil Organic Matter Estimation Using Hyperspectral Remote Sensing Techniques in a Water-Level-Fluctuating Zone Around Guanting Reservoir, Beijing, China
  • Citing Conference Paper
  • July 2019

... Hebei is part of the BTH region and surrounding areas. It is one of the economic core regions and the most severe air pollution region in China [17]. Emission inventories for OC and EC show that the BTH region and surrounding areas are the regions with the largest carbonaceous aerosol emissions in China [18]. ...

Spatio-Temporal Variation Characteristics of PM2.5 in the Beijing-Tianjin-Hebei Region, China, from 2013 to 2018