Figure - available from: Journal of Real-Time Image Processing
Regular triangular terrain mesh model: left, S is part of the triangular mesh; right, the projection of S

Source publication
Article
Full-text available
In the geographic information field, triangular mesh models are often used to describe terrain, where the normal vector to the surface at each node of the mesh plays an important role in reconstruction and display. However, the normal vectors at the nodes of a triangular mesh cannot be given directly; instead, they must be computed using known...
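The abstract does not spell out the authors' computation, but the standard way to obtain node normals is to average the normals of the triangles incident to each node. A minimal NumPy sketch of area-weighted vertex-normal averaging (the function name and layout are my own, not taken from the paper):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted per-vertex normals for a triangular mesh.

    vertices: (V, 3) float array of node coordinates.
    faces:    (F, 3) int array of vertex indices per triangle.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Cross product of two edges: direction is the face normal and the
    # magnitude is twice the triangle area, so area weighting is built in.
    face_n = np.cross(v1 - v0, v2 - v0)
    normals = np.zeros_like(vertices, dtype=float)
    for i in range(3):  # scatter each face normal onto its three nodes
        np.add.at(normals, faces[:, i], face_n)
    length = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(length, 1e-12)
```

Keeping the raw cross product (rather than normalizing per face) makes larger triangles contribute more to the node normal, which is the usual choice for terrain meshes.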

Similar publications

Article
Full-text available
Uncertainty about global change requires alternatives for quantifying the availability of water resources and their dynamics. A methodology based on satellite imagery and surface-elevation models for estimating surface water volumes would be useful for monitoring flood events and reservoir storage. In this study, reservoirs with associated digital...
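The study's full methodology combines satellite water masks with elevation models, but the core volume computation can be sketched as integrating water depth over the submerged cells of a digital elevation model (a simplified illustration; the function and its arguments are assumptions, not taken from the paper):

```python
import numpy as np

def water_volume(dem, water_level, cell_area):
    """Water volume stored over a DEM for a given water-surface level.

    dem:         (H, W) array of ground elevations in metres.
    water_level: scalar water-surface elevation in metres.
    cell_area:   area of one DEM cell in square metres.
    """
    depth = np.maximum(water_level - dem, 0.0)  # submerged cells only
    return depth.sum() * cell_area              # cubic metres
```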

Citations

... Residual blocks with channel attention were adopted in RCAN [14] to achieve better performance, at a depth of more than 400 layers. However, high execution speed is also required in many fields, such as pedestrian detection [15], terrain description [16], and image segmentation [17]. Although better performance can be obtained by deepening the network regardless of costs such as model size and computational resources, such high-cost networks are not suitable for resource-constrained applications. ...
Article
Full-text available
In recent years, convolutional neural network-based methods have achieved remarkable performance on the single-image super-resolution task. However, the huge computational complexity and memory consumption of these methods limit their application on resource-constrained devices. In this paper, we propose a lightweight network named the one-shot aggregation network (OAN) to address this problem for image super-resolution. Specifically, to take advantage of diversified features with multiple receptive fields while avoiding the inefficiency of dense aggregation (which forwards all previous feature maps to every subsequent layer), we propose a one-shot aggregation block as the cascaded block: it aggregates the intermediate features with multiple receptive fields only once, at the last feature map. Experimental results on benchmark datasets demonstrate that our proposed OAN outperforms state-of-the-art SR methods in terms of reconstruction quality, number of parameters, and multiply-accumulate operations.
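The one-shot aggregation idea can be illustrated with a short PyTorch sketch: intermediate features are collected and concatenated a single time at the block output, rather than at every layer as in dense aggregation. This is a minimal reading of the abstract, not the authors' OAN architecture; all names and layer choices here are assumptions:

```python
import torch
import torch.nn as nn

class OneShotAggregationBlock(nn.Module):
    """Convolution stack whose intermediate feature maps are concatenated
    once at the output, instead of densely at every layer."""

    def __init__(self, channels: int, depth: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(depth)
        )
        # A 1x1 convolution fuses the one-shot concatenation back down.
        self.fuse = nn.Conv2d(channels * (depth + 1), channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(feats[-1]))  # receptive field grows per layer
        return self.fuse(torch.cat(feats, dim=1))  # aggregate only once
```

Compared with dense aggregation, each layer here receives only its predecessor's output, so the number of channel-wise concatenations (and the fuse layer's input width) stays linear in depth.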
... bright) background, as indicated in Fig. 1. Eigenvalue computing is widely used in several image analysis methods, such as those dedicated to junction detection [6], circle and spherical shape detection [7], and geometric active contour models for object outlining [8]. Moreover, eigenvalue computing achieves higher accuracy when employed in several applications, such as arterial enhancement in magnetic resonance angiography [9], membrane segmentation in tomography images [10], shape detection in computed tomography scan imaging [11], vascular structure segmentation and enhancement [2,9,12-14], and pathological lesion detection [15,16]. ...
... Thereupon, the predefined function for eigen-processing included in the Open Computer Vision (OpenCV) library [28] is time-consuming, requiring 4 s for a single image from the High-Resolution Fundus (HRF) database, as shown later in Section 4.2.3. Likewise, several eigenvalue-based methods suffer from high execution times [17,18,23]. As an example, cerebral microbleeds are detected in approximately 17 minutes when executed on a 2.4 GHz ...
Article
Full-text available
Several leading-edge applications, such as pathology detection, biometric identification, and face recognition, are based mainly on blob and line detection. To address this problem, eigenvalue computing has been commonly employed owing to its accuracy and robustness. However, eigenvalue computing requires intensive computational processing, intensive memory data access, and data overlapping, which result in long execution times. To overcome these limitations, we propose in this paper a new parallel strategy for implementing eigenvalue computing on a graphics processing unit (GPU). Our contributions are (1) optimizing instruction scheduling to reduce computation time, (2) efficiently partitioning the processing into blocks to increase the occupancy of streaming multiprocessors, (3) providing efficient input-data splitting on shared memory to benefit from its lower access time, and (4) proposing a new shared-memory data management scheme that avoids memory access conflicts and reduces memory bank accesses. Experimental results show that our proposed GPU parallel strategy for eigenvalue computing achieves speedups of 27x over a multithreaded implementation, 16x over the predefined function in the OpenCV library, and 8x over the predefined function in the cuBLAS library, all executed on a quad-core multi-CPU platform. Next, our parallel strategy is evaluated through an eigenvalue-based method for retinal thick-vessel segmentation, which is essential for detecting ocular pathologies. Eigenvalue computing is executed in 0.017 s on images from the Structured Analysis of the Retina database. Accordingly, we achieve real-time thick retinal vessel segmentation with an average execution time of about 0.039 s.
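The per-pixel operation being parallelized is the eigen-decomposition of a small symmetric matrix; for 2-D vessel detection this is typically the 2x2 image Hessian, whose eigenvalues have a closed form. Below is a CPU-side NumPy/SciPy reference for that computation, a sketch of the underlying math rather than the paper's CUDA kernels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, sigma=2.0):
    """Closed-form eigenvalues of the 2x2 image Hessian at every pixel.

    Returns (lam1, lam2) with |lam1| >= |lam2|; a large |lam1| paired with
    a small |lam2| indicates a line- or vessel-like structure.
    """
    # Second-order Gaussian derivatives (axis 0 = rows, axis 1 = columns).
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of [[hxx, hxy], [hxy, hyy]] via the quadratic formula.
    half_trace = (hxx + hyy) / 2.0
    root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    lam_a, lam_b = half_trace + root, half_trace - root
    swap = np.abs(lam_a) < np.abs(lam_b)
    return np.where(swap, lam_b, lam_a), np.where(swap, lam_a, lam_b)
```

Because each pixel's eigenvalues depend only on its own Hessian entries, the computation is embarrassingly parallel, which is what makes the GPU mapping described in the paper attractive.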
... However, the runtime efficiency of our algorithm is low. Future work could therefore implement the proposed method on a parallel device such as a GPU (Engel et al. 2015; Wu et al. 2018) to accelerate real-time image segmentation, or introduce the ranking-metric mechanism (Zou et al. 2016) for candidate pixels. ...
Article
Full-text available
Medical image segmentation has been widely used in clinical practice. It is an important basis for medical experts to diagnose disease. However, weak edges and intensity inhomogeneity (Niu et al. in 2nd IEEE international conference on computational intelligence and applications (ICCIA). https://doi.org/10.1109/ciapp.2017.816722, 2017) in medical images may hinder the accuracy of any traditional active contour segmentation method. In this paper, we propose an improved active contour method that embeds a boundary-constraint factor and adds local image information to the energy function of the Chan–Vese model. The graph cuts method is then used to optimize the new energy function. The method has several salient features: (1) more accurate boundaries are obtained by embedding the constraint factor and adding local image information; (2) the energy function does not easily fall into a local optimum; and (3) only one parameter needs to be adjusted. Evaluation results on magnetic resonance imaging and computed tomography scans, blood vessel images, and mammogram masses show that the proposed method leads to more accurate boundary detection than state-of-the-art edge-based and region-based active contour segmentation methods.
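The Chan–Vese data term that the paper builds on can be illustrated with a pure-NumPy region-competition step: each pixel is assigned to whichever region's mean intensity it matches better. This is a bare-bones sketch of the classical model only; the paper's boundary-constraint factor, local information term, and graph cuts optimization are not reproduced here:

```python
import numpy as np

def chan_vese_step(image, mask):
    """One region-competition update of a Chan-Vese-style two-phase model.

    image: (H, W) float array; mask: boolean foreground estimate that must
    contain pixels from both regions. Each pixel joins the region whose
    mean intensity it matches best; iterate until the mask stops changing.
    """
    c1 = image[mask].mean()   # mean intensity inside the contour
    c2 = image[~mask].mean()  # mean intensity outside the contour
    # Pointwise data terms of the Chan-Vese energy.
    return (image - c1) ** 2 < (image - c2) ** 2

# Example initialization: threshold at the global mean, then iterate.
# mask = image > image.mean()
# for _ in range(20):
#     mask = chan_vese_step(image, mask)
```

On images with weak edges or intensity inhomogeneity this global two-means competition fails, which is exactly the gap the paper's local-information and boundary-constraint terms are meant to close.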
... These characteristics have been exploited in many deinterlacing methods of varying difficulty and complexity. Recently, some researchers have developed graphics processing unit (GPU)-based interpolation approaches for deinterlacing [25,26]. In general, deinterlacing methods can be categorized into three classes: intra-field, motion-adaptive, and motion compensation-based. ...
Article
Full-text available
In this paper, we propose an efficient deinterlacing method for HDTV that preserves image structures, edges, and details. In the human visual system, the eyes are more sensitive to high-frequency information, such as edge details, than to low-frequency information, such as the image background. Therefore, simply averaging low-pass filter results is not effective for image enhancement. The proposed method is a weighted filtering approach that generates a half-pixel 9-by-9 edge-based line average window. We also propose pixel-resemblance- and pixel-expansion-based fuzzy weights, which are assigned using a triangular membership function. Compared with conventional format conversion methods, the proposed method outperforms all benchmarks in terms of both objective and subjective quality.
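At the core of edge-based line averaging is a directional choice: for each missing pixel, the interpolation direction with the smallest luminance difference between the lines above and below is assumed to follow the local edge. The classic 3-tap version is sketched below; the paper's 9-by-9 fuzzy-weighted window generalizes this idea, so the code here is only the textbook baseline:

```python
import numpy as np

def ela_interpolate(above, below):
    """Edge-based line average (ELA) for one missing scan line.

    above, below: 1-D float arrays, the known lines directly above and
    below the missing one. For each pixel, the direction (45-degree,
    vertical, or 135-degree) with the smallest luminance difference is
    assumed to follow the local edge, and its two samples are averaged.
    """
    n = len(above)
    out = np.empty(n)
    for x in range(n):
        xl, xr = max(x - 1, 0), min(x + 1, n - 1)
        candidates = [
            (abs(above[xl] - below[xr]), (above[xl] + below[xr]) / 2),
            (abs(above[x] - below[x]), (above[x] + below[x]) / 2),
            (abs(above[xr] - below[xl]), (above[xr] + below[xl]) / 2),
        ]
        out[x] = min(candidates)[1]  # average along the best direction
    return out
```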
... been used in many areas of scientific computing [12-18] for their low cost and efficient power usage. Mussi and Daolio used the CUDA computing platform to optimize particle swarms [12]. ...
Article
With the growth of the consumer electronics industry, it is vital to develop algorithms for ultra-high-definition products that are more effective and have lower time complexity. Image interpolation based on the autoregressive model has achieved significant improvements over traditional algorithms in image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in these autoregressive algorithms. Because of the high time cost, autoregressive image-interpolation algorithms are rarely used in industry for actual production. In this study, to meet the requirements of real-time reconstruction, we use diverse CUDA optimization strategies to make full use of the GPU, including shared memory, registers, and multi-GPU optimization. To make the algorithm more suitable for GPU parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking the I/O transfer time into account, our algorithm achieves speedups of 147.3x for a Lena image and 174.8x for a 720p video over the original single-threaded CPU code in C compiled with -O2 optimization.
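The expensive inner step of autoregressive interpolation is fitting local AR weights by least squares in every training window; this is the matrix operation the authors restructure for the GPU. A CPU reference for one such fit follows (the window layout and neighbor choice are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def ar_weights(window):
    """Least-squares autoregressive weights from one training window.

    window: (H, W) array of known low-resolution pixels. Every interior
    pixel is regressed on its four diagonal neighbours; the fitted weights
    are then reused to predict the missing high-resolution pixel.
    """
    rows, targets = [], []
    h, w = window.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            rows.append([window[y - 1, x - 1], window[y - 1, x + 1],
                         window[y + 1, x - 1], window[y + 1, x + 1]])
            targets.append(window[y, x])
    A, b = np.asarray(rows, float), np.asarray(targets, float)
    # Solve the overdetermined system A w = b in the least-squares sense.
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Because this small solve is repeated independently for every output pixel's window, the workload parallelizes naturally across GPU threads, which is where the reported speedups come from.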
Article
Full-text available
In this paper, a new fuzzy-rule-based impulse noise denoising method that removes unwanted artifacts and reconstructs original images is proposed. The proposed method is based on a fuzzy rule selection approach comprising four sub-methods, and it can effectively remove noise artifacts caused by various levels of impulse noise. Simulation results show that the presented method outperforms conventional methods. The proposed method can be directly applied to various consumer electronics displays.
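The detect-then-filter structure common to selective impulse denoisers can be sketched in a few lines: suspected impulses are flagged first, then repaired from their uncorrupted neighbors while clean pixels pass through. This is a crude stand-in for the paper's fuzzy rule selection; the thresholds and function names are assumptions:

```python
import numpy as np

def selective_impulse_denoise(img, low=10, high=245):
    """Detect-then-filter impulse denoising.

    Pixels near the intensity extremes are flagged as impulses and replaced
    by the median of the uncorrupted pixels in their 3x3 neighbourhood;
    clean pixels pass through untouched.
    """
    noisy = (img <= low) | (img >= high)  # salt-and-pepper suspects
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = img[y0:y1, x0:x1]
        good = patch[~noisy[y0:y1, x0:x1]]  # uncorrupted neighbours only
        if good.size:
            out[y, x] = np.median(good)
    return out
```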
Conference Paper
Full-text available
The Multi-mission Instrument Processing Lab (MIPL) is responsible for developing much of the ground support software for the Mars missions. The MIPL pipeline is used to generate several products from a one-megapixel image within a 30-minute time constraint. In future missions, this constraint will shrink to five minutes for 20-megapixel images, requiring a minimum 120x speed-up over current operational hardware and software. Moreover, any changes to the current software must preserve the source code's maintainability and portability for future missions. Therefore, the surface normal generation software has been implemented on a graphics processing unit (GPU) using the NVIDIA CUDA Toolkit and the Hemi library to keep code complexity to a minimum. Several changes have been made to the Hemi library to enable additional optimizations of the GPU code. In addition, several challenges in developing a parallelized GPU implementation of the surface normal generation algorithm are explored, and both tested and prospective solutions to these problems are described.
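The paper's abstract does not give its algorithm, but surface normal generation from an organized range image is a classic per-pixel kernel: cross the horizontal and vertical point differences and normalize. A CPU-side NumPy sketch of that pattern (the MIPL pipeline's actual inputs and conventions may differ):

```python
import numpy as np

def surface_normals(xyz):
    """Per-pixel surface normals from an organised XYZ range image.

    xyz: (H, W, 3) float array of 3-D points, one per pixel. The normal is
    the cross product of the horizontal and vertical central differences,
    an embarrassingly parallel per-pixel job that maps well onto a GPU.
    """
    dx = xyz[1:-1, 2:] - xyz[1:-1, :-2]  # central difference along columns
    dy = xyz[2:, 1:-1] - xyz[:-2, 1:-1]  # central difference along rows
    n = np.cross(dx, dy)
    n = n / np.maximum(np.linalg.norm(n, axis=2, keepdims=True), 1e-12)
    out = np.zeros_like(xyz, dtype=float)
    out[1:-1, 1:-1] = n  # border pixels are left as zero vectors
    return out
```

Since every output pixel depends only on a fixed local neighborhood, a CUDA port assigns one thread per pixel, which is the kind of mapping the paper's Hemi-based implementation targets.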