Fig. 8. Two-band channel-based color fusion: (a-c) color fusion of (RGB ⊕ LWIR); (d-f) color fusion of (RGB ⊕ NIR). Original images are shown in Figs. 4-6a, b, and c.


Source publication
Chapter
The diffused false-colored image is transformed into the lαβ color space. Each component (l, α or β) of the diffused image is clustered in the lαβ space by individually analyzing its histogram. Specifically, for each intensity component (image) l, α or β, (i) normalize the intensity onto [0,1]; (ii) bin the normalized intensity to a certain number...
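A minimal sketch of the per-channel histogram analysis described above. Steps (i) and (ii) follow the text; since the snippet is truncated, the valley-based cluster assignment and the parameters n_bins and valley_thresh are assumptions:

```python
import numpy as np

def cluster_channel(channel, n_bins=64, valley_thresh=0.01):
    """Cluster one l, alpha, or beta component by analyzing its histogram."""
    # (i) Normalize the intensity onto [0, 1].
    c = channel.astype(np.float64)
    c = (c - c.min()) / (c.max() - c.min() + 1e-12)

    # (ii) Bin the normalized intensity into n_bins bins.
    hist, edges = np.histogram(c, bins=n_bins, range=(0.0, 1.0))
    h = hist / hist.sum()

    # Assumed continuation (the snippet is truncated here): smooth the
    # histogram and treat its low local minima (valleys) as cluster boundaries.
    smooth = np.convolve(h, np.ones(5) / 5.0, mode="same")
    valleys = [i for i in range(1, n_bins - 1)
               if smooth[i] < smooth[i - 1] and smooth[i] < smooth[i + 1]
               and smooth[i] < valley_thresh]
    boundaries = [edges[i + 1] for i in valleys]

    # Label each pixel by the histogram segment it falls into.
    return np.digitize(c, boundaries)
```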

Context in source publication

Context 1
... The three-band input images used in the color fusion process are shown in Figs. 4-7a, b, and c, respectively; the image resolutions are given in the figure captions. The RGB and LWIR images were taken by a FLIR SC620 two-in-one camera, which combines an LWIR camera (640×480 original resolution, 7.5-13 μm spectral range) with an integrated visible-band digital camera (2048×1536 original resolution). The NIR images were taken by a FLIR SC6000 camera (640×512 original resolution, 0.9-1.7 μm spectral range). The two cameras (SC620 and SC6000) were mounted on the same fixture in turn and aimed in the same direction. The images were captured at sunset and dusk in the fall.

Image registration as described in Section 2.2 was applied to the three-band images shown in Figs. 4-7; manual alignment was used for the RGB images shown in Figs. 6-7a because those visible images are very dark and noisy. To better present the RGB images, contrast and brightness adjustments (described in the figure captions) were applied. Note that piecewise contrast stretching (Eq. (1)) was used for NIR enhancement.

The images fused with a DWT algorithm are shown in Figs. 4-7d. Two-band channel-based color fusion (Eq. (10)) was applied to the NIR and LWIR images (shown in Figs. 4-7b, c), with the results illustrated in Figs. 4-7e, while the three-band color fusion (Eq. (13)) of (RGB ⊕ NIR ⊕ LWIR) is shown in Figs. 4-7f. Relative to gray fusion (Figs. 4-7d), the two-band color-fusion images (Figs. 4-7e) resemble natural colors, which makes scene classification much easier. In the color-fusion images, the trees and grasses can easily be distinguished from the ground (parking lots) and sky. The car and person are easily identified in Figs. 6-7e. In Fig. 6e, the water area (between the ground and trees, shown in cyan) is clearly noticeable, whereas it is hard to discern in the gray-fusion image (Fig. 6d). Three-band color fusion of (RGB ⊕ NIR ⊕ LWIR) shows some improvement in Figs. 4-5f when the lighting condition is good; for example, the tree, sky, and ground in Figs. 4-5f are rendered in more realistic colors than those in Figs. 4-5e. However, there is no significant difference between two-band and three-band color fusion in Figs. 6-7 because those RGB images were taken under poor lighting conditions.

The two-band channel-based color fusion of (RGB ⊕ LWIR) as defined in Eq. (11) is demonstrated in Fig. 8a-c, while the color fusion of (RGB ⊕ NIR) as defined in Eq. (12) is illustrated in Fig. 8d-f. No additional brightness or contrast adjustments were applied to these color-fusion images. In Fig. 8, the top-row images appear reddish, while the bottom-row images appear greenish. These color-fusion images (under poor illumination) are not very realistic but offer better representation and visibility than the original RGB images (Figs. 4-6a). No color fusions of (RGB ⊕ LWIR) or (RGB ⊕ NIR) using the images shown in Fig. 7 are presented here due to the poor quality of the RGB image (Fig. 7a).

The segmentation-based colorization demonstrated here took two-band multispectral images (II and LWIR) as inputs. This segmentation-based colorization procedure can in fact accept two or three input images (e.g., II, NIR, LWIR). If more than three bands of images are available (e.g., II, NIR, MWIR, LWIR), we may choose the low-light intensified (visual-band) image and two bands of IR images.
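Eqs. (10)-(12) themselves are not reproduced in this snippet. A citing paper (quoted below under Citations) describes the (RGB ⊕ LWIR) mapping as modifying the red channel of the input image with the corresponding LWIR pixel values; by symmetry with the greenish appearance of the (RGB ⊕ NIR) results in Fig. 8, the sketch below steers the green channel with NIR. The linear blend and its weight w are assumptions, not the published equations:

```python
import numpy as np

def fuse_rgb_lwir(rgb, lwir, w=0.5):
    """Channel-based color fusion (RGB ⊕ LWIR): modify the red channel
    with the corresponding LWIR pixel values. The blending weight w is
    hypothetical; Eq. (11) is not reproduced in this snippet."""
    fused = rgb.astype(np.float64).copy()
    fused[..., 0] = (1.0 - w) * fused[..., 0] + w * lwir  # red <- LWIR
    return np.clip(fused, 0, 255).astype(np.uint8)

def fuse_rgb_nir(rgb, nir, w=0.5):
    """(RGB ⊕ NIR): analogous, with NIR steering the green channel
    (assumed from the greenish bottom-row images in Fig. 8)."""
    fused = rgb.astype(np.float64).copy()
    fused[..., 1] = (1.0 - w) * fused[..., 1] + w * nir   # green <- NIR
    return np.clip(fused, 0, 255).astype(np.uint8)
```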
As for how to choose the two bands of IR images, we may use the image fusion algorithm as a screening process. The two IR images selected for colorization should be the pair that produces the most (maximum) informative fused image among all possible fusions. For example, given three IR images IR_1, IR_2, IR_3, the two chosen images I_C1, I_C2 should satisfy Fus(I_C1, I_C2) = max{Fus(IR_1, IR_2), Fus(IR_1, IR_3), Fus(IR_2, IR_3)}, where Fus stands for the fusion process and max means selecting the fusion of maximum ...
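A minimal sketch of this screening process. The source does not name the informativeness metric or reproduce the fusion algorithm, so Shannon entropy and a simple pixel average stand in for both here:

```python
import numpy as np
from itertools import combinations

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram; an assumed proxy for
    how 'informative' a fused image is (the source does not name the metric)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def select_band_pair(ir_images,
                     fuse=lambda a, b: 0.5 * (a.astype(float) + b.astype(float))):
    """Return indices (i, j) of the pair satisfying
    Fus(I_C1, I_C2) = max{Fus(IR_i, IR_j)} over all pairs.
    The averaging fuse() is a placeholder for the actual fusion algorithm."""
    return max(combinations(range(len(ir_images)), 2),
               key=lambda ij: entropy(fuse(ir_images[ij[0]], ir_images[ij[1]])))
```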

Citations

... Depending on the task of the observer, fused images should preferably use familiar representations (e.g., natural colors) to facilitate scene or target recognition, or should highlight details of interest to speed up the search (e.g., by using color to make targets stand out from the clutter in a scene). This consideration has led to the development of numerous fusion schemes that use color to achieve these goals [2-5]. ...
... We, therefore, introduced a method to give fused multiband nighttime imagery a realistic color appearance by transferring the first-order color statistics of color daylight images to the nighttime imagery [41]. This approach has recently received considerable attention [3,5,56-66], and has successfully been applied to colorize fused intensified visual and thermal imagery [5,57,58,60,61,64], FLIR imagery [67], SAR and FLIR imagery [38], remote sensing imagery [68], and polarization imagery [63]. However, color transfer methods based on global or semi-local (regional) image statistics typically do not achieve color constancy and are computationally expensive. ...
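First-order statistical color transfer of this kind is commonly implemented by matching per-channel means and standard deviations in a decorrelated color space such as lαβ. A minimal sketch, assuming both images have already been converted to such a space:

```python
import numpy as np

def transfer_color_statistics(source_lab, target_lab):
    """Shift and scale each channel of the nighttime (source) image so its
    mean and standard deviation match those of the daylight reference
    (target). Both inputs are assumed to be H x W x 3 arrays in a
    decorrelated space (e.g., l-alpha-beta)."""
    out = np.empty_like(source_lab, dtype=np.float64)
    for ch in range(3):
        s, t = source_lab[..., ch], target_lab[..., ch]
        # Normalize source channel to zero mean / unit std, then rescale
        # to the reference channel's first-order statistics.
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-12) * t.std() + t.mean()
    return out
```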
Article
Previously, we presented two color mapping methods for the application of daytime colors to fused nighttime (e.g., intensified and longwave infrared or thermal (LWIR)) imagery. These mappings not only impart a natural daylight color appearance to multiband nighttime images but also enhance their contrast and the visibility of otherwise obscured details. As a result, it has been shown that these colorizing methods lead to an increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness. A crucial step in the proposed coloring process is the choice of a suitable color mapping scheme. When both daytime color images and multiband sensor images of the same scene are available, the color mapping can be derived from matching image samples (i.e., by relating color values to sensor output signal intensities in a sample-based approach). When no exact matching reference images are available, the color transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image. In the current study, we investigated new color fusion schemes that combine the advantages of both methods (i.e., the efficiency and color constancy of the sample-based method with the ability of the statistical method to use the image of a different but somewhat similar scene as a reference image), using the correspondence between multiband sensor values and daytime colors (sample-based method) in a smooth transformation (statistical method). We designed and evaluated three new fusion schemes that focus on (i) a closer match with the daytime luminances; (ii) an improved saliency of hot targets; and (iii) an improved discriminability of materials. We performed both qualitative and quantitative analyses to assess the weak and strong points of all methods
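A minimal sketch of the sample-based idea described above: quantized two-band sensor values index a look-up table filled with daytime colors from a pixel-registered reference image. The quantization level and the mean-color fill per table cell are implementation assumptions, not the authors' exact scheme:

```python
import numpy as np

def _quantize(band, levels):
    # Map 8-bit sensor values to table indices 0..levels-1.
    return np.clip(band.astype(int) * levels // 256, 0, levels - 1)

def build_color_lut(band1, band2, ref_rgb, levels=32):
    """Relate quantized two-band sensor values to daytime colors from a
    registered reference image (sample-based approach)."""
    q1, q2 = _quantize(band1, levels), _quantize(band2, levels)
    lut = np.zeros((levels, levels, 3))
    count = np.zeros((levels, levels, 1))
    np.add.at(lut, (q1, q2), ref_rgb)   # accumulate reference colors
    np.add.at(count, (q1, q2), 1)       # count samples per cell
    return lut / np.maximum(count, 1)   # mean daytime color per cell

def apply_color_lut(band1, band2, lut):
    """Colorize a new two-band image with the precomputed table."""
    levels = lut.shape[0]
    return lut[_quantize(band1, levels), _quantize(band2, levels)].astype(np.uint8)
```

Because the table is fixed once built, the same material always maps to the same color (color constancy), and applying it is a single indexing operation, which is what makes the look-up-table form attractive for real-time use.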
... We therefore introduced a method to give fused multiband nighttime imagery a realistic color appearance by transferring the first-order color statistics of colour daylight images to the nighttime imagery. 61 This approach has recently received considerable attention 5-7,9,11,12,22,24,25,60,76-87 and has successfully been applied to colorize fused intensified visual and thermal imagery, 5-7,9,11,12,22,24,25,77,78,81,82,88,89 FLIR imagery, 83 SAR and FLIR imagery, 60 remote sensing imagery, 76 and polarization imagery. 80 However, color transfer methods based on global or semi-local (regional) image statistics typically do not achieve color constancy, and are computationally expensive. ...
... The increasing availability of multiband night vision systems has led to a growing interest in the color display of multispectral imagery. 6-10 The underlying assumption is that mapping the different spectral bands to a given color space can increase the dynamic range of a sensor system, 11 and can enhance the feature contrast and reduce the visual clutter, resulting in better human visual scene recognition, object detection, and depth perception. It has indeed been observed that appropriately designed false color rendering of nighttime multispectral imagery can significantly improve observer performance and reaction times in tasks that involve scene segmentation and classification. ...
Article
Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.
... Finally, they use decision-level fusion, which enhances features in the fused image while suppressing conflicts. Zheng [5] makes use of channel-based colour fusion, modifying the red channel of the input image with the corresponding pixel value from the LWIR image. Li et al. ...
Conference Paper
This paper presents a comparison of methods to fuse pre-registered colour visual and long-wave infrared images to create a new image containing both visual and thermal cues. Three methods of creating the artificially coloured fused images are presented. These three methods, along with the raw visual and LWIR imagery, are then evaluated using the Analytical Hierarchy Process for three different scenarios using a set of 32 observers. The scenarios entail bright, dim, and dark conditions, which directly affect the amount of visual information available. Both the standard method and a novel voting methodology are used to evaluate the results, the latter providing similar rankings but better discrimination between the voters' preferences. The results show that fused images are preferred for non-dark conditions, with the thermal-based hue offset algorithm being preferred.
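The Analytical Hierarchy Process derives preference weights from pairwise comparison matrices. A minimal sketch of the standard priority computation; the comparison matrix below is illustrative, not the study's observer data:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Standard AHP: priorities are the principal eigenvector of the
    reciprocal pairwise-comparison matrix, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Illustrative 3-alternative comparison (hypothetical values):
# A slightly preferred to B, strongly preferred to C.
M = [[1,   2,   5],
     [1/2, 1,   3],
     [1/5, 1/3, 1]]
print(ahp_priorities(M))  # roughly [0.58, 0.31, 0.11]
```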
... In this section, we designed a framework with the flowchart shown in Fig. 1. For non-color images (for example, IR images), we first convert them to virtual color images using the channel-based color fusion algorithms [25]. Then we apply linear smoothing filters to reduce the image noise. ...
... For an IR image (Fig. 6a), we apply the channel-based color fusion [25] with a similar RGB picture and obtain the virtual color image (Fig. 6b). The segmentation results of the PureInImDyn clustering game method without linear smoothing filters are shown in Fig. 6c and Fig. 6d. ...
Conference Paper
Image segmentation decomposes a given image into segments, i.e., regions containing "similar" pixels; this aids computer vision applications such as face, medical, and fingerprint recognition as well as scene characterization. Effective segmentation requires domain knowledge or strategies for object designation, as no universal segmentation algorithm exists. In this paper, we propose a holistic framework to perform image segmentation in color space. Our approach unifies a linear smoothing filter, a similarity calculation in a selected color space, and a clustering game model with various evolution dynamics. In our framework, the problem of image segmentation can be considered a "clustering game". Within this context, the notion of a cluster turns out to be equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external cluster conditions. Experiments on image segmentation problems, using both the Berkeley segmentation dataset and infrared images (for which we first need to perform color fusion), show the superiority of the proposed clustering game based image segmentation framework (CGBISF) in autonomy, speed, and efficiency.
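A minimal sketch of the clustering-game idea: discrete replicator dynamics evolve a distribution over pixels on a similarity matrix, and the support of the resulting equilibrium is taken as one cluster (internally coherent, externally separated). The stopping rule and support threshold are assumptions; CGBISF's specific evolution dynamics (e.g., PureInImDyn) may differ:

```python
import numpy as np

def extract_cluster(A, iters=200, tol=1e-8, support_thresh=1e-4):
    """Run discrete replicator dynamics on the (nonnegative, symmetric)
    pixel-similarity matrix A. The equilibrium distribution's support
    identifies the members of one cluster."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                 # start at the barycenter
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax + 1e-12)   # payoff-proportional update
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return np.where(x > support_thresh)[0]  # indices of cluster members
```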
... This approach has recently received considerable attention 45,48,49,60,62-66,97,113-120 and has successfully been applied to colorize fused intensified visual and thermal imagery, 45,48,49,60,62-66,114,115,118,119,121,122 forward-looking infrared (FLIR) imagery, 120 synthetic aperture and FLIR imagery, 97 remote sensing imagery, 113 and polarization imagery. 117 However, color transfer methods based on global or semilocal (regional) image statistics typically do not achieve color constancy and are computationally expensive. ...
... It has been suggested to use semilocal or region-based methods to alleviate this problem. 49,60,63-66 However, these approaches are only partly successful in achieving color constancy, and they typically require computationally expensive techniques like nonlinear diffusion, histogram matching, segmentation, and region merging, which diminishes their practical value. ...
Article
We present an overview of our recent progress and the current state-of-the-art in color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color mapping scheme that yields fused false color images with large color contrast and preserves the identity of the input signals. This method has been successfully deployed in different areas of research. However, since this color mapping does not produce realistic colors, we continued to develop a statistical color mapping procedure that transfers the color distribution of a given example image to a multiband nighttime image. This procedure yields a realistic color rendering. However, it is computationally expensive and achieves no color constancy, since the mapping depends on the relative amounts of the different materials in the scene. By applying the statistical mapping approach in a color look-up-table framework we finally achieved both color constancy and computational simplicity. This sample-based color transfer method is specific for different types of materials in the scene and is easily adapted for the intended operating theatre and the task at hand. The method can be implemented as a look-up table transform and is highly suitable for real-time implementations.