Examples of color filter array (CFA) patterns: (a) Bayer; (b) red–green–blue–white (RGBW); (c) red–white–blue (RWB); and (d) RWB with double-exposed W channel.  


Source publication
Article
Full-text available
In this paper, we propose a green (G)-channel restoration for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensit...

Contexts in source publication

Context 1
... insertion of the W filter renders the RGBW CFA more transparent than the Bayer CFA and improves image quality under poor illumination with minimal impact on color reproduction. However, the spatial resolution after color interpolation does not attain the level of the Bayer-pattern CFA, because the RGBW CFA sensor is composed of many color components, as shown in Figure 1b. To overcome the spatial resolution problem while maintaining high-sensitivity imaging, Komatsu et al. [8] proposed an alternative approach. ...
Context 2
... overcome the spatial resolution problem while maintaining high-sensitivity imaging, Komatsu et al. [8] proposed an alternative approach. They developed an RWB pattern consisting of a repeating two-by-two cell, with the two W pixels diagonally opposite one another and R and B in the remaining corners, as shown in Figure 1c. Because the G pixels of the RGBW pattern are simply replaced with W pixels, the conventional color interpolation method used for the Bayer CFA can be applied. ...
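The repeating RWB cell described here can be sketched as a small per-pixel channel map. The particular corner assignment of R and B below is an assumption; the text only fixes the diagonal placement of the two W pixels (the paper's Figure 1c gives the exact layout).

```python
def rwb_mask(h, w):
    # One repeating 2x2 cell: W on the main diagonal, R and B in the
    # remaining corners. The R/B corner assignment is an assumption;
    # only the diagonal W placement is stated in the text.
    cell = [["W", "R"],
            ["B", "W"]]
    return [[cell[i % 2][j % 2] for j in range(w)] for i in range(h)]

mask = rwb_mask(4, 4)
# Half of the pixels are W, a quarter each R and B.
```

Because the W sites occupy exactly the positions G occupies in the Bayer pattern, a Bayer-style interpolation scheme can be reused on this map with W standing in for G.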
Context 3
... when the shutter speed is set according to the W exposure, the amount of light reaching the R and B pixels is reduced compared to the corresponding value for the Bayer CFA image sensor. Hence, other problems arise, such as a low signal-to-noise ratio (SNR) for R and B. To solve this issue, Park et al. [10] proposed a new RWB CFA pattern (see Figure 1d) which obtains two W values at different exposure times. The R and B pixels are arranged in odd rows and the W pixels in even rows so as to implement a Complementary Metal Oxide Semiconductor (CMOS) image-sensor readout method, despite the resulting loss of spatial resolution in the horizontal direction. ...
Context 4
... obtained R and B values exhibit a high SNR, because they are captured with the optimal exposure time. The disadvantage of the RWB CFA is the degradation of the spatial resolution in the horizontal direction (see Figure 1d). To overcome this weakness, Song et al. [12] proposed a Color Interpolation (CI) method that reduces the loss of spatial resolution in the horizontal direction for RWB patterns. ...
Context 5
... set was tested under an incandescent lamp at 200 lx illumination and a 3000 K color temperature. Figure 10a shows an RWB image obtained under the incandescent lamp. The color channels were white-balanced without considering the color degradation caused by the lack of the G channel. ...
Context 6
... color channels were white-balanced without considering the color degradation caused by the lack of the G channel. Therefore, the overall colors of the image differ from those of Figure 10b. The comparison of Figure 10c,d shows that the overall colors of each color patch and object were similar to those of the RGB target image (Figure 10b). ...
Context 7
... the overall colors of the image differ from those of Figure 10b. The comparison of Figure 10c,d shows that the overall colors of each color patch and object were similar to those of the RGB target image (Figure 10b). However, the green color patches (enlarged in the yellow box) were different. ...
Context 9
... the green color patches (enlarged in the yellow box) were different. Figure 10d is much more similar to the target image in Figure 10b. As an error criterion, the angular error was calculated. ...
Context 11
... regarded the average of ∆E as the color correction error. Figure 12 shows the color distribution of all 96 color patches of the GretagMacbeth ColorChecker SG of each image of Figure 9. Each color patch value was calculated in the normalized YCbCr domain. ...
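The ∆E averaged over the patches is typically a Euclidean color difference; assuming the common CIE76 definition over Lab triplets (the excerpts do not restate which ∆E variant the paper uses):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab
    triplets. An assumption -- the paper averages Delta-E over the 96
    patches but its exact definition is not given in these excerpts."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
```

The color correction error would then be the mean of `delta_e76` over all 96 ColorChecker SG patches.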
Context 12
... for a gamut representation independent of the luminance component, the normalized YCbCr domain was used. As shown in Figure 12a, the color components are gathered along the red and blue directions. This implies a lack of G information in the RWB image. ...
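The normalized YCbCr mapping can be sketched with the BT.601 analog form, in which Cb and Cr are zero for achromatic colors, green falls toward quadrant 3 of the (Cb, Cr) plane, and magenta/purple toward quadrant 1; which exact YCbCr variant the paper used is an assumption.

```python
def rgb_to_ycbcr_norm(r, g, b):
    """Map normalized RGB in [0, 1] to (Y, Cb, Cr) using the BT.601
    analog form, so Cb and Cr lie in [-0.5, 0.5]. A sketch; the paper
    does not state which YCbCr variant it normalizes."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) / 1.772  # blue-difference chroma
    cr = (r - y) / 1.402  # red-difference chroma
    return y, cb, cr
```

Under this mapping, a patch set lacking green saturation clusters along the Cb and Cr axes (red/blue directions) rather than spreading into quadrants 1 and 3, matching the description of Figure 12a.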
Context 13
... implies that there is a lack of G information in the RWB image. When Figure 12b,c are compared, it is evident that the color distribution of the conventional color correction method is not efficient. The conventional color correction matrix shifts colors well along the red and blue directions. ...
Context 14
... in the direction of green (quadrants 1 and 3), however, the color components were not spread out. That is why the color of Figure 10c shows low saturation in the green channel. In contrast, the color distributions of Figure 12b,d are very similar. ...
Context 15
... is why the color of Figure 10c shows low saturation in the green channel. In contrast, the color distributions of Figure 12b,d are very similar. Owing to the successful restoration of G color information from the W channel by the proposed method, the color distribution along quadrants 1 and 3 is well spread out. ...
Context 16
... performance evaluation of the proposed and conventional methods for various colors in the color chart was conducted by comparing the results of these methods for the RGB target image shown in Figure 9b. The performance of the proposed method for each color patch is indicated by the red bar in Figure 13, whereas the blue bar represents the performance of the conventional method. The error bars at each point represent the confidence interval. ...
Context 17
... error bars at each point represent the confidence interval. Since the color of the RWB image was severely distorted, the conventional color correction method yields lower color fidelity than the proposed method, as is evident from Figure 13. In particular, the difference in θ_l and ∆E is noticeable in the green and purple color groups. ...
Context 18
... the difference in θ_l and ∆E is noticeable in the green and purple color groups. As mentioned above, the color distribution resulting from the conventional method was not well spread in the direction of green and purple (quadrants 1 and 3), as shown in Figure 12. This implies that the G channel has not been restored correctly. ...
Context 19
... a result, it is possible to implement HDR imaging using the RWB image sensor with the proposed method. Figure 14 shows a series of images in the process of creating an HDR image. The three images (Figure 14a,b and a luminance image of Figure 14c) were used to improve the sensitivity of the HDR luminance L_hdr at pixel position (i, j). ...
Context 20
... Figure 14 shows a series of images in the process of creating an HDR image. The three images (Figure 14a,b and a luminance image of Figure 14c) were used to improve the sensitivity of the HDR luminance L_hdr at pixel position (i, j). We have ...
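The fusion of the two W exposures into an HDR luminance can be sketched as a per-pixel long/short selection; this is a minimal scheme under an assumed saturation threshold and exposure ratio, not the paper's exact formula (which is elided above):

```python
def hdr_luminance(w_long, w_short, exposure_ratio, sat=0.95):
    """Per-pixel fusion of two W exposures into one HDR luminance
    value. Trust the long exposure unless it is near saturation, in
    which case use the short exposure scaled up by the exposure ratio.
    The threshold 0.95 and the hard selection rule are assumptions;
    practical fusions often blend the two smoothly instead."""
    if w_long < sat:
        return w_long
    return w_short * exposure_ratio
```

Applying this at every (i, j) yields an HDR luminance channel like Figure 14d, which is then recombined with the color-restored chrominance to form the final HDR image.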
Context 22
... detailed description of these quality measures is presented in [20]. Figure 14c shows the color-restored image that has been obtained by applying the proposed method. Figure 14d shows an HDR luminance channel, constructed using Figure 14a,b. ...
Context 23
... 14c shows the color-restored image obtained by applying the proposed method. Figure 14d shows an HDR luminance channel constructed using Figure 14a,b. From Figure 14c,d, an HDR image (Figure 14e) is generated, having rich color information and the advantage of high sensitivity gain. ...
Context 25
... 14d shows an HDR luminance channel constructed using Figure 14a,b. From Figure 14c,d, an HDR image (Figure 14e) is generated, having rich color information and the advantage of high sensitivity gain. Thus, using the proposed method, the usability of the RWB image sensor is further improved. ...
Context 27
... using the proposed method, the usability of the RWB image sensor is further improved. Figure 15 shows the comparison results for a test image captured using the RWB CFA pattern. The test image includes both a bright region and a dark region. ...
Context 28
... test image includes both a bright region and a dark region. The average brightness of Figure 15c was lower than that recorded in the other results, due to the shorter exposure time used to prevent the saturation of W. If W is saturated, false color information is restored due to the inaccurate estimation of the G-channel, as shown in Figure 15b. The SNR of each image in Figure 15 is compared in Table 3. ...
Context 30
... average brightness of Figure 15c was lower than that of the other results, due to the shorter exposure time used to prevent the saturation of W. If W is saturated, false color information is restored due to the inaccurate estimation of the G channel, as shown in Figure 15b. The SNR of each image in Figure 15 is compared in Table 3. According to the SNR values in Table 3, the CFA patterns with W pixels recorded larger values than the Bayer CFA pattern. ...
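An SNR comparison like that in Table 3 is typically measured on a flat patch as the mean signal level over the noise standard deviation, in decibels; a minimal sketch (the paper's exact measurement protocol is not stated in these excerpts):

```python
import math

def snr_db(signal_mean, noise_std):
    """SNR in decibels from a flat-patch measurement: mean signal
    level over the standard deviation of the noise. W pixels collect
    more light than filtered pixels, raising the mean and hence the
    SNR, which is why the W-pixel CFAs score higher in Table 3."""
    return 20.0 * math.log10(signal_mean / noise_std)
```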

Similar publications

Article
Full-text available
Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of t...

Citations

... Since ∆log I_W(x, y) is independent of the illuminant, it can be used as an IIM, as shown in Fig. 2(e). Based on the spectral correlation among the R, G, B, and W pixel intensities, the estimated white pixel intensity I_W(x, y) is calculated under the assumption [47] that there is a linear relationship between the R, G, B, and W pixel intensities. An offset term is also included to compensate for the spectral mismatch of the color filter array [48], as follows: ...
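The linear relationship with an offset term described here amounts to a least-squares fit of W against R, G, B plus a constant; a sketch with made-up synthetic coefficients (the cited works' actual fitting procedure may differ):

```python
import numpy as np

def fit_w_from_rgb(rgb, w):
    """Least-squares fit of W ~ a*R + b*G + c*B + d: the assumed
    linear relationship between R, G, B and W, with the offset d
    compensating for the spectral mismatch of the color filter array.
    rgb: (N, 3) array of sample intensities; w: (N,) measured W."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, w, rcond=None)
    return coef  # (a, b, c, d)

# Synthetic check with hypothetical coefficients (0.3, 0.5, 0.2, 0.1):
rgb = np.array([[0.1, 0.2, 0.3],
                [0.4, 0.1, 0.5],
                [0.2, 0.6, 0.1],
                [0.7, 0.3, 0.2],
                [0.3, 0.3, 0.3]])
w = rgb @ np.array([0.3, 0.5, 0.2]) + 0.1
coef = fit_w_from_rgb(rgb, w)
```

The fitted coefficients then give the estimated white intensity at any pixel from its R, G, B values, which is the quantity compared against the measured W to form the illuminant-invariant measure.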
Article
Full-text available
Many types of RGBW color filter array (CFA) have been proposed for various purposes. Most studies utilize white pixel intensity for improving the signal-to-noise ratio of the image and demosaicing the image, but we note that the white pixel intensity can also be utilized to improve color reproduction. In this paper, we propose a color reproduction pipeline for RGBW CFA sensors based on a fast, accurate, and hardware-friendly gray pixel detection using white pixel intensity. The proposed color reproduction pipeline was tested on a dataset captured from an OPA sensor which has RGBW CFA. Experimental results show that the proposed pipeline estimates the illumination more accurately and preserves the achromatic color better than conventional methods which do not use white pixel intensity.
... Recently, several other CFA patterns have been developed with the objective of enhancing sensitivity [19][20][21][22]. In addition, demosaicing methods for various CFA patterns have been presented recently [23,24]. The denoising filter is typically applied separately after demosaicing for the three color channels, as shown in Figure 1a. ...
... Moreover, most existing demosaicing methods for the Bayer pattern cannot be applied to these patterns. To overcome these problems, many methods including demosaicing for other CFA patterns have also been researched recently [23,24]. The proposed algorithm is designed for Bayer pattern in a CFA single image sensor. ...
Article
Full-text available
In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-tempo-spectral correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.
Chapter
According to the characteristics of the color filter array interpolation in a camera, an image splicing forgery detection algorithm based on bi-cubic interpolation and a Gaussian mixture model is proposed. The authors make the assumption that the image is acquired using a color filter array, and that tampering removes the artifacts due to a demosaicing algorithm. This article extracts image features based on the variance of the prediction error and creates an image feature likelihood map to detect and locate the tampered areas. The experimental results show that the proposed method can detect and locate the splicing tampering areas precisely. Compared with bi-linear interpolation, this method can reduce the prediction error and improve the detection accuracy.
Article
Full-text available
According to the characteristics of the color filter array interpolation in a camera, an image splicing forgery detection algorithm based on bi-cubic interpolation and a Gaussian mixture model is proposed. The authors make the assumption that the image is acquired using a color filter array, and that tampering removes the artifacts due to a demosaicing algorithm. This article extracts image features based on the variance of the prediction error and creates an image feature likelihood map to detect and locate the tampered areas. The experimental results show that the proposed method can detect and locate the splicing tampering areas precisely. Compared with bi-linear interpolation, this method can reduce the prediction error and improve the detection accuracy.
Article
This paper presents a CMOS image sensor with in-pixel aperture technique for single-chip 2D and 3D imaging. In conventional image sensors, the aperture is located at the camera lens. However, in the proposed image sensor, the aperture is integrated on the CMOS image sensor chip and is formed at a metal layer of the CIS process. A pixel array of the image sensor is composed of the W, R, B, and PA pixels (W pixel with integrated metal aperture) for extracting color and depth information. While the image of the W pixel becomes blurred with increasing distance from a focused object, the image of the PA pixel maintains the sharpness. Therefore, the depth image can be obtained using the depth from the defocus method. The size of the pixel, which is based on a four-transistor active pixel sensor with pinned photodiode, is 2.8 μm × 2.8 μm. A prototype of the proposed image sensor was fabricated using the 0.11-μm CIS process, and its performance was evaluated.