Figure 9 - uploaded by Mehlika N Inanici
a) Close-up view of point spreading; the square in (b) shows the pixel that corresponds to the actual light source. c) PSF as quantified from the HDR image; d) PSF as a function of eccentricity (distance from the optical centre) in the camera field of view, from 0° to 90° in 15° intervals

Source publication
Article
Full-text available
In this paper, the potential, limitations and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple exposure photographs of static scenes were taken with a commercially available digital camera to capture the wide luminance variation within the scenes. The camera response function wa...

Contexts in source publication

Context 1
... is recommended that smaller apertures are used so as to reduce vignetting effects while capturing the calibration sequence which determines the camera response curves. 7 It would have been very convenient to interpret each pixel in the photograph as a luminance measurement, but unfortunately, in most optical systems some portion of the pixel signal comes from surrounding areas. Light entering the camera is spread out and scattered by the optical structure of the lens, which is referred to as the point spread function (PSF). 14 A point light source is used to quantify the PSF. The source is located far enough from the camera to ensure that theoretically it covers less than one pixel area. Without any point spread, the image of the light source should fit onto one pixel. The point spreading is illustrated in Figure 9 (a) and (b) as close-up views. The aperture size, exposure time and eccentricity (distance from the optical centre) affect PSF. The aperture size was kept ...
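As a rough illustration of the measurement just described, the sketch below (Python) cuts a small window around the pixel that should contain the entire source and normalizes it to the centre pixel; with no point spread the result would be 1 at the centre and 0 everywhere else. The window size and function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def psf_window(luminance, src_row, src_col, half=5):
    """Normalized point-spread window around a point source.

    luminance        : 2-D array of per-pixel luminance from the HDR image
    src_row, src_col : pixel corresponding to the actual point source
    half             : half-size of the analysis window in pixels (assumed)
    """
    win = luminance[src_row - half : src_row + half + 1,
                    src_col - half : src_col + half + 1]
    return win / win[half, half]

def spread_fraction(win):
    """Fraction of window energy landing outside the true source pixel."""
    half = win.shape[0] // 2
    return (win.sum() - win[half, half]) / win.sum()
```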
Context 2
... the capturing processes; therefore it was not a parameter affecting the luminances. The effects of exposure time and eccentricity were recorded and plotted in RGB channels. The PSF of the HDR image is shown in Figure 9(c). The effect of eccentricity was studied at 15° intervals and is illustrated in Figure 9(d). The spread affected a limited number of neighbouring pixels. For most architectural lighting applications, this is a small effect. However, the general scattering of light is such that the cumulative effect of a bright background surrounding a dark target can lead to larger measurement errors. This is evident in the measurement results for darker targets shown in Section 3.1. Unfortunately, it is not feasible to quantify the general scattering. Consequently, point spread is one of the accumulative factors in the error margin, which was quantified as less than 10% on average in Section 3.1.

The HDR images used in this paper were generated with the camera response functions shown in Figure 1. The luminances were calculated with the transformation functions given in Section 2.3. The calibration feature of Photosphere was used for fine-tuning the luminances with a grey target in each scene. The vignetting effect was corrected by utilizing the function shown in Figure 8. The luminances extracted from the HDR images indicate reasonable accuracy when compared with physical measurements. Figure 10 presents a histogram of error percentages for 485 coloured and greyscale target points under a wide range of electric and daylighting conditions. The minimum and maximum measured target luminances are 0.5 cd/m² and 12870 cd/m², respectively. The average error percentages for all, greyscale, and coloured targets were 7.3%, 5.8%, and 9.3%, respectively. The square of the correlation (r²) between the measured and captured luminances was 98.8%.

There is an increased error for the darker greyscale targets. As mentioned in Section 3.3, the general scattering in the lens and sensor affects the darker regions of the images disproportionately. The consequence of this scattering is an over-estimation of the luminances of the darker regions. Since the luminance is quite low for these targets, small differences between the measured and captured values yield higher error percentages. The results also revealed increased errors for the saturated colours (Figure 10b). In Photosphere, a separate polynomial function is derived to fit each RGB channel. However, the RGB values produced by the camera are mixed between the different red, green, and blue sensors of the CCD. The CCD sensors have coloured filters that pass red, green, or blue light. With the large sensor resolution, it was assumed that enough green (red, blue)-filtered pixels receive enough green (red, blue) light to ensure that the image would yield reasonable results. The sensor arrays are usually arranged in a Bayer (mosaic) pattern such that 50% of the pixels have green and 25% of the pixels have red and blue filters. When the image is saved to a file, algorithms built into the camera interpolate between the neighbouring pixels. 15 The HDR algorithm assumes that the computed response functions preserve the chromaticity of the corresponding scene points. 5 To keep the RGB transformations constant within the camera, the white balance setting was kept constant throughout the capturing process. 
The camera response functions were generated with these constraints. Likewise, the luminance calculations were approximated based on sRGB reference primaries, with the assumption that sRGB provides a reasonable approximation to the camera sensor ...
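Taken together, the luminance computation and error accounting described in this excerpt reduce to a few lines. The sketch below assumes linear (not gamma-encoded) RGB pixels and uses the sRGB luminance weights quoted elsewhere on this page, L = K(0.2127R + 0.7151G + 0.0722B), with K derived from a grey-target spot measurement; the function names are illustrative, not Photosphere's implementation:

```python
import numpy as np

# sRGB luminous-efficiency weights from the cited equation
RGB_WEIGHTS = np.array([0.2127, 0.7151, 0.0722])

def luminance_map(rgb, k=1.0):
    """Per-pixel luminance (cd/m^2) from a linear HDR RGB image (H, W, 3)."""
    return k * (rgb @ RGB_WEIGHTS)

def calibration_factor(measured_cdm2, target_rgb):
    """K such that the captured grey target matches the luminance meter."""
    return measured_cdm2 / (target_rgb @ RGB_WEIGHTS)

def error_percent(measured, captured):
    """Error percentage, as reported against physical measurements."""
    return 100.0 * abs(captured - measured) / measured
```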

Similar publications

Article
Full-text available
The emerging field of high dynamic range (HDR) imaging is directly linked to diverse existing disciplines such as radiometry, photometry, colorimetry and colour appearance, each dealing with specific aspects of light and its perception by humans. Although the idea is not new, its wider usage started just a few years ago with the rising popularit...
Book
Full-text available
The distributions of tropical and subtropical seagrass beds have not been properly monitored in many countries, owing to the limitations of observation methodologies. Conventional aerial photography fails to detect small-sized species and deep-water seagrass beds. We have developed a new mapping system that uses a remotely operated vehicle (ROV) eq...
Conference Paper
Full-text available
If a photograph is reproduced "faithfully", i.e. preserving the relative colorimetric values of the original scene, the resulting image will often look less colorful and less contrasted than the original scene due to some mechanisms of the human visual system. Film and digital cameras must compensate for these effects in order to obtain visually pleasi...
Conference Paper
Full-text available
While real scenes produce a wide range of brightness variations, current cameras use low dynamic range image detectors that typically provide 256 levels of brightness data at each pixel. We propose methods to create High Dynamic Range images; the method to enhance the dynamic range is based on capturing multiple exposure photographs of the scene...

Citations

... As shown in Equation (1), the luminance value of each pixel is based on CIE XYZ values, according to the standard color space (sRGB) [52] and CIE Standard Illuminant D65. According to the study by Inanici [29], indoor scene luminance (L_i) (cd/m²) is expressed as: ...
... After linear rescaling, the outdoor HDR photographs are processed through the following steps: (1) vignetting correction that compensates for the light loss in the periphery area caused by the fisheye lens [29], (2) color correction for chromatic changes introduced by ND filter [53], and (3) geometric transformation from equi-distant to hemispherical fisheye image for environment mapping [28]. ...
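Step (1) above, vignetting correction, is typically implemented by dividing the image by a radial sensitivity model fitted during calibration. A minimal sketch assuming a polynomial fall-off in normalized eccentricity; the coefficients below are placeholders, not the published calibration curve:

```python
import numpy as np

def devignette(img, coeffs=(1.0, 0.0, -0.3)):
    """Divide out a radial vignetting model V(r) = c0 + c1*r + c2*r^2.

    img    : (H, W) or (H, W, 3) linear HDR image
    coeffs : (c0, c1, c2) relative sensitivity vs. normalized eccentricity
             r in [0, 1] (placeholder values, not a measured curve)
    """
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalize eccentricity by the corner distance; for a fisheye image
    # the image-circle radius would be the natural choice instead.
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)
    gain = np.polyval(coeffs[::-1], r)  # polyval wants highest power first
    if img.ndim == 3:
        gain = gain[..., None]
    return img / gain
```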
Preprint
Full-text available
Virtual staging techniques can digitally showcase a variety of real-world scenes. However, relighting indoor scenes from a single image is challenging due to unknown scene geometry, material properties, and outdoor spatially-varying lighting. In this study, we use the High Dynamic Range (HDR) technique to capture an indoor panorama and its paired outdoor hemispherical photograph, and we develop a novel inverse rendering approach for scene relighting and editing. Our method consists of four key components: (1) panoramic furniture detection and removal, (2) automatic floor layout design, (3) global rendering with scene geometry, new furniture objects, and the real-time outdoor photograph, and (4) virtual staging with new camera position, outdoor illumination, scene texture, and electric lighting. The results demonstrate that a single indoor panorama can be used to generate high-quality virtual scenes under new environmental conditions. Additionally, we contribute a new calibrated HDR (Cali-HDR) dataset that consists of 137 paired indoor and outdoor photographs. The animation for virtual rendered scenes is available here.
... Nevertheless, the CMOS generates photos with a restricted illumination range, resulting in low dynamic range (LDR) images that may not accurately measure glare compared to the human visual system. To address this shortcoming, researchers [15][16][17] have utilized multi-exposure photos to create a luminance map, which is referred to as a high dynamic range (HDR) image. However, capturing and merging multiple exposures can considerably prolong the calculation time, obstructing the extensive implementation of real-time glare detection and control in daylight environments. ...
Article
Glare is a significant concern when implementing passive daylighting strategies because it often impedes their effectiveness. The high dynamic range (HDR) method, which is employed to generate luminance maps for glare detection, aids in the real-time control of shading devices for glare mitigation. However, the HDR method entails considerable processing time and complexity, which limits its practical utility. Consequently, we propose a two-step network that combines generative adversarial networks (GANs) and reconstruction-net to transform a single low dynamic range (LDR) image into a comprehensive fisheye luminance map. This methodology offers a pragmatic approach that significantly streamlines the luminance map generation workflow and reduces the time needed for image acquisition. Additionally, we introduce a GAN generator model to recover missing pixel values in underexposed and overexposed LDR images. To evaluate our proposed network, we assembled a fisheye luminance map dataset that comprises 884 scenes, each containing 15 exposure values ranging from −7 to +7. We compared our method with existing models by using the test dataset and obtained a peak signal-to-noise ratio (PSNR) and R² of daylight glare probability (DGP) of 59.24 and 0.9054, respectively. These results demonstrate that our proposed network represents a state-of-the-art (SOTA) model in luminance map restoration. Furthermore, our model processes each input image in approximately 0.1 s on an RTX 2060 GPU and 7.5 s on an i5-9400F CPU, with an additional 2 s needed by Radiance to calculate the glare metric on an i5-9400F CPU. Our model demonstrates a remarkable 12-fold decrease in processing time compared to traditional methods when run on a CPU. This efficiency dramatically increases, up to a 95-fold reduction, when a GPU is utilized for reconstruction. Our code and dataset are available at: https://github.com/ShikangWen2000/SingleLM-Net.git.
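For reference, the PSNR figure quoted above is computed from the mean squared error against a peak signal value. A minimal Python sketch (the choice of peak is dataset-dependent and assumed here):

```python
import numpy as np

def psnr(reference, reconstructed, peak=None):
    """Peak signal-to-noise ratio in dB between two luminance maps."""
    if peak is None:
        peak = reference.max()  # dataset-dependent choice (assumption)
    mse = np.mean((reference - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```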
... Camera-based control and its applications in lighting control systems are currently popular (Liu et al., 2016). Inanici (2006) and Sarkar et al. (2008) used high dynamic range photography as a luminance mapping tool. A luminance-based lighting and shading control system is described by Newsham and Arsenault (2009). ...
... When it comes to assessing glare metrics and the spatial distribution of light levels, the luminance measurement system outperforms illuminance sensors (Tyukhova, 2014). Sky luminance distribution is derived from sky images for daylight simulations (Humann and McNeil, 2017; Inanici, 2006; Spasojevic and Mahdavi, 2007). To control blinds and lighting, photosensors, luminance meters, pyranometers, and occupancy sensors can all be replaced with digital cameras (Jain and Garg, 2018). ...
Article
Full-text available
Lighting designers are always on the quest to develop a lighting control strategy that is aesthetically pleasing, comfortable, and energy-efficient. In an indoor context, electric lighting blended with daylighting controls forms a quintessential component for improving the occupant’s comfort and energy efficiency. Application of soft computing techniques, adaptive predictive control theory, machine learning, HDR photography, and wireless networking have facilitated recent advances in intelligent building automation systems. The evolution and revolution from the 19th to the 21st century in developing daylighting control schemes and their outcomes are investigated. This review summarizes the state-of-the-art artificial intelligence techniques in daylighting controllers to optimize the performance of conventional photosensor-based control and camera-based control in commercial buildings. The past, current, and future trends are investigated and analyzed to determine the key factors influencing the controller design. This article intends to serve as a comprehensive literature review that would aid in creating promising new concepts in daylighting controllers.
... The measured luminance value and the displayed luminance value from the original HDR image are used to calculate the calibration factor (k_1). According to the study by Inanici [21], given the R, G, and B values in the captured indoor HDR image, indoor scene luminance (L_i) is expressed as: As shown in Fig. 3, we positioned two cameras in an enclosed room under consistent electrical lighting. Following the camera settings of indoor and outdoor HDR photography (Sec. ...
... After linear rescaling, the outdoor HDR photographs are processed through the following steps: (1) vignetting correction that compensates for the light loss in the periphery area caused by the fisheye lens [21], (2) color correction for chromatic changes introduced by ND filter [35], and (3) geometric transformation from equi-distant to hemispherical fisheye image for environment mapping [20]. ...
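Step (3) above can be sketched as an inverse mapping: each output pixel of the hemispherical (orthographic, r ∝ sin θ) projection looks up the equidistant (r ∝ θ) source at normalized radius θ/(π/2). A nearest-neighbour sketch under those projection assumptions, not the cited implementation:

```python
import numpy as np

def equidistant_to_hemispherical(src):
    """Remap a 180-degree equidistant fisheye image to an orthographic
    (hemispherical) projection for environment mapping.

    src : (N, N, 3) square image whose fisheye circle fills the frame.
    """
    n = src.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    r_out = np.hypot(y - c, x - c) / c           # output radius, 0..1
    valid = r_out <= 1.0
    theta = np.arcsin(np.clip(r_out, 0.0, 1.0))  # orthographic: r = sin(theta)
    r_in = theta / (np.pi / 2.0)                 # equidistant: r = theta/(pi/2)
    scale = np.divide(r_in, r_out, out=np.zeros_like(r_in), where=r_out > 0)
    ys = np.clip((c + (y - c) * scale).astype(int), 0, n - 1)
    xs = np.clip((c + (x - c) * scale).astype(int), 0, n - 1)
    out = np.zeros_like(src)
    out[valid] = src[ys[valid], xs[valid]]
    return out
```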
Conference Paper
Full-text available
We propose a novel inverse rendering method that enables the transformation of existing indoor panoramas with new indoor furniture layouts under natural illumination. To achieve this, we captured indoor HDR panoramas along with real-time outdoor hemispherical HDR photographs. Indoor and outdoor HDR images were linearly calibrated with measured absolute luminance values for accurate scene relighting. Our method consists of three key components: (1) panoramic furniture detection and removal, (2) automatic floor layout design, and (3) global rendering incorporating scene geometry, new furniture objects, and real-time outdoor photograph. We demonstrate the effectiveness of our workflow in rendering indoor scenes under different outdoor illumination conditions. Additionally, we contribute a new calibrated HDR (Cali-HDR) dataset that consists of 137 calibrated indoor panoramas and their associated outdoor photographs.
... A fundamental step in obtaining luminance maps from photographic images is the conversion of each pixel's RGB information to a luminance value. Inanici [1] uses the equation L = K(0.2127R + 0.7151G + 0.0722B). The factor K is a constant derived from a spot measurement with a luminance meter and is contrasted with the value calculated from the image. ...
Article
Full-text available
Commercial cameras are currently being used to generate luminance maps. There are two methods of obtaining these measurements: absolute calibration and self-calibration. The self-calibration process is currently the most widely used because it does not require complicated or expensive equipment intervention. One of the steps in the self-calibration methodology is related to the original RGB channels of the camera (RAWRGB) and their conversion to standard RGB color spaces (sRGB, Adobe, among others), through conversion matrices that are normally established as raw camera file (RAW) information. Finally, the standard RGB color space that is used has another transformation matrix that translates the image to the XYZ color space. It is in this space where the luminance is represented by the Y channel, scaled by the K factor. In the self-calibration process, the standard conversion matrices are taken and it is assumed that they are perfect, neglecting the possible error generated. However, since there are several intermediate color spaces and since specialized laboratory instruments are not used, it is not possible to determine which of the spaces would be the optimal or best performing. To do this, it is possible to apply a verification methodology that consists of carrying out the self-calibration system of a particular scene, making the conversion to the different color spaces and comparing its result with a considerable sample of points and colors to verify minor error. The proposal of the present work is to evaluate the error of the conversion of the different color spaces (RAW, sRGB or Adobe) of a ColorChecker® color pattern card, contrast it with the measurement of each patch made with a spot luminance meter and obtain the results errors made by each conversion.
... Since then, many investigators have devoted themselves to high dynamic range (HDR) image generation, and several automatic merging algorithms, such as hdrgen from Radiance (Ward, 1998) or pfscalibration from pfstools (Mantiuk et al., 2007), have been developed. Several investigations (Cai & Chung, 2011; Inanici, 2006; Stanley, 2016) have reported that for scenes where the maximal luminance value is lower than 30,000 cd/m², most luminance values from the calibrated luminance map are expected to be within a 10% error range, although higher errors also occur. ...
Conference Paper
Full-text available
Daylight glare is a significant concern in daylight research, and High Dynamic Range (HDR) images are commonly used to calculate glare metrics in current studies. However, generating a reliable HDR image can be error-prone and time-consuming. This study investigates the use of daylight simulation to calculate glare metrics by comparing values obtained from simulated and HDR-derived calculations. The results indicate that the simulated Daylight Glare Probability (DGP) has an MBE of 0.004 and an RMSE of 6.9%. The simulated Vertical Eye Illuminance (Ev) shows an MBE of −130.9 lx and an NRMSE of 8.5%, whereas the simulated DGI has an MBE of −0.92 and an NRMSE of 13.2%. The coefficients of determination of the regression between measured and simulated values are high for DGP and Ev but low for the Daylight Glare Index (DGI). Thus, the comparison shows high accuracy and reliability for simulated DGP and Ev, but low accuracy and reliability for simulated DGI values when assessing glare ratings. It should be noted, however, that DGI and other glare metrics based only on contrast effects generally perform poorly when employed to evaluate glare. Despite this limitation, daylight simulation remains a viable and cost-effective alternative for calculating glare metrics, and it is relatively easy to implement.
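For reference, the error metrics quoted above are straightforward to compute. A minimal sketch, assuming numpy arrays of paired measured and simulated values; note that NRMSE normalization conventions vary, and normalizing by the measured range is an assumption here:

```python
import numpy as np

def mbe(measured, simulated):
    """Mean bias error: positive when the simulation over-predicts."""
    return np.mean(simulated - measured)

def rmse(measured, simulated):
    """Root mean square error."""
    return np.sqrt(np.mean((simulated - measured) ** 2))

def nrmse(measured, simulated):
    """RMSE normalized by the measured range (one common convention)."""
    return rmse(measured, simulated) / (measured.max() - measured.min())
```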
... The images were processed using MATLAB (MathWorks, Natick, MA, USA) software, R2014a version [35]. Inanici [10] explored the realm of High Dynamic Range (HDR) imaging using a Nikon Coolpix 5400 camera with a fisheye lens. The camera captured images in RadianceRGBE and LogLuv TIFF formats, employing diverse external light sources, including daylight, incandescent lamp, projector, and fluorescent, metal halide, and high-pressure sodium lamps. ...
... Photometry by photographic means is by no means a novelty, with film-based images and charge-coupled device (CCD) products frequently employed for luminance evaluations [10]. As digital imaging technology has evolved, its application has spanned diverse scientific fields, an upward trend evident in Figure 3. Dyer et al. [39] evaluated the impact of uneven illumination on reflected images, noticing that the upper section of the image was darker than the lower section. ...
... To mitigate such irregularities in illumination, researchers can enhance experimental procedures and apply post-processing methods. Keeping a consistent aperture size during image capture ensures that luminance values remain comparable under different conditions [10]. Mathai et al. [42] demonstrated the inspection of 3D transparent objects with a system employing two light sensors without moving the object. ...
Article
Full-text available
The growing demand for sustainable and energy-efficient buildings has highlighted the need for reliable and accurate methods to detect fenestration deterioration and assess UV radiation transmission. Traditional detection techniques, such as spectrophotometers and radiometers, discussed in Part I, are often expensive and invasive, necessitating more accessible and cost-effective solutions. This study, which is Part II, provides an in-depth exploration of the concepts and methodologies underlying UV bandpass-filtered imaging, advanced image processing techniques, and the mechanisms of pixel transformation equations. The aim is to lay the groundwork for a unified approach to detecting ultraviolet (UV) radiation transmission in fenestration glazing. By exploiting the capabilities of digital imaging devices, including widely accessible smartphones, and integrating them with robust segmentation techniques and mathematical transformations, this research paves the way for an innovative and potentially democratized approach to UV detection in fenestration glazing. However, further research is required to optimize and tailor the detection methods and approaches using digital imaging, UV photography, image processing, and computer vision for specific applications in the fenestration industry and detecting UV transmission. The complex interplay of various physical phenomena related to UV radiation, digital imaging, and the unique characteristics of fenestration glazing necessitates the development of a cohesive framework that synergizes these techniques while addressing these intricacies. While extensively reviewing existing techniques, this paper highlights these challenges and sets the direction for future research in the UV imaging domain.
... 41 Representative daylighting predictors were measured or extracted from high dynamic range (HDR) images. 42 Each participant's responses to a survey questionnaire were associated with the daylighting condition that he/she experienced and evaluated. Then, a data-matching method, a propensity score matching (PSM) analysis, 43,44 was applied to daylighting predictors to ensure quantitatively similar daylighting conditions before comparing subjective responses. ...
Article
This study compares subjective evaluations of daylighting environments from two universities: the Singapore University of Technology and Design (SUTD) in Singapore and Southeast University (SEU) in Nanjing, China. Two hundred and twenty-nine students evaluated their instantaneous daylighting environments. Four representative daylighting predictors, horizontal illuminance, vertical illuminance, mean luminance of an entire scene and CIE Glare Index (CGI), were matched between the two universities using a propensity score matching method. Eighty-eight participants, 44 from each university, were matched in terms of these four daylighting predictors. The results demonstrate that there are statistically significant differences in subjective assessments between these two locations. Under quantitatively similar daylighting environments, more participants at SUTD reported adequate daylighting levels with a noticeable degree of daylight glare, as well as desires to decrease current daylighting levels. On the other hand, more participants at SEU reported inadequate daylighting levels with an imperceptible degree of daylight glare, as well as desires to increase current daylighting levels. One reason for subjective assessment differences might be dissimilar socio-environmental contexts, where the participants are acclimatized to different daylighting environments between Singapore and Nanjing.
... Since the development of digital cameras and their utilization as luminance mapping tools using the High Dynamic Range Imaging (HDRI) technique [1], HDRI has been applied in lighting environment studies such as analyzing luminance, illuminance, and glare for visual comfort, generating illuminance maps, and controlling room lighting [2,3]. To generate luminance maps from HDRI, several methods and software tools have been developed, such as Radiance, Photosphere, hdrgen, hdrscope, and the Debevec algorithm implemented in MATLAB or Python [4][5][6]. ...
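The multi-exposure merge these tools perform can be sketched as a weighted average of linearized exposures divided by their exposure times. The following is a linear-domain variant of the Debevec and Malik approach (the original averages in the log domain), with the common hat weighting function; it assumes the exposures have already been linearized by the camera response function:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Weighted-average HDR merge of linearized LDR exposures.

    images         : list of (H, W) arrays with values in [0, 1],
                     already linearized by the camera response function
    exposure_times : matching exposure times in seconds
    Returns a relative radiance map; the absolute scale is fixed later
    by calibration against a luminance meter.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: trust mid-tones
        num += w * img / t                 # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)
```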
Conference Paper
This study verifies the accuracy of luminance and illuminance from high dynamic range images (HDRI) generated by a commercial IP camera compared to measurement devices. The statistical analysis indicates that luminance measured from HDRI calibrated with a luminance meter is comparable with direct luminance-meter measurements, with an average relative error in the range of 5–23%, whereas illuminance measurement using the equation method is in line with measurement using an illuminance meter, with an average relative error in the range of 1–11%. Regarding the accuracy of the HDRI method, the relative error using digital cameras is in the range of 5–27%, which is still acceptable.
... The method based on HDR images can determine luminance values with an accuracy of about 10% [4]. It should be noted that this method has lower accuracy than measurement with calibrated luminance meters. ...
... The first problem is obtaining the luminance map needed to calculate the Q criterion. To solve it, the HDRi method proposed in [4] for scenes with daylighting was used. The method was adapted to interior lighting scenes so that the luminance measured with the camera was close to the actual luminance in the tested experimental scene. ...
Article
Full-text available
How can one assess what kind of feeling a lighting scene causes? On the one hand, the new level of computer graphics visualization makes it possible to reason about real scenes through synthetic images; on the other hand, the HDRi method now allows luminance to be measured with an accuracy of up to 10% over a wide range. Assessing the lighting quality of designed and existing lighting installations in terms of luminance therefore reaches a new level, and brings the creation of a psychophysical model of lighting quality assessment based on photometric quantities closer. The structure of such a model can be represented in two parts: the calculation equation and the psychophysical scale. This work considers the choice of categories for the psychophysical scale, a method for creating luminance maps for scenes with an arbitrary spatial-angular luminance distribution, and a method for evaluating scenes using a new gradient criterion Q based on these maps. An analysis of HDRi images of workplaces, based on the experimental setup and scenes in a cafe, showed that the Q criterion depends on the type of scene and on the number of light sources in the field of view. This suggests that it is impossible to create one single psychophysical scale for the Q criterion. However, the new criterion can be used to compare the same scene under different lighting options, or scenes that are similar in terms of spatial luminance distribution and glare. Although the luminance map values have an error of about 30%, using an HDRi image of a scene allows the Q criterion to be calculated with an accuracy of about 10%.