Figure 2.3: An example of a computer-generated image without depth of field


Source publication
Article
Full-text available
Investigations into the depth of field phenomenon and the currently existing real-time graphical simulations of it lead to the conclusion that current solutions do not provide a sufficiently high level of accuracy to convincingly portray the depth of field phenomenon. In particular, these techniques do not provide support for the see-through effect...

Context in source publication

Context 1
... If we are to obtain truly realistic images, all aspects of the eye need to be taken into account, even those which may be described as deficiencies of our natural optical system. Systems that choose to ignore depth of field deprive themselves of an excellent method for providing depth cues and directing the viewer's attention to areas of importance. For example, Hollywood films have utilised the depth of field phenomenon to great effect over the years, using it to draw the viewer's attention to important aspects of a scene. Done properly, this is not picked up on consciously, and it is a very powerful special effect in the Hollywood arsenal. A more accurate implementation of the depth of field phenomenon would not necessarily be of use throughout computer graphics, but it would provide much-needed realism for many different types of simulators and perhaps even games, where the quest for greater realism is constantly gaining momentum. Possible uses include military simulators where depth cues are vitally important, such as flight simulators. Over the long term this would potentially provide much more realistic simulations and hence better training conditions, lessening the jump between simulator and real life. That would bring numerous benefits, particularly from a military perspective, as complex training scenarios could be worked on from the safety of the base.

The depth of field phenomenon is something experienced, perhaps unknowingly, by everyone on a daily basis. When the eye focuses on a particular object, the objects around it are perceived as increasingly blurred the further they are from the object of focus. The same applies to other lens-based optical systems such as photography. The range over which the eye sees completely sharp objects is relatively small; in fact the eye is rather poor at distinguishing the detail of objects not directly on its focal plane. Most of what the eye sees is non-sharp, and a large amount of image enhancement is performed by the brain to produce the final image we perceive. For an example of how much work the brain does after the eye sends its images, consider the human eye's blind spots, which few people realise exist. Each eye has a blind spot where the optic nerve meets the back of the eyeball; the brain fills in these blind spots using image data from the surrounding area (for a practical example see Serendip, 2003). This emphasises the point that the eye can only see clearly directly on its focal plane, so all surrounding objects will appear blurred. Figure 2.1 shows an example of depth of field in an optical system.

As observed by Rokita [1996], depth of field is a direct result of the process of accommodation, which is one of many depth cues that influence the way humans perceive their surroundings. Figure 2.2 shows visually how the depth of field effect is created: a sharp image is formed where the rays of light focus directly onto the photoreceptor, which could be the film (in the case of photography) or the retina (in the case of the human eye). Where the rays of light focus in front of or behind the photoreceptors (i.e. the image is de-focused), blur circles are created. Blur circles are best explained by considering a single point of light, as in Figure 2.2.
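The size of a blur circle can be made quantitative with the standard thin-lens model: a point at distance d from a lens of focal length f and aperture diameter A, when the lens is focused at distance s, produces a blur circle of diameter A * f * |d - s| / (d * (s - f)) on the image plane. The short Python sketch below simply evaluates this relation; the lens and distance values are illustrative assumptions, not figures taken from the article.

# A minimal sketch of blur-circle size under the thin-lens model.
# All numeric values below are illustrative assumptions.

def blur_circle_diameter(object_dist, focus_dist, focal_length, aperture):
    """Diameter of the blur circle on the image plane (same units as the inputs).

    object_dist  - distance from the lens to the point light source
    focus_dist   - distance the lens is currently focused at
    focal_length - focal length of the thin lens
    aperture     - diameter of the lens opening
    """
    d, s, f, a = object_dist, focus_dist, focal_length, aperture
    # c = A * f * |d - s| / (d * (s - f)); zero when the point lies on the focal plane
    return a * f * abs(d - s) / (d * (s - f))

if __name__ == "__main__":
    # Example: a 50 mm lens with a 25 mm aperture focused at 2 m (all values in mm).
    for d in (500.0, 1000.0, 2000.0, 4000.0, 8000.0):
        c = blur_circle_diameter(d, 2000.0, 50.0, 25.0)
        print(f"point at {d / 1000:.1f} m -> blur circle {c:.3f} mm")

Running the example shows the blur circle shrinking to zero at the focal distance and growing on either side of it, which is exactly the behaviour described above.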
Taking Figure 2.2 as an example: if the rays from the light source are in focus, they converge on the view plane to create a sharp image; in the eye, the in-focus rays converge on the retina. If they are not in focus, the rays converge either in front of or behind the sharp image plane. In the case of the eye this means the de-focused light rays are spread over an area which depends on the source's distance from the focal plane; the spread light forms a blur circle, and the brain fills in any missing information. In the real world there is of course far more than a single point light source, so many thousands of blur circles appear in any one optical image.

This effect is exploited extensively in film and photography. For example, the recent film 'The Lord of the Rings' makes heavy use of depth of field, along with various other techniques, to draw the viewer's attention to the important aspects of a scene. The same effect could be used in computer games, but one of the main uses of the depth of field phenomenon in computer graphics would be to add realism to virtual reality systems such as flight training simulators.

The problem with most current computer-rendered images is that they are based on the pin-hole camera model, meaning that each image is (within practical limitations) potentially infinitely sharp, as shown by the image in Figure 2.3, taken from Quake 3. In Figure 2.3 it can reasonably be assumed that the point of focus is somewhere around the door at the end of the corridor, which would mean that the gun should be partially blurred. Some method is therefore needed to create more realistic computer-generated images by including a depth of field effect. There are currently a number of solutions to this problem, all of which have shortcomings.

Blurring from multiple viewpoints is by far the simplest approach and currently the most successful for real-time applications, providing moderate results and supporting the fabled see-through effect. The effect is achieved by creating the image from multiple discrete viewpoints, as shown in Figure 3.1; the image from each viewpoint is added to an accumulation buffer where the final image is built up. As can be seen in Figure 3.2, this method creates the unwanted artefact of clearly distinguishable multiple images when not enough samples have been used. The simple solution is to increase the number of samples; however, even a slight increase in the level of blurring drastically increases the number of samples needed to disguise this artefact. The number of samples required even for a small amount of blurring is potentially very large, particularly for more complex images, and rendering the same scene multiple times is time consuming, so a large number of samples leads to a big performance hit. Another problem is apparent when Figure 3.2 is compared with Figure 3.3: the effect created via multiple images is not actually blurring of the image and is perhaps better described as adding fuzziness to it. Blurring by ray tracing is a form of blurring by multiple viewpoints; the major difference is that ray tracing allows the sample points to vary pixel by pixel.
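As a rough illustration of the multiple-viewpoint idea, the Python sketch below approximates the accumulation-buffer technique in image space: each sample picks a random point on the lens aperture, shifts every depth layer by a parallax proportional to its distance from the focal plane, composites the layers, and adds the frame to an accumulation buffer. A real implementation re-renders the scene geometry from each jittered camera position; the toy layered scene, the parallax formula and all parameter values here are assumptions made purely for illustration.

import numpy as np

def accumulate_dof(layers, focus_depth, aperture_px, n_samples=32, seed=0):
    """Image-space approximation of accumulation-buffer depth of field.

    layers: list of (rgba, depth) tuples ordered back to front, where rgba is an
    (h, w, 4) float array.  Each sample shifts every layer by an offset that grows
    with its distance from the focal plane, composites the layers and accumulates.
    """
    rng = np.random.default_rng(seed)
    h, w = layers[0][0].shape[:2]
    accum = np.zeros((h, w, 3))
    for _ in range(n_samples):
        # Random sample point on the lens aperture (uniform over a disc).
        angle = rng.uniform(0.0, 2.0 * np.pi)
        radius = np.sqrt(rng.uniform()) * aperture_px
        offset = np.array([np.cos(angle), np.sin(angle)]) * radius
        frame = np.zeros((h, w, 3))
        for rgba, depth in layers:  # back-to-front 'over' compositing
            # Parallax is zero at the focal plane and grows away from it.
            shift = offset * (1.0 / depth - 1.0 / focus_depth) * focus_depth
            dy, dx = int(round(shift[0])), int(round(shift[1]))
            # np.roll wraps at the borders, which is acceptable for this toy scene.
            moved = np.roll(np.roll(rgba, dy, axis=0), dx, axis=1)
            alpha = moved[..., 3:4]
            frame = frame * (1.0 - alpha) + moved[..., :3] * alpha
        accum += frame
    return accum / n_samples

if __name__ == "__main__":
    h, w = 120, 160
    def quad(y0, y1, x0, x1, colour):  # opaque coloured rectangle on its own layer
        img = np.zeros((h, w, 4))
        img[y0:y1, x0:x1, :3] = colour
        img[y0:y1, x0:x1, 3] = 1.0
        return img
    backdrop = np.zeros((h, w, 4)); backdrop[..., 3] = 1.0
    layers = [(backdrop, 50.0),                                   # far background
              (quad(30, 90, 20, 60, (1.0, 0.2, 0.2)), 10.0),      # on the focal plane
              (quad(40, 100, 90, 130, (0.2, 0.4, 1.0)), 3.0)]     # near, heavily blurred
    image = accumulate_dof(layers, focus_depth=10.0, aperture_px=6.0)
    print(image.shape, float(image.min()), float(image.max()))

With few samples the near rectangle shows the same ghosted multiple-image artefact described for Figure 3.2; raising n_samples smooths it out at a proportionally higher cost, which is precisely the trade-off discussed above.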
The most realistic depth of field effects can be obtained via ray tracing techniques, where rays of light are traced from the viewpoint to the light source; the calculations involved make use of the actual physics of light. Figure 2.6 shows an example of a ray-traced image taken from Pixar's film Monsters Inc. Ray tracing does produce correct results; however, with current technology it is impossible to perform such complex calculations in real time, so ray tracing is only of use for pre-processed scenes such as the one shown in Figure 3.3.

The basic idea of the depth-based post-processing systems is to create a blurring effect dependent on depth (usually the Z value). Examples of this type of system include work by Snyder and Lengyel [1998], Rokita [1996] and Potmesil and Chakravarty [1981]. NVIDIA Corporation has implemented a variation of the Potmesil and Chakravarty method [NVIDIA, 2003b], shown in Figure 3.4. There are problems with the method employed by NVIDIA: for example, some of the objects within the scene appear to have an aura surrounding them because of the lack of support for the see-through effect. This is a common problem which plagues the Rokita method, the Potmesil and Chakravarty method and similar techniques. The artefact can be seen more clearly in ATI's depth of field demonstration [ATI, 2003], shown in Figure 3.5 with the focal plane on the chequered wall. This uses virtually the same method as the NVIDIA example, except that pixel shader version 2.0 is used instead of the version 1.1 used by NVIDIA. That has the disadvantage of limiting the audience to those with high-end graphics cards that support pixel shader 2.0, but it does mean that a much higher level of blurring is achieved thanks to the extra instructions and registers available.

Snyder and Lengyel [1998], however, get around the see-through problem via the use of a layering system: assuming that the objects in question truly are on separate layers, the see-through effect is supported. Their system does have two major drawbacks. Firstly, the choice of which objects go on each layer depends on hidden surface removal considerations rather than depth, so correct ordering for use with depth of field cannot be guaranteed. Secondly, using Snyder and Lengyel's system requires a non-standard method of hidden surface removal, which is a huge problem as it would require an entirely new type of graphics rendering system and is simply not practical for such a specialised requirement.

The first important aspect of the proposed approach is the inclusion of a layering system similar to that proposed by Snyder and Lengyel [1998]. However, for our purposes the layers are determined directly by depth and not by hidden surface removal considerations as in Snyder and Lengyel's system. This guarantees that two objects rendered on the same layer will have a similar level of blurring, which was not the case with Snyder and Lengyel's system. Each pixel with ...
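To make the depth-layered idea concrete, the Python sketch below slices a rendered image into depth layers, blurs each layer by an amount derived from its distance to the focal plane, and composites the layers back to front. Blurring each layer's coverage mask as well lets a de-focused foreground object bleed over a sharp one behind it, which is the see-through behaviour that single-pass per-pixel filters lack. This is only a sketch of the general layered post-processing approach under assumed parameters, not the authors' implementation or the NVIDIA/ATI shaders.

import numpy as np
from scipy.ndimage import gaussian_filter

def layered_depth_of_field(colour, depth, focus_depth, blur_scale, n_layers=8):
    """Post-process depth of field by slicing the frame into depth layers.

    colour: (h, w, 3) float image, depth: (h, w) per-pixel depth.  Each layer is
    blurred in proportion to its distance from the focal plane and composited
    back to front using its blurred coverage mask as alpha.
    """
    edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
    out = np.zeros_like(colour)
    out_alpha = np.zeros(depth.shape)
    for i in reversed(range(n_layers)):          # far layers first
        mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
        if mask.sum() == 0:
            continue
        mid = 0.5 * (edges[i] + edges[i + 1])
        # Blur radius grows with distance from the focal plane, zero on it.
        sigma = blur_scale * abs(1.0 / mid - 1.0 / focus_depth) * focus_depth
        layer = gaussian_filter(colour * mask[..., None], sigma=(sigma, sigma, 0))
        alpha = gaussian_filter(mask, sigma=sigma)
        out = out * (1.0 - alpha[..., None]) + layer      # premultiplied 'over'
        out_alpha = out_alpha * (1.0 - alpha) + alpha
    return out / np.maximum(out_alpha[..., None], 1e-6)   # back to straight colour

if __name__ == "__main__":
    h, w = 120, 160
    colour = np.full((h, w, 3), 0.1)
    depth = np.full((h, w), 40.0)                                        # distant backdrop
    colour[30:90, 20:60] = (1.0, 0.3, 0.3); depth[30:90, 20:60] = 10.0   # in focus
    colour[40:100, 90:130] = (0.3, 0.5, 1.0); depth[40:100, 90:130] = 3.0  # near, blurred
    result = layered_depth_of_field(colour, depth, focus_depth=10.0, blur_scale=3.0)
    print(result.shape, float(result.min()), float(result.max()))

Because the near layer's blurred mask spills over the in-focus layer during compositing, sharp geometry remains visible through the soft edge of the de-focused object, which is the effect the aura-style artefacts in the per-pixel methods fail to reproduce.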

Similar publications

Article
Full-text available
The paper presents a computer program developed for the determination of the basic dynamic characteristics of the process of machining, numerical simulations of the dynamic system of the process of machining, and graphic presentations of the numerical simulations performed. Results of simulations of the runs of time and frequency characteristics of...
Technical Report
Full-text available
Petri nets are a well-known graphical language for conceptual modelling. We propose a new model called Petri nets with discrete variables (PNDVs) that permits additional modelling convenience over classical Petri nets. We show that PNDVs are Turing complete and give a limited subset with the same expressive power as Petri nets. Moreover, we demons...
Conference Paper
Full-text available
In Mobile Ad-hoc Networks (MANETs), a Sybil attack can be launched in different dimensions. The current study proposes the formation of Sybil attacks using simulation in three different aspects. These models are designed to show various forms of Sybil attack in different application domains of MANETs and give a transparent view of each category. Comparisons...
Article
Full-text available
Current methods in non-photorealistic graphics can place a heavy emphasis on the algorithm, as opposed to the artist. In this paper, we analyse these trends, and present a conceptual framework for putting control back in the hands of the artist. Combining ideas from non-photorealistic graphics and artificial intelligence, we present new methods of...
Conference Paper
Full-text available
Architects must consider an entire year's worth of solar positions and climate data to design buildings with adequate daylight and minimal glare. However, annual simulations are time-consuming and computationally expensive, which makes them difficult to integrate into iterative design processes. In this paper, we compare the performance of several...

Citations

... The intensity value for the output pixel is calculated as a weighted average of all neighboring pixel intensities contained within the CoC. Post-process filtering suffers badly from the 'see through effect' [12], the ability to see a sharp edged object through a de-focused object. ...
Conference Paper
This paper details an alternative approach to rendering 3D digital environments using fixation-based disorder fields that maintain high object saliency and provide greater depth perception across the entire visual field. The research undertaken compares the visual effect of applying disorder to an animated sequence as a simple 2D image space post-effect process, with a new 3D world space approach. Much research has been undertaken to establish how we perceive relative space when viewing 2D images. Extracting accurate 3D depth cues from 2D images can be difficult to assimilate, especially when camera position/settings are unknown. In optics, a sense of depth is achieved through the focal length. Converging light appears sharp and 'in focus', whilst poorly converging light appears blurry and 'out of focus'. However, adding blur to an image has its drawbacks. More picture information is destroyed when the pixels of an image are blurred (averaged together), compared to when they are 'scrambled'. It has been argued that whilst both images lose information, the scrambled image contains more information than the blurred one. Initial results in 2D screenspace identify key areas where local disorder decreases depth perception and creates visual confusion when applied to animated sequences. This paper proposes an alternative approach to control the disorder, which overcomes these problems when applied to the moving image.
Conference Paper
An efficient method for rain simulation in 3D environments is proposed in this paper. By taking advantage of the parallelism and programmability of GPUs (Graphics Processing Units), real-time interaction can be achieved. Splashing of raindrops is simulated using collision detection, a series of stylized textures and rotations of point sprites. To simulate the wind-driven raining effect, the motion of particles can be freely controlled based on Newtonian dynamics. We can also control the size of raindrops dynamically by using different textures or changing the size of point sprites. To achieve lifelike rendering of raining scenes, effects such as lighting and DOF (depth of field) have been applied. Many experiments have been done in 3D scenes of varying geometric and particle-system complexity. The test results show that our method is efficient and feasible for solving the problem of real-time rain simulation in 3D scenes with complex geometry.
Conference Paper
We present an efficient method for simulating the raining phenomenon in real time by taking advantage of the parallelism and programmability of the GPU (graphics processing unit). Our implementation of the method is based on particle systems and collision detection. Splashing of raindrops is simulated using a series of stylized textures and rotations of point sprites. Taking into account human perception of the raining phenomenon in the real world, effects such as lighting, depth of field and motion blur have been applied. The test results show that our method is efficient and feasible for solving the problem of real-time 3D rain simulation in more general environments with complex geometry.