Schematic illustration.  

Source publication
Article
Depth-image-based rendering (DIBR) is commonly used for generating additional views for 3DTV and FTV from 3D video formats such as video plus depth (V+D) and multiview-video-plus-depth (MVD). The synthesized views suffer from artifacts, mainly disocclusions, when DIBR is used. Depth-based inpainting methods can solve these problems plausibl...
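
As a rough illustration of the DIBR warping step that produces these disocclusions, the Python sketch below forward-warps a reference view into a virtual view, assuming rectified cameras and a purely horizontal baseline; warp_view and its parameters are illustrative and not taken from the article.

```python
import numpy as np

def warp_view(texture, depth, baseline, focal_len):
    # Hypothetical DIBR forward warp for a purely horizontal camera shift
    # on rectified views: each pixel moves by a disparity proportional to
    # inverse depth, and a z-buffer keeps the nearest surface when several
    # pixels land on the same target position. Unmapped pixels remain
    # marked as disocclusion holes.
    h, w = depth.shape
    virtual = np.zeros_like(texture, dtype=float)
    hole = np.ones((h, w), dtype=bool)           # True until a pixel is written
    z_buffer = np.full((h, w), np.inf)
    disparity = np.round(baseline * focal_len / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]             # shift along the baseline
            if 0 <= xv < w and depth[y, x] < z_buffer[y, xv]:
                z_buffer[y, xv] = depth[y, x]    # z-test: nearer surface wins
                virtual[y, xv] = texture[y, x]
                hole[y, xv] = False
    return virtual, hole                         # hole marks the disocclusions
```

The returned hole mask is exactly the disocclusion region that the inpainting methods discussed below must fill.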

Context in source publication

Context 1
... non-parametric texture synthesis. The quality of the inpainted image depends strongly on the filling order. Consider an input image I with an empty region Ω, also known as the hole; the source region Φ (the remainder of the image) is defined as Φ = I − Ω. The boundary between Φ and Ω is denoted δΩ (see Fig. 2). The basic steps of Criminisi's algorithm are: (i) identify the boundary and compute the priorities along it; (ii) select the boundary patch with the maximum priority, find its best-matching source patch by patch matching, and fill the hole pixels; (iii) update the confidence values. Suppose a patch Ψ_p ...
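
The following Python sketch illustrates steps (i)–(iii) under simplifying assumptions: grayscale input, a hole that stays away from the image border, and a priority reduced to the confidence term C(p) (the data term D(p) of the full algorithm is omitted, so structure propagation is weaker). The function name is ours, and the brute-force patch search is for clarity, not efficiency.

```python
import numpy as np
from scipy import ndimage

def inpaint_criminisi(image, mask, psz=9):
    # Simplified sketch of Criminisi-style exemplar-based inpainting.
    # `image` is a grayscale float array; `mask` is True inside the hole Omega.
    img, hole = image.astype(float), mask.copy()
    conf = (~hole).astype(float)                 # C(p) = 1 in the source region Phi
    h = psz // 2
    H, W = img.shape
    while hole.any():
        # (i) boundary delta-Omega: hole pixels with at least one known neighbour
        ys, xs = np.nonzero(hole & ndimage.binary_dilation(~hole))
        pri = [conf[y - h:y + h + 1, x - h:x + h + 1].mean() for y, x in zip(ys, xs)]
        i = int(np.argmax(pri))
        y, x, cp = ys[i], xs[i], pri[i]
        # (ii) best-matching source patch by SSD over the known pixels
        tgt = img[y - h:y + h + 1, x - h:x + h + 1]
        known = ~hole[y - h:y + h + 1, x - h:x + h + 1]
        best, best_cost = None, np.inf
        for sy in range(h, H - h):
            for sx in range(h, W - h):
                if hole[sy - h:sy + h + 1, sx - h:sx + h + 1].any():
                    continue                     # source patches must lie fully in Phi
                src = img[sy - h:sy + h + 1, sx - h:sx + h + 1]
                cost = ((src - tgt)[known] ** 2).sum()
                if cost < best_cost:
                    best, best_cost = src, cost
        # fill the unknown pixels, then (iii) update confidence and shrink the hole
        tgt[~known] = best[~known]
        conf[y - h:y + h + 1, x - h:x + h + 1][~known] = cp
        hole[y - h:y + h + 1, x - h:x + h + 1] = False
    return img
```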

Similar publications

Conference Paper
3D geovirtual environments, such as virtual 3D city and landscape models, can be used as scenery for visualizing thematic data, which can be communicated using suitable color mappings or hatch patterns. For rendering purposes, these hatch patterns can be represented as image-based or procedural textures. The resulting quality of image-based textu...
Article
Depth-image-based rendering (DIBR) is a popular technology for 3D video and free viewpoint video (FVV) synthesis, by which numerous virtual views can be generated from a single reference view and its depth image. However, some artifacts are produced in the DIBR process and reduce the visual quality of the virtual view. Due to the diversity of artif...

Citations

... Although the exemplar method has the advantage of recovering structure, this priority order is not suitable for hole filling in the view-synthesis context, since holes should be filled only from the background texture. Several methods have been proposed that improve the filling priorities and add various foreground-background classification techniques to the exemplar methods, including those in [18][19][20][21][22][23]. Despite these improvements, the methods still exhibit spatial texture inconsistencies and, moreover, severely suffer from temporal inconsistencies (flickering). ...
... ‖p − q‖₂² is the sum of squared differences between the known pixels of two patches, and β is a weighting coefficient that equalizes the effect of the depth and texture. Similar to [22], holes in the depth map are filled by copying the depth values from the best patch location. Holes in the texture image are filled with a weighted average of the texture values of N patches. ...
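
A minimal sketch of such a combined matching cost and weighted-average fill, assuming the depth term enters as a β-weighted SSD and using an exponential cost-to-weight mapping that the excerpt does not specify:

```python
import numpy as np

def patch_cost(tex_p, tex_q, dep_p, dep_q, known, beta=0.5):
    # Combined matching cost: texture SSD plus beta-weighted depth SSD,
    # evaluated only over the known pixels of the target patch.
    # beta = 0.5 is an arbitrary placeholder, not the paper's value.
    tex_ssd = ((tex_p - tex_q)[known] ** 2).sum()
    dep_ssd = ((dep_p - dep_q)[known] ** 2).sum()
    return tex_ssd + beta * dep_ssd

def fill_weighted(patches, costs):
    # Texture fill as a cost-weighted average of the N best source patches;
    # the exponential weighting is an assumption, as the excerpt only states
    # that a weighted average of N patches is used.
    w = np.exp(-np.asarray(costs, dtype=float))  # lower cost -> higher weight
    w /= w.sum()
    return np.tensordot(w, np.stack(patches), axes=1)
```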
Article
Depth-image-based rendering (DIBR) is a commonly used method for synthesizing additional views using the video-plus-depth (V+D) format. A critical issue with DIBR-based view synthesis is the lack of information behind foreground objects. This lack is manifested as disocclusions, i.e. holes next to the foreground objects in rendered virtual views, a consequence of the virtual camera “seeing” behind the foreground object. The disocclusions are larger in the extrapolation case, i.e. the single-camera case. Texture synthesis methods (inpainting methods) aim to fill these disocclusions by producing plausible texture content. However, virtual views inevitably exhibit both spatial and temporal inconsistencies at the filled disocclusion areas, depending on the scene content. In this paper, we propose a layered depth image (LDI) approach that improves the spatio-temporal consistency. In the process of LDI generation, depth information is used to classify the foreground and background in order to form a static scene sprite from a set of neighboring frames. Occlusions in the LDI are then identified and filled using inpainting, such that no disocclusions appear when the LDI data is rendered to a virtual view. In addition to the depth information, optical flow is computed to extract the stationary parts of the scene and to classify the occlusions in the inpainting process. Experimental results demonstrate that spatio-temporal inconsistencies are significantly reduced using the proposed method. Furthermore, subjective and objective qualities are improved compared to state-of-the-art reference methods.
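
As an illustration of the kind of depth-based foreground/background classification mentioned in this abstract, the sketch below splits the depths sampled around a disocclusion at the midpoint between their extremes. The midpoint rule and the function name are assumptions, not the paper's actual classifier, and larger depth values are assumed to mean farther from the camera.

```python
import numpy as np
from scipy import ndimage

def background_mask(depth, hole, margin=2):
    # Sample depths on a thin ring of known pixels around the hole, then
    # split at the midpoint between the nearest and farthest samples.
    ring = ndimage.binary_dilation(hole, iterations=margin) & ~hole
    local = depth[ring]
    thr = 0.5 * (local.min() + local.max())      # simple two-mode split
    return (depth > thr) & ~hole                 # background pixels usable for filling
```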
Article
We propose a novel depth-map refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic displays. In order to generate realistic content for virtual views, high-quality depth maps are critical to the view synthesis results; refining the depth maps is therefore the main challenge in this task. We propose an iterative depth refinement algorithm, comprising error detection and error correction, to correct errors in the depth map. Error detection targets two types of error: across-view color-depth-inconsistency errors and local color-depth-inconsistency errors. Erroneous pixels are then corrected by sampling local candidates. A trilateral filter that incorporates intensity, spatial, and temporal terms into the filter weighting is applied to enhance spatial and temporal consistency across frames, so that the virtual views can be better synthesized from the refined depth maps. To combine the two warped images, disparity-based view interpolation is introduced to alleviate translucent artifacts. Finally, a directional filter is applied to reduce aliasing around object boundaries and to generate multiple high-quality virtual views between the two views. Through experiments on benchmark image and video datasets, we demonstrate the superior image quality of the virtual views synthesized by the proposed algorithm over state-of-the-art view synthesis methods.
Article
View synthesis is an efficient solution for producing content for 3DTV and FTV. However, proper handling of disocclusions is a major challenge in view synthesis. Inpainting methods offer solutions for handling disocclusions, though limitations in foreground-background classification cause the holes to be filled with inconsistent textures. Moreover, state-of-the-art methods fail to identify and fill disocclusions at intermediate distances between foreground and background, through which the background may be visible in the virtual view (translucent disocclusions). Aiming at improved rendering quality, we introduce a layered depth image (LDI) in the original camera view, in which we identify and fill the occluded background so that, when the LDI data is rendered to a virtual view, no disocclusions appear and views with consistent data are produced, also handling translucent disocclusions. Moreover, the proposed foreground-background classification and inpainting fill the disocclusions consistently with neighboring background texture. Based on objective and subjective evaluations, the proposed method outperforms state-of-the-art methods at the disocclusions.
Conference Paper
Depth-based inpainting methods can solve disocclusion problems occurring in depth-image-based rendering. However, inpainting in this context suffers from artifacts along foreground objects due to foreground pixels in the patch matching. In this paper, we address the disocclusion problem with a refined depth-based inpainting method. The novelty lies in classifying the foreground and background using available local depth information, whereby foreground information is excluded from both the source region and the target patch. In the proposed inpainting method, the local depth constraints imply inpainting only the background data and preserving the foreground object boundaries. The results of the proposed method are compared with those of state-of-the-art inpainting methods. The experimental results demonstrate improved objective quality and better visual quality along the object boundaries.