Figure - uploaded by Pierre Bénard
Summary of the properties of the different line extraction approaches.


Source publication
Article
Full-text available
Non‐photorealistic rendering (NPR) algorithms allow the creation of images in a variety of styles, ranging from line drawing and pen‐and‐ink to oil painting and watercolour. These algorithms provide greater flexibility, control and automation over traditional drawing and painting. Despite significant progress over the past 15 years, the application...

Context in source publication

Context 1
... Table 1 compares the various line extraction techniques we have presented with respect to two key properties: their ability to provide a coherent parameterization and to deal with level of detail. Object-space methods provide a line parameterization that allows complex rendering effects, in particular stroke texture mapping, but may introduce strong temporal artifacts. ...

Similar publications

Article
Full-text available
Non-photorealistic rendering (NPR) creates images with artistic styles of paintings. In this field, a number of methods of converting photographed images into non-photorealistic ones have been developed, and can be categorized into filter-based and exemplar-based approaches. In this paper, we focus on the exemplar-based approach and propose a novel...

Citations

... A general challenge for line stylization methods is the generation of temporally coherent stylized lines. Recomputing the lines at every time frame leads to lines sliding or popping up [6]. By propagating the lines from frame to frame, Kalnins et al. [18] provided a solution that works at interactive rates for models of medium complexity. ...
Preprint
Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty -- e.g., in weather forecasts -- is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.
... Therefore, the stylized lines should have temporal continuity. It is not sufficient to simply compute the lines at every time frame, as this leads to visual artifacts such as popping or sliding [26]. ...
... Therefore, the stylized lines should have temporal continuity. It is not sufficient to simply compute the lines at every time frame, as this leads to visual artifacts such as popping or sliding [BBT11]. ...
Conference Paper
Full-text available
Data are often subject to some degree of uncertainty, whether aleatory or epistemic. This applies both to experimental data acquired with sensors as well as to simulation data. Displaying these data and their uncertainty faithfully is crucial for gaining knowledge. Specifically, the effective communication of the uncertainty can influence the interpretation of the data and the users' trust in the visualization. However, uncertainty-aware visualization has gotten little attention in molecular visualization. When using the established molecular representations, the physicochemical attributes of the molecular data usually already occupy the common visual channels like shape, size, and color. Consequently, to encode uncertainty information, we need to open up another channel by using feature lines. Even though various line variables have been proposed for uncertainty visualizations, they have so far been primarily used for two-dimensional data and there has been little perceptual evaluation. Therefore, we conducted a perceptual study to determine the suitability of the line variables sketchiness, dashing, grayscale, and width for distinguishing several uncertainty values on molecular surfaces.
... Dedicated non-photorealistic rendering (NPR) frameworks such as BNPR, LOLLIPOPshaders or Artineering are more commonly used by the artistic community and in production pipelines, but mimicking complex artistic styles remains challenging, especially for animated scenes. As stated by Bénard et al. [BBT11], perfectly and simultaneously preserving the impression that a stylized image is drawn on a flat canvas (i.e., flatness) while ensuring a strong correlation between the apparent motion field of the 3D scene and the motion of its stylized depiction (i.e., motion coherence) and minimizing abrupt changes (i.e., temporal continuity) during an animation is impossible. Previous approaches, either mark-based or texture-based, compromise between these contradictory constraints, and thus manage to cover different ranges of styles from discrete to continuous. ...
... The survey of Bénard et al. [BBT11] distinguishes between two main families of approaches when stylizing color regions of 3D animations: texture-based and mark-based methods. Our work belongs to the latter but borrows key ideas from the former. ...
... The first three criteria, proposed by Bénard et al. [BBT11], apply to any stylization method that addresses the temporal coherence problem. The other three measure the versatility of such a method. ...
Article
Full-text available
We present a novel temporally coherent stylized rendering technique working entirely at the compositing stage. We first generate a distribution of 3D anchor points using an implicit grid based on the local object positions stored in a G‐buffer, hence following object motion. We then draw splats in screen space anchored to these points so as to be motion coherent. To increase the perceived flatness of the style, we adjust the anchor points density using a fractalization mechanism. Sudden changes are prevented by controlling the anchor points opacity and introducing a new order‐independent blending function. We demonstrate the versatility of our method by showing a large variety of styles thanks to the freedom offered by the splats content and their attributes that can be controlled by any G‐buffer.
... Video stylization has been an active topic in Non-Photorealistic Rendering for more than two decades [Lit97,Mei96], as surveyed by Bénard et al. [BBT11], Kyprianidis et al. [KCWI13], and Rosin and Collomosse [RC13]. We first discuss the main approaches to stylize videos, before discussing related methods on motion estimation and processing. ...
Thesis
Digital tools bring new ways of creating, for accomplished artists as well as for any individual willing to create. In this thesis, I am interested in two different aspects of helping artists: interpreting their creations and generating new content. I first study how to interpret a sketch as a 3D object. We propose a data-driven approach that tackles this challenge by training deep convolutional neural networks (CNNs) to predict the occupancy of a voxel grid from a line drawing. We integrate our CNNs in an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. We then complement this technique with a geometric method that refines the quality of the final object. To do so, we train an additional CNN to predict higher-resolution normal maps from each input view. We then fuse these normal maps with the voxel-grid prediction by optimizing for the final surface. We train all of these networks by rendering synthetic contour drawings from procedurally generated abstract shapes. In the second part, I present a method to generate stylized videos with a look reminiscent of traditional 2D animation. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. Inspired by cut-out animation, we propose to modify the motion of the sequence so that it is composed of 2D rigid motions. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. Applying existing stylization algorithms to the new sequence produces a stylized video more similar to 2D animation. Although the two parts of my thesis lean on different methods, they both rely on traditional techniques used by artists: either by understanding how they draw objects or by taking inspiration from how they simplify motion in 2D animation.
... A "calm" style with only small deviations from the base path leads to a rather smooth animation, whereas a "wild" style with strong geometric distortions or using strong textures may lead to visual artifacts such as popping or sliding. This is a recurrent but still mostly open problem in non-photorealistic rendering (Bénard et al., 2011). In the following, we will summarize the main solutions to improve temporal coherence of line drawing animations. ...
Preprint
Full-text available
This tutorial describes the geometry and algorithms for generating line drawings from 3D models, focusing on occluding contours. The geometry of occluding contours on meshes and on smooth surfaces is described in detail, together with algorithms for extracting contours, computing their visibility, and creating stylized renderings and animations. Exact methods and hardware-accelerated fast methods are both described, and the trade-offs between different methods are discussed. The tutorial brings together and organizes material that, at present, is scattered throughout the literature. It also includes some novel explanations, and implementation tips. A thorough survey of the field of non-photorealistic 3D rendering is also included, covering other kinds of line drawings and artistic shading.
... To produce an artifact usable outside of Fluid Brush, the system captures frames to create short animations, or cinemagraphs. This artifact must seamlessly loop to give a sense of continuous motion, but since the only animation in these sequences is particle movement, we do not need to employ more expensive (or extensive) methods such as those described in the survey paper by Bénard et al. [Bénard et al. 2011]. ...
Conference Paper
Digital media allows artists to create a wealth of visually-interesting effects that are impossible in traditional media. This includes temporal effects, such as cinemagraph animations, and expressive fluid effects. Yet these flexible and novel media often require highly technical expertise, which is outside a traditional artist's skill with paintbrush or pen. Fluid Brush acts a form of novel, digital media, which retains the brush-based interactions of traditional media, while expressing the movement of turbulent and laminar flow. As a digital media controlled through a non-technical interface, Fluid Brush allows for a novel form of painting that makes fluid effects accessible to novice users and traditional artists. To provide an informal demonstration of the medium's effects, applications, and accessibility, we asked designers, traditional artists, and digital artists to experiment with Fluid Brush. They produced a variety of works reflective of their artistic interests and backgrounds.
... Stylizing animated 3D scenes is challenging, since desired properties (temporal continuity, motion coherence and flatness) cannot all be satisfied at once [Bénard et al. 2011]. For instance, following the motion field of a scene while keeping a 2D visual impression leads to an inherent contradiction: when style marks are attached to the surface of 3D objects (e.g. using mapped textures or anchored strokes), the resulting motion is coherent but the flatness is broken because the density and size of style marks will vary in screen space depending on the location of the camera (zooming) or on the surface orientation (foreshortening). ...
... Stylization of 3D animated objects is a widely studied subject among the expressive rendering community. We refer the reader to Bénard et al. [2011] for a state-of-the-art report on this topic. Depending on their use of geometrical information, stylization methods can operate in image space, where no 3D information is available, or in object space. ...
Conference Paper
One of the qualities sought in expressive rendering is the 2D impression of the resulting style, called flatness. In the context of 3D scenes, screen-space stylization techniques are good candidates for flatness as they operate in the 2D image plane, after the scene has been rendered into G-buffers. Various stylization filters can be applied in screen-space while making use of the geometrical information contained in G-buffers to ensure motion coherence. However, this means that filtering can only be done inside the rasterized surface of the object. This can be detrimental to some styles that require irregular silhouettes to be convincing. In this paper, we describe a post-processing pipeline that allows stylization filters to extend outside the rasterized footprint of the object by locally "inflating" the data contained in G-buffers. This pipeline is fully implemented on the GPU and can be evaluated at interactive rates. We show how common image filtering techniques, when integrated in our pipeline and in combination with G-buffer data, can be used to reproduce a wide range of "digitally-painted" appearances, such as directed brush strokes with irregular silhouettes, while keeping enough motion coherence.
... -A real-time rendering system running on the GPU, which is appropriate for integration into production rendering systems working with both computer-generated and captured RGBZ images. Bénard et al. give a detailed survey of stylized rendering in [2] and categorize related work as texture-based and primitive-based methods. ...
... Initially, we create a list of low-discrepancy positions with the 2D Halton sequence [25] of bases (2,3). This sequence, as well as any of its contiguous subsequences, has a uniform distribution, which plays an important role in our method. ...
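The Halton sequence mentioned in the excerpt is a standard low-discrepancy construction; a minimal sketch of generating 2D points with bases (2, 3) is shown below. The function names are illustrative, not taken from the cited paper:

```python
def halton(index, base):
    """index-th element of the van der Corput sequence in the given base
    (indices start at 1); reverses the base-b digits of index after the
    radix point to obtain a value in [0, 1)."""
    f, result = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton_2d(n, bases=(2, 3)):
    """First n points of the 2D Halton sequence with the given coprime bases."""
    return [(halton(i, bases[0]), halton(i, bases[1])) for i in range(1, n + 1)]
```

Because consecutive subsequences of this sequence remain well distributed, points can be consumed or rejected incrementally without clustering, which is the property the excerpt relies on.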
Article
Full-text available
This paper presents an image-space stroke rendering algorithm that provides temporally coherent placement of lines at particles that are moving with object surfaces. We generate particles in image space and move them according to an image-space velocity field. Consistent image-space density is achieved by a deterministic rejection-based algorithm that uses low-discrepancy series to filter out overpopulated areas and to fill in underpopulated regions. Our line stabilization method can solve the temporal continuity problems of image-space techniques. The multi-pass algorithm is implemented entirely on the GPU using geometry shaders and vertex transform feedback. Our method provides high-quality results and is implemented as an interactive post processing step. We also provide a wide toolset for artists to control the final rendering style and extended the method to process real-life RGBZ footage.
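The deterministic rejection idea described in the abstract can be illustrated with a simplified CPU sketch: points from a low-discrepancy series are accepted only where the local density allows it. This brute-force minimum-distance test is an assumption for clarity, not the paper's GPU implementation:

```python
import math

def filter_min_distance(points, r):
    """Deterministic rejection: keep a point only if it lies at least r
    away from every previously accepted point, thinning overpopulated
    areas while preserving the input ordering (brute force for clarity)."""
    accepted = []
    for p in points:
        if all(math.dist(p, q) >= r for q in accepted):
            accepted.append(p)
    return accepted
```

Feeding Halton-ordered points through such a filter yields a roughly even image-space density; in the paper's setting the same idea runs per frame on the GPU, with particles advected by an image-space velocity field before filtering.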