Fig 5. An example of contour lines.

Source publication
Conference Paper
Full-text available
We propose a new rendering technique that produces 3-D images with enhanced visual comprehensibility. Shape features can be readily understood if certain geometric properties are enhanced. To achieve this, we develop drawing algorithms for discontinuities, edges, contour lines, and curved hatching. All of them are realized with 2-D image processing...
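Read against the figure above, the paper's core idea is that such lines fall out of simple per-pixel operations on rendered buffers. As a rough NumPy sketch of one such operation, assuming only a depth buffer as input (the threshold and toy data are invented for illustration; this is not the authors' code), a depth-discontinuity pass might look like:

```python
import numpy as np

def depth_discontinuities(depth, threshold=0.05):
    # First-order forward differences approximate the depth gradient;
    # large magnitudes mark profile/silhouette edges in image space.
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return np.maximum(gx, gy) > threshold

# Toy depth buffer: a near object (depth 0.3) on a far background (1.0).
depth = np.full((8, 8), 1.0)
depth[2:6, 2:6] = 0.3
print(depth_discontinuities(depth).astype(int))
```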

Similar publications

Article
Full-text available
This paper describes a novel technique for determining a useful dimension for a time-delay embedding of an arbitrary time series, along with the individual time delays for each dimension. A binary-string genetic algorithm is designed to search for a variable number of time delays that minimize the standard deviation of the distance between each emb...
Article
Full-text available
A conceptual framework is provided in which to think of the relationships between the three-dimensional structure of physical space and the geometric properties of a set of cameras that provide pictures from which measurements can be made. We usually think of physical space as being embedded in a three-dimensional Euclidean space, in which measurem...
Article
Full-text available
Context. High-resolution numerical methods have been developed for nonlinear, discontinuous problems as they appear in simulations of astrophysical objects. One of the strategies applied is the concept of artificial viscosity. Aims. Grid-based numerical simulations ideally utilize problem-oriented grids in order to minimize the necessary number of...
Article
Full-text available
In recent years, the object-based approach has gained importance over the traditional pixel-based classification approach for classifying high-spatial-resolution satellite imagery. Object-based image analysis consists of two basic processing steps. The first of these is the grouping of pixels with similar spectral properties into homogeneous...
Article
Full-text available
Based on the concepts of artificially microstructured materials, i.e. metamaterials, we present here the first practical realization of a radial wave crystal. This type of device was introduced as a theoretical proposal in the field of acoustics, and can be briefly defined as a structured medium with radial symmetry, where the constitutive paramete...

Citations

... These algorithms often encounter challenges in the modern deferred rendering pipeline. Deferred rendering [14] is a technique that separates the geometric stage and lighting stage into two passes. In the geometry pass, all the geometry information is rendered into G-buffers to be used by the lighting pass for shading. ...
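As a schematic of that two-pass split, the fragment below shades entirely from G-buffers that a notional geometry pass has already filled; the buffer contents and the single point light are invented for illustration, not taken from any cited system:

```python
import numpy as np

H, W = 4, 4
rng = np.random.default_rng(0)

# G-buffers as the geometry pass would leave them (contents invented).
position = rng.random((H, W, 3))                     # world-space position
normal = rng.normal(size=(H, W, 3))
normal /= np.linalg.norm(normal, axis=-1, keepdims=True)
albedo = np.full((H, W, 3), 0.8)                     # diffuse color

# Lighting pass: shading reads only the G-buffers, never the geometry.
light_pos = np.array([0.5, 0.5, 2.0])
to_light = light_pos - position
to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
ndotl = np.clip(np.sum(normal * to_light, axis=-1), 0.0, None)
shaded = albedo * ndotl[..., None]                   # Lambertian result
print(shaded.shape)                                  # (4, 4, 3)
```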
Preprint
Full-text available
Neural rendering provides a fundamentally new way to render photorealistic images. Similar to traditional light-baking methods, neural rendering utilizes neural networks to bake representations of scenes, materials, and lights into latent vectors learned from path-tracing ground truths. However, existing neural rendering algorithms typically use G-buffers to provide position, normal, and texture information of scenes, which are prone to occlusion by transparent surfaces, leading to distortions and loss of detail in the rendered images. To address this limitation, we propose a novel neural rendering pipeline that accurately renders the scene behind transparent surfaces with global illumination and variable scenes. Our method separates the G-buffers of opaque and transparent objects, retaining G-buffer information behind transparent objects. Additionally, to render the transparent objects with permutation invariance, we designed a new permutation-invariant neural blending function. We integrate our algorithm into an efficient custom renderer to achieve real-time performance. Our results show that our method is capable of rendering photorealistic images with variable scenes and viewpoints, accurately capturing complex transparent structures along with global illumination. Our renderer can achieve real-time performance ($256\times 256$ at 63 FPS and $512\times 512$ at 32 FPS) on scenes with multiple variable transparent objects.
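The abstract does not spell out the form of its blending function, but the standard route to permutation invariance over a set of layers is to pool a learned per-layer embedding with a symmetric operation such as a sum. The sketch below shows only that generic pattern, with a toy linear "network"; it is not the paper's architecture:

```python
import numpy as np

def embed(layer, w):
    # Stand-in for a learned per-layer network (here a single linear map).
    return np.tanh(layer @ w)

def blend(layers, w):
    # Sum-pooling the embeddings makes the output identical under any
    # reordering of the input layers, i.e., permutation invariant.
    return sum(embed(f, w) for f in layers)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
layers = [rng.normal(size=4) for _ in range(3)]
print(np.allclose(blend(layers, w), blend(layers[::-1], w)))  # True
```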
... Raster-based stroke rendering can adjust the number of strokes by rendering at different scales, e.g. [ST90,LMLH07], but does not produce vector graphics suitable for shape-based stylization or editing. Methods that aim to extract the perceived or intended curve network from human-drawn sketches, either from input raster images (i.e., line drawing vectorization [FLB16,SBBB20]) or from input vector curves (i.e., stroke consolidation [BTS05, OK11, SSISI16, PvMLV * 21, LABS23]) require some degree of line simplification due to the overdrawn nature of human sketches. ...
Article
Full-text available
Shape‐conveying line drawings generated from 3D models normally create closed regions in image space. These lines and regions can be stylized to mimic various artistic styles, but for complex objects, the extracted topology is unnecessarily dense, leading to unappealing and unnatural results under stylization. Prior works typically simplify line drawings without considering the regions between them, and lines and regions are stylized separately, then composited together, resulting in unintended inconsistencies. We present a method for joint simplification of lines and regions simultaneously that penalizes large changes to region structure, while keeping regions closed. This feature enables region stylization that remains consistent with the outline curves and underlying 3D geometry.
... Moreover, the intrinsic scalability of Many-Lights techniques makes them also often suitable for GPU parallelization. While current GPUs are able to process any number of light sources, Many-Lights techniques in real-time applications are usually applied in a deferred rendering [19] pipeline, considering just a single light bounce. Some techniques are focused on the optimization of the interleaved sampling [20] approach, which uses disjoint subsets of VPLs to compute illumination for adjacent pixels, making the processing faster by organizing data in GPU-friendly structures [21], and addressing issues that led to aliasing artifacts and problems with glossy materials [22]. ...
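To make the interleaved-sampling idea concrete: each pixel evaluates only one of several disjoint VPL subsets, chosen by its position in a small repeating pattern. The NumPy version below is schematic, with invented VPL data and a trivial per-light contribution; a real renderer would follow it with the filtering pass mentioned above:

```python
import numpy as np

H, W, N_VPL, P = 8, 8, 16, 2          # P x P interleaving pattern
rng = np.random.default_rng(1)
vpl_intensity = rng.random(N_VPL)     # stand-in for full VPL records

# Partition the VPLs into P*P disjoint subsets.
subsets = [np.arange(N_VPL)[i::P * P] for i in range(P * P)]

image = np.zeros((H, W))
for y in range(H):
    for x in range(W):
        # Each pixel evaluates only the subset of its pattern cell,
        # cutting the per-pixel light count by a factor of P*P.
        s = subsets[(y % P) * P + (x % P)]
        image[y, x] = vpl_intensity[s].sum()  # trivial "shading"

# A subsequent filtering pass would blend neighbors so every pixel
# effectively integrates over all subsets.
print(image)
```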
Article
Full-text available
In recent years, many research projects on real-time rendering have focused on the introduction of global illumination effects in order to improve the realism of a virtual scene. The main goal of these works is to find a compromise between the achievable quality of the resulting rendering and the intrinsic high computational cost of global illumination. An established approach is based on the use of Virtual Point Lights, i.e., "fictitious" light sources that are placed on surfaces in the scene. These lights simulate the contribution of light rays emitted by the light sources and bouncing off different objects. Techniques using Virtual Point Lights are often called Many-Lights techniques. In this paper, we propose an extension of a real-time Many-Lights rendering technique characterized by the integration of photometric data in the process of Virtual Point Light distribution. We base the definition of light sources and the creation of Virtual Point Lights on the description provided in the IES standard format, created by the Illuminating Engineering Society (IES).
... A detailed survey of line-drawing methods for 3D models can be found in Bénard and Hertzmann's tutorial [10]. Saito and Takahashi [11] first created stylized contour lines and curved hatching from 3D models by using 2D image processing operations. Winkenbach and Salesin [12] extended the work of [11] to generate more realistic line drawings from 3D models by integrating 2D and 3D rendering. ...
... Saito and Takahashi [11] first created stylized contour lines and curved hatching from 3D models by using 2D image processing operations. Winkenbach and Salesin [12] extended the work of [11] to generate more realistic line drawings from 3D models by integrating 2D and 3D rendering. Zhang et al. [13] presented a cellular automaton model to simulate the Suibokuga-like painting of 3D trees. ...
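The contour lines in the figure above can be produced in this same image-processing spirit by quantizing a per-pixel scalar buffer and marking level changes. The following sketch is a plausible reading of that operation with an invented toy field, not the paper's exact operator:

```python
import numpy as np

def contour_lines(scalar, levels=8):
    # Quantize the buffer; pixels whose level differs from a neighbor's
    # lie on an iso-value boundary, i.e., a contour line.
    q = np.floor(scalar * levels).astype(int)
    edges = np.zeros(scalar.shape, dtype=bool)
    edges[:, 1:] |= q[:, 1:] != q[:, :-1]
    edges[1:, :] |= q[1:, :] != q[:-1, :]
    return edges

# Toy scalar field: a radial ramp, so the contours form rings.
yy, xx = np.mgrid[0:32, 0:32]
field = np.hypot(xx - 16, yy - 16) / 23.0
print(contour_lines(field).sum(), "contour pixels")
```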
Article
Full-text available
Creating visually pleasing stylized ink paintings from 3D models is a challenge in robotic manipulation. We propose a semi-automatic framework that can extract expressive strokes from 3D models and draw them in oriental ink painting styles by using a robotic arm. The framework consists of a simulation stage and a robotic drawing stage. In the simulation stage, geometrical contours are automatically extracted from a certain viewpoint and a neural network is employed to create simplified contours. Then, expressive digital strokes are generated after interactive editing according to the user's aesthetic understanding. In the robotic drawing stage, an optimization method is presented for drawing smooth, physically consistent strokes that follow the digital strokes, and two oriental ink painting styles, termed Noutan (shade) and Kasure (scratchiness), are applied to the strokes by robotic control of a brush's translation, dipping, and scraping. Unlike existing methods that concentrate on generating paintings from 2D images, our framework has the advantage of rendering stylized ink paintings from 3D models by using a consumer-grade robotic arm. We evaluate the proposed framework by taking three standard models and a user-defined model as examples. The results show that our framework is able to draw visually pleasing oriental ink paintings with expressive strokes.
... The techniques of the first and second categories are usually used for stylizable lines. Pioneering work was done by Appel [1], Saito and Takahashi [35], and Dooley and Cohen [12], who determined silhouettes, contours, and various line attributes to improve the spatial impression. Feature lines are placed at spatially important regions to convey the shape of the surface. ...
Preprint
Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty -- e.g., in weather forecasts -- is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.
... Raster methods, based on edge detection of an image buffer, are the simplest way to approximate the contour, e.g., [Decaudin 1996; Saito and Takahashi 1990]; however, these methods do not produce a vector representation of the contours. ...
Preprint
Computing occluding contours is a key building block of non-photorealistic rendering, but producing contours with consistent visibility has been notoriously challenging. This paper describes the first general-purpose smooth surface construction for which the occluding contours can be computed in closed form. For a given input mesh and camera viewpoint, we produce a $G^1$ piecewise-quadratic surface approximating the mesh. We show how the image-space occluding contours of this representation may then be described as piecewise rational curves. We show that this method produces smooth contours with consistent visibility much more efficiently than the state-of-the-art.
... (e.g., vertex, fragment) to generalized processing cores [Hughes et al. 2013]. By supporting multiple output values from the fragment-shader stage, a G-buffer [Saito and Takahashi 1990; Deering et al. 1988] is available that can hold intermediate computations per fragment (e.g., position, normal, diffuse color). ...
Preprint
Full-text available
Hardware-based triangle rasterization is still the prevalent method for generating images at real-time interactive frame rates. With the availability of a programmable graphics pipeline a large variety of techniques are supported for evaluating lighting and material properties of fragments. However, these techniques are usually restricted to evaluating local lighting and material effects. In addition, view-point changes require the complete processing of scene data to generate appropriate images. Reusing already rendered data in the frame buffer for a given view point by warping for a new viewpoint increases navigation fidelity at the expense of introducing artifacts for fragments previously hidden from the viewer. We present fragment-history volumes (FHV), a rendering technique based on a sparse, discretized representation of a 3d scene that emerges from recording all fragments that pass the rasterization stage in the graphics pipeline. These fragments are stored into per-pixel or per-octant lists for further processing; essentially creating an A-buffer. FHVs using per-octant fragment lists are view independent and allow fast resampling for image generation as well as for using more sophisticated approaches to evaluate material and lighting properties, eventually enabling global-illumination evaluation in the standard graphics pipeline available on current hardware. We show how FHVs are stored on the GPU in several ways, how they are created, and how they can be used for image generation at high rates. We discuss results for different usage scenarios, variations of the technique, and some limitations.
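The data structure at the heart of this, per-pixel fragment lists (an A-buffer), can be sketched compactly: every fragment that survives rasterization is appended to its pixel's list, keeping depth and shading inputs for later passes. The fragment fields below are invented and all GPU specifics are omitted:

```python
from collections import defaultdict

# Per-pixel fragment lists: every rasterized fragment is kept,
# not just the nearest one as in a plain depth buffer.
fragments = defaultdict(list)

def record(x, y, depth, color):
    # Called once per fragment that survives rasterization.
    fragments[(x, y)].append((depth, color))

# Three fragments land on the same pixel (values invented).
record(5, 5, 0.9, "blue")
record(5, 5, 0.2, "red")
record(5, 5, 0.5, "green")

# A later resolve pass sorts each list by depth and can pick the nearest
# fragment, composite for transparency, or re-shade for a new viewpoint.
print(min(fragments[(5, 5)]))  # (0.2, 'red')
```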
... While previous approaches have focused on reproducing the photorealistic appearance of objects, the visualization of the radiance field with non-photorealistic styles remains unexplored. Computationally simulating the non-photorealistic styles and illustration techniques, such as perceptual abstraction and feature line drawing, helps digital artists pursue their expression and facilitates visual communication for illustrative purposes [DS02,ST90,GGSC98]. ...
... NPR techniques automatically simulate artistic styles found in human-made artworks by salient geometric or chromatic features of the data. Previous research has tackled a variety of styles such as painterly [Mei96], cartoonish [LMHB00], and feature lines [ST90,DFRS03] not only for artistic purposes, but also to facilitate visual communication among people [DS02,GGSC98]. Existing works have produced stylized results from various input data types such as meshes [LMHB00], photographs [DS02], and videos [WOG06]. ...
Article
Full-text available
Volumetric radiance fields have recently gained significant attention as promising representations of photorealistic scene reconstruction. However, the non‐photorealistic rendering of such a representation has barely been explored. In this study, we investigate the artistic posterization of the volumetric radiance fields. We extend the recent palette‐based image‐editing framework, which naturally introduces intuitive color manipulation of the posterized results, into the radiance field. Our major challenge is applying stylization effects coherently across different views. Based on the observation that computing a palette frame‐by‐frame can produce flickering, we propose pre‐computing a single palette from the volumetric radiance field covering its entire visible color. We present a method based on volumetric visibility to sample visible colors from the radiance field while avoiding occluded and noisy regions. We demonstrate our workflow by applying it to pre‐trained volumetric radiance fields with various stylization effects. We also show that our approach can produce more coherent and robust stylization effects than baseline methods that compute a palette on each rendered view.
... Previous methods use voting schemes and other heuristics to fix visibility, but none are exact. Computing curve visibility with image buffers [Cole and Finkelstein 2010; Eisemann et al. 2008; Saito and Takahashi 1990] or with ray tests against smooth geometry [Elber and Cohen 1990] has similar numerical problems. Our method builds most directly on the method of Bénard et al. [2014], which frequently guarantees correct results, but is very expensive to compute. ...
Article
Full-text available
This paper proposes a method for computing the visible occluding contours of subdivision surfaces. The paper first introduces new theory for contour visibility of smooth surfaces. Necessary and sufficient conditions are introduced for when a sampled occluding contour is valid, that is, when it may be assigned consistent visibility. Previous methods do not guarantee these conditions, which helps explain why smooth contour visibility has been such a challenging problem in the past. The paper then proposes an algorithm that, given a subdivision surface, finds sampled contours satisfying these conditions, and then generates a new triangle mesh matching the given occluding contours. The contours of the output triangle mesh may then be rendered with standard non-photorealistic rendering algorithms, using the mesh for visibility computation. The method can be applied to any triangle mesh, by treating it as the base mesh of a subdivision surface.