Figure 4 - uploaded by Zhiyong Huang
User Specified Control Hairs 

Source publication
Conference Paper
Maximizing visual effect is a major problem in real-time animation. A real-time hair animation framework was proposed previously by C. K. Koh and Z. Huang (2000, 2001) based on 2D representation and texture mapping. One problem is that it lacks the volumetric effect due to its 2D nature. This paper presents a technique using the U-shape strip to...

Context in source publication

Context 1
... In our method, human hair modeling uses both interactive and automatic processes. Interactively, given a scalp model, the user specifies a few "control hairs", each with the same number of control points (Figure 4). The hairstyle model can then be derived by an automatic process. The control points of the different control hairs are first connected horizontally. These horizontally connected control points are treated as NURBS control points and tessellated using the Oslo algorithm [2]. The tessellated points are then connected vertically and used as NURBS control points to be tessellated again, vertically. Two neighboring sequences of these tessellated points are connected to form a hair strip (Figure 5(a)). A hairstyle result is shown in Figure 5(b). This can be done at user-specified levels of detail. Finally, in order to improve the visual effect, we apply texture mapping with the alpha channel to the hair strips of the hair model (Figure 5(c)). Three more examples of different hairstyles are shown in Figure 6. Our method is implemented using Java3D 1.3.1 and JDK 1.4.0 on Windows XP Professional Edition version 5.1, Service Pack 1. The hardware configuration is dual Xeon processors at 2.8 GHz, 2 GB RAM, and a 3DLabs Wildcat III 6110 graphics card. Experiments have been conducted on this machine to evaluate the performance of our proposed framework; the results, shown in Table 1, indicate that real-time performance has been achieved. Normally, about 50 hair strips are enough for a typical hairstyle model. By using the U-shape strips, the visual effect can be improved significantly (Figure 7). In Figure 7(a), 2D hair strips are used, which makes the hair look single-layered and lacking in volume. In Figure 7(b), by contrast, we can clearly observe the volumetric effect of the hairstyle. By enabling inter-strip springs, different animation effects can be achieved. Figure 8(a) is the initial position of the hair.
Figure 8(b) and Figure 8(c) show the different effects achieved with inter-collision detection of hair strips enabled (Figure 8(b)) and disabled (Figure 8(c)), respectively. Figure 8(d) is the rest position of the hair. In wireframe, Figure 9 gives a clearer view of the inter-collisions occurring in Figure 8(c). Finally, Figure 10 shows 3 snapshots of an animation sequence; 52 hair strips are used in this animation. The video sequences can be found at www.comp.nus.edu.sg/~huangzy/hair.htm. We propose an enhanced framework for real-time hair animation based on a 2D-strip hair model. By introducing U-shape strips, the volumetric visual effect of the model is greatly ...
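The two-pass tessellation described in the context passage can be sketched roughly as follows. This is a minimal, hypothetical Python illustration, not the authors' Java3D implementation: a uniform cubic B-spline sampler stands in for the Oslo knot-insertion algorithm used in the paper, and all function names are assumptions.

```python
# Hypothetical sketch: refine a grid of control-hair points in two passes.
# A uniform cubic B-spline sampler stands in for the Oslo algorithm.

def sample_uniform_cubic_bspline(pts, samples_per_span=4):
    """Sample a uniform cubic B-spline defined by control points `pts`
    (a list of (x, y, z) tuples); returns the tessellated polyline."""
    if len(pts) < 4:
        return list(pts)
    out = []
    for i in range(len(pts) - 3):
        p0, p1, p2, p3 = pts[i:i + 4]
        for s in range(samples_per_span):
            t = s / samples_per_span
            # uniform cubic B-spline basis functions (sum to 1 for all t)
            b0 = (1 - t) ** 3 / 6.0
            b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
            b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
            b3 = t ** 3 / 6.0
            out.append(tuple(b0 * a + b1 * b + b2 * c + b3 * d
                             for a, b, c, d in zip(p0, p1, p2, p3)))
    return out

def tessellate_control_hairs(control_hairs, density=4):
    """control_hairs: the control hairs as lists of (x, y, z) control
    points, all with the same point count.  Pass 1 connects corresponding
    points across hairs horizontally and refines each row; pass 2 refines
    each resulting column vertically.  Neighboring columns of the returned
    grid would then be joined into hair strips."""
    n_points = len(control_hairs[0])
    # pass 1: one horizontal row per control-point index
    rows = [sample_uniform_cubic_bspline([h[j] for h in control_hairs], density)
            for j in range(n_points)]
    # pass 2: refine each column of the horizontally refined grid
    return [sample_uniform_cubic_bspline([row[c] for row in rows], density)
            for c in range(len(rows[0]))]
```

For example, with 6 control hairs of 5 control points each and density 4, the horizontal pass yields 12 refined columns and the vertical pass yields 8 points per column; each column is one tessellated hair curve, and adjacent columns form one strip.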

Similar publications

Article
Introduction: We present a method for creating consistent 3D models from 2D vector drawings to add effects such as shading, shadowing, or texture mapping to characters in cel animations. We say 3D models are consistent if they have one-to-one vertex correspondence and their 2D projections coincide with the input drawings. To create such models, we...

Citations

... Each has its own advantages, but not all of them can be used for simulations. Meshes or strips are simple and efficient and especially well supported by existing modeling software, but they are not appropriate for realistic simulations [KH00, KN00, LH03]. To reduce the modeling effort while also creating realistic hair models, vector-field-based modeling methods have been introduced. However, their method is limited to natural hairstyles without constraints. ...
Article
As the deformation behaviors of hair strands vary greatly depending on the hairstyle, the computational cost and accuracy of hair movement simulations can be significantly improved by applying simulation methods specific to a certain style. This paper makes two contributions with regard to the simulation of various hairstyles. First, we propose a novel method to reconstruct simulatable hair strands from hair meshes created by artists. Manually created hair meshes consist of numerous mesh patches, and the strand reconstruction process is challenged by the absence of connectivity information among the patches for the same strand and the omission of hidden parts of strands due to the manual creation process. To this end, we develop a two‐stage spectral clustering method for estimating the degree of connectivity among patches and a strand‐growing method that preserves hairstyles. Next, we develop a hairstyle classification method for style‐specific simulations. In particular, we propose a set of features for efficient classification and show that classifiers trained with the proposed features have higher accuracy than those trained with naive features. Our method applies efficient simulation methods according to the hairstyle without specific user input, and is thus well suited to real‐time simulation.
... Recently, much work has been done on hair simulation. For instance, using NURBS surfaces to represent groups of strands has been proposed by [1,2,3]. These NURBS surfaces are referred to as hair strips. ...
... These parametric representations can involve surfaces to represent hair or wisps in the form of trigonal prisms or generalized cylinders. a) Parametric Surface: Using two-dimensional surfaces to represent groups of strands has become a common approach to modeling hair [16], [17], [18]. Typically, these methods use a patch of a parametric surface, such as a NURBS surface, to reduce the number of geometric objects used to model a section of hair. ...
... In order to alleviate this flat appearance of hair, Liang and Huang [17] use three polygon meshes to warp a 2D strip into a U-shape, which gives more volume to the hair. In this method, each vertex of the 2D strip is projected onto the scalp and the vertex is then connected to its projection. ...
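The U-shape warp of Liang and Huang, as summarized in the citation above, can be illustrated with a small sketch. This is an assumption-laden Python illustration, not their implementation: the scalp is approximated by a sphere, and the names `project_onto_scalp` and `u_shape_strip` are hypothetical.

```python
import math

# Hypothetical sketch of the U-shape warp: each vertex of a flat 2D hair
# strip is projected onto a spherical scalp (an assumption; the paper
# projects onto the actual scalp mesh), and each vertex is connected to
# its projection, turning one strip into three polygon meshes.

def project_onto_scalp(vertex, center, radius):
    """Project a strip vertex radially onto a sphere approximating the scalp."""
    dx, dy, dz = (vertex[i] - center[i] for i in range(3))
    d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    s = radius / d
    return (center[0] + dx * s, center[1] + dy * s, center[2] + dz * s)

def u_shape_strip(strip, center=(0.0, 0.0, 0.0), radius=1.0):
    """Given a 2D strip as two vertex rows (left edge, right edge),
    return the three quad meshes of the U-shape: the original strip plus
    one side wall per edge, each wall joining the edge vertices to their
    scalp projections.  Each mesh is a list of vertex pairs (quad rails)."""
    left, right = strip
    left_proj = [project_onto_scalp(v, center, radius) for v in left]
    right_proj = [project_onto_scalp(v, center, radius) for v in right]
    top = list(zip(left, right))            # the original flat strip
    left_wall = list(zip(left_proj, left))  # wall down to the scalp
    right_wall = list(zip(right, right_proj))
    return top, left_wall, right_wall
```

The two walls give the strip visible thickness from the side, which is the volumetric effect the citation describes.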
... Curves representing clusters of hair are generated between the silhouette surface and the scalp. These curves become the spine for polygon strips that represent large portions of hair, similar to the strips used by [16], [17]. ...
Article
Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in all these areas, a broad diversity of approaches are used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling.
... Many approaches have chosen to represent hair wisps with parametric surfaces [KH00, LH03, NT04], generally NURBS. These methods are fairly cheap computationally, because using patches of parametric surfaces (also called hair strips) to represent groups of hairs greatly reduces the complexity of the full head of hair, while retaining maximum flexibility for modeling its shape; as a consequence, this type of approach also allows the created hairstyles to be animated efficiently, although not very realistically, in a later stage (see Section 3.2). ...
... However, the major drawback of 2D-wisp modeling is that the resulting hairstyles are generally not very realistic, because they lack volume (see Figure 1.19, left). To remedy this problem, Liang and Huang [LH03] extrapolate the hair strips by adding lateral faces to them, obtained by projecting each point of the strip onto the scalp and connecting the projected points pairwise to the original points. The resulting "U-shape" of each strip, composed of three polygons, then makes the hairstyle appear more voluminous. ...
Article
Due to the recent widespread use of virtual characters in many fields of the entertainment industry, hair simulation has become a very active research topic in computer graphics. In addition, the physical simulation of hair is attracting greater attention from cosmetic experts, who perceive virtual prototyping as an effective means for developing hair care products. This thesis focuses on two major issues related to the simulation of hair: the interactive simulation of a full head of hair, and the physical realism of the shape and motion of hair. We first develop new algorithms aimed at reducing the cost of calculation inherent in traditional methods for hair simulation. Our approaches exploit for the first time a multi-resolution scheme for hair animation as well as volumetric rendering of long hair, leading to interactive simulations of full heads of hair. Secondly, we propose a realistic physically based model for hair, realized in collaboration with experts in the fields of mechanical modeling and cosmetology. Within this partnership, we took part in the development of an accurate mechanical model for a single hair strand, which is based upon the theory of Kirchhoff on elastic rods. Subsequently, this model is scaled to a full head of hair, and then applied to the realistic generation of static natural hairstyles, as well as to the dynamic simulation of hair of various ethnic origins. Finally, we validate our approach through a series of comparisons between virtual and real hair.
... Noble and Tang use NURBS surfaces for their hair geometry [14]. In [8], the hair is modeled in 2D NURBS strips, where the visible strips are tessellated and warped into U-shape strips. In contrast, Kim and Neumann use an optimized hierarchy of hair clusters [5]. ...
Conference Paper
This paper describes a new hair rendering technique for Anime characters. The overall goal is to improve current cel shaders by introducing a new hair model and hair shader. The hair renderer is based on a painterly rendering algorithm which uses a large number of particles. The hair model is rendered twice: first for generating the silhouettes and second for shading the hair strands. In addition, we also describe a modified technique for specular highlighting. Most of the rendering steps (except the specular highlighting) are performed on the GPU and take advantage of recent graphics hardware. However, since the number of particles determines the quality of the hair shader, a large number of particles is used, which reduces the performance accordingly.
... Noble and Tang use NURBS surfaces for their hair geometry [27] (See figure 3.6(a)). In [20], the hair is modeled in 2D NURBS strips, where the visible strips are tessellated and warped into U-shape strips. In contrast, Kim and Neumann use an optimized hierarchy of hair clusters [13]. ...
Article
This thesis describes two new techniques for enhancing the rendering quality of cartoon characters in toon-shading applications. The proposed methods can be used to improve the output quality of current cel shaders. The first technique, which uses 2D image-based algorithms, enhances the silhouettes of the input geometry and reduces computer-generated artefacts. The silhouettes are found using the Sobel filter and reconstructed by Bezier curve fitting. The intensity of the reconstructed silhouettes is then modified to create a stylised appearance. In the second technique, a new hair model based on billboarded particles is introduced. This method is found to be particularly useful for generating toon-like specular highlights for hair, which are important in cartoon animations. The whole rendering framework is implemented in C++ using the OpenGL API. OpenGL extensions and GPU programming are used to take advantage of the functionality of currently available graphics hardware. The programming of graphics hardware is done using Cg, a high-level shader language.
Chapter
This paper proposes an approach that generates hair guides from a sculpted 3D mesh, thus accelerating hair creation. Our approach relies on the local curvature on a sculpted mesh to discover the direction of the hair on the surface. We generate hair guides by following the identified strips of polygons matching hair strands. To improve the quality of the guides, some are split to ensure they correspond to hairstyles ranging from straight to wavy, while others are connected so that they correspond to longer hair strands. In order to automatically attach the guides to the scalp of a 3D head, a vector field is computed based on the directions of the guides, and is used in a backward growth of the guides toward the scalp. This approach is novel since there is no state-of-the-art method that generates hair from a sculpted mesh. Furthermore, we demonstrate how our approach works on different hair meshes. Compared to several hours of manual work to achieve a similar result, our guides are generated in a few minutes.
Thesis
3D reconstruction of a face from a 2D image is a fundamental computer vision problem that attracts considerable interest because of its many applications, such as surveillance, healthcare, video games, cinema, etc. This thesis presents two hybrid approaches to 3D face reconstruction from a 2D color image that combine deep learning and geometric techniques. To cope with the lack of data needed to train neural networks, a generator of synthetic human heads was designed, which made it possible to build a database of facial images with several maps containing information characteristic of facial geometry. Both 3D face reconstruction approaches use CNNs to produce two types of maps from an image of a human face. The first approach produces a normal-field map and a map of the gradient magnitude of the face's depth map. These two outputs are then used in a normal-field integration process based on weighted least squares to generate a 3D facial surface. In the second approach, the neural network produces a landmark map and a normal-field map similar to the one produced by the first approach. They are used in a regression process that seeks the best linear combination of the bases of a parametric model (3DMM), thereby obtaining the 3D model that fits the face in the input image.
Article
Simulating detailed dynamic hair in real time is a challenging problem. Existing methods either simplify the strand dynamics or reduce the degrees of freedom at the cost of rich motion details. We present a real-time simulation for animating hair with high-fidelity details. Our approach efficiently captures the inextensibility, bending and torsion mechanics of strands, while producing stiction/repulsion and detailed real-time collision effects. To efficiently capture self-interactions, we factorize the phenomenon into a coarse, globally coupled volumetric view and a detailed collision view. The coarse behaviors are solved with an Eulerian method via position-based density control, while detailed collisions are efficiently handled with temporally coherent link updates. We further provide a fast implicit integration via heptadiagonal matrix decomposition, which provides two to three orders of magnitude of acceleration over traditional methods. The efficiency and effectiveness of our method are validated by simulating various motions of hair in various styles.
Article
With the tremendous performance increase of today's graphics technologies, visual details of digital humans in games, online virtual worlds, and virtual reality applications are becoming significantly more demanding. Hair is a vital component of a person's identity, and can provide strong cues about age, background and even personality. More and more researchers focus on hair modeling in the fields of computer graphics and virtual reality. Traditional methods are physics-based simulations controlled by parameter settings; the computation is expensive, and the construction process is non-intuitive and difficult to control. Conversely, image-based methods have the advantages of fast modeling and high fidelity. This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-view hair modeling, static hair modeling from multiple images, video-based dynamic hair modeling, and the editing and reuse of hair modeling results. We first summarize the single-view approaches, which can be divided into orientation-field and data-driven methods. The static methods from multiple images and the dynamic methods are then reviewed in the next two parts. In the last section, we also review the editing and reuse of hair modeling results. The future development trends and challenges of image-based methods are discussed at the end.