Figure 3 - uploaded by Nadia Magnenat Thalmann
Hierarchical collision and self-collision detection using surface curvature.

Source publication
Article
Full-text available
Since 1986, we have led extensive research on simulating realistic-looking humans. We created Marilyn Monroe and Humphrey Bogart, who met in a café in Montreal. At that time, they did not wear any dress as such. Humphrey's body was made out of a plaster model that had the shape of a suit. Colors on Marilyn's body looked like a dress. Hairs wer...

Similar publications

Article
Full-text available
In this study, artificial neural networks were used to predict the plastic flow behaviour of S355 steel in the process of high-temperature deformation. The aim of the studies was to develop a model of changes in stress as a function of strain, strain rate and temperature, necessary to build an advanced numerical model of the soft-reduction process....
Article
Full-text available
In this article, the thermal imprint process for replication of high-quality microstructures on the surface of a polymer is investigated. Vibrations have previously been employed as an additional measure to enhance the replicability of microstructures into the pre-heated polymer. On the other hand, polymer behavior under the action of vibrations is not suf...
Article
Full-text available
The aim of the research presented in this article is to investigate the frictional resistance of steel sheets with different drawing quality. Friction tests have been carried out using the bending under tension (BUT) test which simulates the contact conditions at the rounded edges of the punch and die in sheet metal forming operations. The effect o...

Citations

... In the process of deciding how to approach the problem, I read a number of research papers on hair modelling. Many of them were too advanced for my current state of knowledge, but I did derive a certain amount of inspiration from [2]. Although my approach ultimately bears very little similarity to that in their paper, I am nonetheless indebted to the authors for providing some food for thought. ...
Article
This paper discusses a method of modelling collisions between long hairs. The hairs are modelled as cubic B-splines, to which chains of cylinders are fitted for the purposes of collision detection. The head on which the hairs rest is modelled as an ellipsoid. The paper explains how to fit cylinder chains to splines, how to test whether and where two cylinder chains intersect, and how to test whether and where a cylinder chain intersects an ellipsoid. It further shows how to make use of these results to model (a necessarily limited amount of) long hair on a human head.
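The cylinder-chain intersection test described in this abstract amounts to a capsule-capsule overlap check: two capped cylinders intersect when the minimum distance between their axis segments is below the sum of their radii. A minimal sketch in Python with NumPy, assuming non-degenerate segments; the function names are illustrative and not taken from the paper:

```python
import numpy as np

def closest_segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments [p1,q1] and [p2,q2] (3D points)."""
    d1, d2 = q1 - p1, q2 - p2
    r = p1 - p2
    a, e = np.dot(d1, d1), np.dot(d2, d2)   # squared segment lengths
    f, c, b = np.dot(d2, r), np.dot(d1, r), np.dot(d1, d2)
    denom = a * e - b * b                    # zero when segments are parallel
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    # Clamp t to the second segment and recompute s if needed.
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def cylinders_intersect(p1, q1, r1, p2, q2, r2):
    # Two capped cylinders overlap iff their axes come closer
    # than the sum of their radii.
    return closest_segment_distance(p1, q1, p2, q2) <= r1 + r2
```

Testing each pair of cylinders along two chains this way is quadratic in the number of segments, which is why the paper pairs it with a hierarchy to prune distant chains.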
... Improper rendering of this delicate structure can result in aliasing that causes a scene to look artificial. Given this fine-level complexity, rendering these fibers with existing techniques such as [1,33,28,16,10,22] is cumbersome, if not impossible. ...
Article
Full-text available
We present a framework for knitwear modeling and rendering that accounts for characteristics that are particular to knitted fabrics. We first describe a model for animation that considers knitwear features and their effects on knitwear shape and interaction. With the computed free-form knitwear configurations, we present an efficient procedure for realistic synthesis based on the observation that a single cross section of yarn can serve as the basic primitive for modeling entire articles of knitwear. This primitive, called the lumislice, describes radiance from a yarn cross section that accounts for fine-level interactions among yarn fibers. By representing yarn as a sequence of identical but rotated cross sections, the lumislice can effectively propagate local microstructure over arbitrary stitch patterns and knitwear shapes. The lumislice accommodates varying levels of detail, allows for soft shadow generation, and capitalizes on hardware-assisted transparency blending. These modeling and rendering techniques together form a complete approach for generating realistic knitwear.
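The core geometric idea behind the lumislice, representing yarn as a sequence of identical but progressively rotated cross-sections, can be sketched in a few lines. The following Python/NumPy fragment is a hypothetical illustration (a straight, z-aligned yarn with a fixed twist per slice), not the paper's implementation:

```python
import numpy as np

def sweep_cross_section(profile_2d, num_slices, spacing, twist_per_slice):
    """Place rotated copies of one 2D yarn cross-section along a straight yarn.

    profile_2d: (n, 2) array of points describing a single cross-section.
    Returns a list of (n, 3) arrays, one per slice.
    """
    slices = []
    for i in range(num_slices):
        angle = i * twist_per_slice
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])
        pts = profile_2d @ rot.T                 # rotate the slice in its plane
        z = np.full((len(pts), 1), i * spacing)  # advance along the yarn axis
        slices.append(np.hstack([pts, z]))
    return slices
```

In the actual system the slices would follow the free-form yarn curve of the stitch pattern rather than a straight axis, and each slice carries precomputed radiance rather than raw geometry.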
... Although a large body of work deals with modeling [9,5,20,18,21], animating [9,5,18] and rendering [15,14,19,13] human hair, few articles treat the question of its acquisition. Likewise, the extensive research on "Shape from Shading" [3,16] only addresses the case of relatively continuous surfaces, and doesn't offer techniques suited to hair. ...
Article
We introduce an image-based method for modeling a specific subject's hair. The principle of the approach is to study the variations of hair illumination under controlled illumination. The use of a stationary viewpoint and the assumption that the subject is still allows us to work with perfectly registered images: all pixels in an image sequence represent the same portion of the hair, and the particular illumination profile observed at each pixel can be used to infer the missing degree of directional information. This is accomplished by synthesizing reflection profiles using a hair reflectance model, for a number of candidate directions at each pixel, and choosing the orientation that provides the best profile match. Our results demonstrate the potential of this approach, by effectively reconstructing accurate hair strands that are well highlighted by a particular light source movement.
... Improper rendering of this delicate structure can result in aliasing that causes a scene to look artificial, and given this fine-level complexity, rendering down or fluff with existing techniques such as [1,30,26,16,10,21] is cumbersome, if not impossible. Moreover, the appearance of down changes in detail with different viewing distances. ...
Conference Paper
Full-text available
We present a method for efficient synthesis of photorealistic free-form knitwear. Our approach is motivated by the observation that a single cross-section of yarn can serve as the basic primitive for modeling entire articles of knitwear. This primitive, called the lumislice, describes radiance from a yarn cross-section based on fine-level interactions — such as occlusion, shadowing, and multiple scattering — among yarn fibers. By representing yarn as a sequence of identical but rotated cross-sections, the lumislice can effectively propagate local microstructure over arbitrary stitch patterns and knitwear shapes. This framework accommodates varying levels of detail and capitalizes on hardware-assisted transparency blending. To further enhance realism, a technique for generating soft shadows from yarn is also introduced.
... Another example is the dog fur in 101 Dalmatians [2], which was represented using a stochastic (rather than geometric) model. However, rendering hair is computationally expensive, and the majority of the methods to date are too slow for interactive use (see Thalmann et al. [10] for a survey). In an interactive setting, Van Gelder and Wilhelms [11] showed that various parameters of fur can be manipulated in real time. ...
Conference Paper
We introduce a method for real-time rendering of fur on surfaces of arbitrary topology. As a pre-process, we simulate virtual hair with a particle system, and sample it into a volume texture. Next, we parameterize the texture over a surface of arbitrary topology using "lapped textures" --- an approach for applying a sample texture to a surface by repeatedly pasting patches of the texture until the surface is covered. The use of lapped textures permits specifying a global direction field for the fur over the surface. At runtime, the patches of volume textures are rendered as a series of concentric shells of semi-transparent medium. To improve the visual quality of the fur near silhouettes, we place "fins" normal to the surface and render these using conventional 2D texture maps sampled from the volume texture in the direction of hair growth. The method generates convincing imagery of fur at interactive rates for models of moderate complexity. Furthermore, the scheme allows real-time modification of viewing and lighting conditions, as well as local control over hair color, length, and direction.
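The concentric-shell step of this method extrudes the base mesh along its vertex normals, once per shell, and renders each copy with a semi-transparent slice of the volume texture. A minimal sketch of the geometric part in Python/NumPy; the helper name and parameters are illustrative assumptions, not the authors' API:

```python
import numpy as np

def shell_vertices(vertices, normals, num_shells, fur_length):
    """Build the vertex positions of concentric fur shells.

    Shell i is the base mesh offset by (i / num_shells) * fur_length
    along the (unit) vertex normals; shell 0 is the skin itself.
    """
    shells = []
    for i in range(num_shells + 1):
        offset = fur_length * i / num_shells
        shells.append(vertices + offset * normals)
    return shells
```

At render time each shell would be drawn back-to-front with alpha blending, sampling the corresponding layer of the volume texture; the "fins" handle silhouettes, where the shells' slices become nearly edge-on and thin out visually.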
Chapter
Hair adds compelling richness to computer graphics scenes. This paper describes techniques for lighting and rendering short hair in real-time on current PC graphics hardware. Level-of-detail representations for drawing fur span the viewing distance from close-ups using procedurally generated alpha-blended lines, to mid and far views using volumetric textures, and on to distant views using anisotropic texture map rendering. Real-time lighting with soft-edged shadows is consistent across the level-of-detail representations.
Article
This contribution describes the semi-automatic creation of highly realistic flexible 3D models of participants for distributed 3D videoconferencing systems. The proposed technique uses a flexible mesh template surrounding an interior skeleton structure which is based on a simplified human skeleton. The vertices of this template are arranged in rigid rings along the bones of the skeleton. Using 3D data obtained by a shape-from-silhouettes approach, the size and shape of the mesh template are adapted to the real person. Texture mapping of the adapted mesh using real camera images leads to a natural impression. The mesh organization in rigid rings allows an efficient surface deformation according to the skeleton movements. Once the resulting model is transmitted, it can be animated subsequently using the simple parameter set of the interior skeleton structure. Results obtained with real image data confirm the suitability of the animated person models, in terms of realism and efficiency, for 3D videoconferencing applications.
Article
This contribution describes the creation of highly realistic 3D models of participants for distributed 3D videoconferencing systems. These models consist of a flexible triangular mesh surrounding an interior skeleton structure, which is based on a simplified human skeleton. The vertices of the predefined mesh template are arranged in rigid rings along the bones of the skeleton. Using 3D data obtained by a stereoscopic approach, the size and shape of this initial mesh are adapted to the real person. Adaptation allows the model to be textured from real images, leading to a natural impression. The mesh organization in rigid rings provides an efficient way to deform the surface according to the skeleton movements. The resulting model is transmitted once and subsequently animated using the simple parameter set of the interior skeleton structure.
Article
This paper describes part of the pipeline which we have implemented for the creation of hair and fur for the upcoming motion picture "Stuart Little". In particular, we discuss the two effects of producing wet fur and broken-up dry fur. We assume that our animal models consist of a connected set of NURBS patches, which define the skin onto which the individual hairs are placed. A wet fur look is achieved by clumping together neighboring hairs within certain clumping areas on the skin. The definition of these clumping areas can be either static or animated, the latter to simulate spouts of water hitting parts of the fur coat, making it increasingly wet. A broken-up fur look is generated by breaking hairs along certain fur-tracks on the skin.