Figure 7 - uploaded by Lei Wei
Collision detection framework.


Source publication
Conference Paper
Full-text available
Abstract We propose how to define complex geometry, appearance and tangible physical properties of X3D and VRML objects using mathematical functions straight in the scene definition code or in loadable libraries. We can also touch and feel surfaces of X3D and VRML objects, as well as convert them to solid, tangible objects. We can define tangi...

Context in source publication

Context 1
... of polygons which are obtained at the end of the X3D/VRML visualization pipeline for both standard and function-defined objects. However, for the function-defined objects it can work much faster if the collision detection is computed at the level of function definitions. Hence, both methods are currently implemented and offered to the user (Fig. 7). When a haptic device is used to explore the virtual scene, as soon as the haptic interaction point (HIP) collides with a virtual object in the scene, the whole node will immediately be sent back to the haptic plug-in. By checking the definition of the object, the plug-in will get to know whether the object is a standard object (Shape ...
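The function-level collision test mentioned in this context can be sketched as a membership predicate on an implicit surface: a point is in contact when the defining function is non-positive. This is a minimal illustrative sketch, not the paper's implementation; the sphere function and test points are assumed for the example:

```python
def sphere(x, y, z, r=1.0):
    """Implicit definition of a sphere of radius r: f <= 0 inside or on the surface."""
    return x * x + y * y + z * z - r * r

def hip_collides(f, hip):
    """Membership predicate: the haptic interaction point (HIP) is in contact when f(HIP) <= 0."""
    return f(*hip) <= 0.0

print(hip_collides(sphere, (0.5, 0.5, 0.5)))  # inside the sphere  -> True
print(hip_collides(sphere, (2.0, 0.0, 0.0)))  # outside the sphere -> False
```

Evaluating a single analytic function per object is what makes this test cheaper than walking a polygon mesh, which is the speed-up the context describes.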

Similar publications

Article
Full-text available
In modern urban society, the human brain is not sufficiently trained to deal with problems which require 3D perception. As a result, when teaching subjects richly infused with mathematics, it is usually a challenge for learners to follow the instructor and visualize how mathematical concepts are reflected in 3D geometry and colors. We have propo...

Citations

... This language serves to represent 3D worlds and contains information needed by visual rendering systems. Sourin and Wei [SW08] proposed an extension of this language by adding haptic rendering techniques. One purpose of this language is to transmit virtual objects and their associated haptic rendering algorithms over the internet. ...
Article
Haptic technology, which stimulates the sense of touch, has been used for years in virtual reality and teleoperation applications to enhance user immersion. Yet it is still underused in audiovisual systems such as movie theaters. The objective of this thesis is thus to exploit the potential of haptics for audiovisual content. In the first part of this Ph.D. thesis, we address haptic rendering in a video viewing context. We first present a new device providing 6-degrees-of-freedom motion effects. Instead of moving the whole user's body, as is traditionally done with motion platforms, only the head and hands are stimulated. This device thus enriches the audiovisual experience. Then we focus on the haptic rendering of haptic-audiovisuals. The combination of haptic effects and video sequences raises new challenges for haptic rendering. We introduce a new haptic rendering algorithm to tackle these issues. The second part of this Ph.D. is dedicated to the production of haptic effects. We first present a novel authoring tool. Three editing methods are proposed to create motion effects and to synchronize them with a video. In addition, the tool allows users to preview motion effects thanks to a force-feedback device. Then we study combinations of haptic feedback and audiovisual content. In a new approach, Haptic Cinematography, we explore the potential of haptic effects to create new effects dedicated to movie makers.
... A Virtual Immersive Haptic Mathematics [8] ...
Conference Paper
The emergence of low-cost haptic devices and the evolution of technologies for developing virtual reality environments for the web have motivated the development of tools that allow haptic systems to be used together with web applications and browsers. The availability of web haptic systems opens the possibility of multisensory experiences for internet users. This paper analyzes technologies for integrating haptic and web systems, exposing the features of each tool in order to find the best strategy for incorporating virtual touch into Museu3l using haptic technology, so that it becomes one of the first virtual museums to make haptic interaction with a work of art possible during a web visit.
... This language serves to represent 3D worlds and contains information needed by visual rendering systems. Sourin and Wei [55] proposed an extension of this language by adding haptic rendering techniques. One purpose of this language is to transmit virtual objects and their associated haptic rendering algorithms over the internet. ...
Article
Full-text available
Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing need for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of enhancing the audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.
... For example, it is rather trivial to implement collision detection for implicit functions using the contact-point membership predicate. In [5,6], we proposed an approach for haptic collision with implicit functions which does not need primitive-level input. In [7], another implicit-function-based approach was proposed for rendering large geometry models at a 1000 Hz rate. ...
Article
Polygon- and point-based models dominate virtual reality. These models also shape haptic rendering algorithms, which are often based on collision with polygons. With application to dual-point haptic devices for operations like grasping, complex polygon- and point-based models make the collision detection procedure slow. As a result, the system cannot achieve interactive rates for force rendering. To solve this issue, we use mathematical functions to define and implement geometry (curves, surfaces and solid objects), visual appearance (3D colours and geometric textures) and various tangible physical properties (elasticity, friction, viscosity, and force fields). The function definitions are given as analytical formulas (explicit, implicit and parametric), function scripts and procedures. We propose an algorithm for haptic rendering of virtual scenes including mutually penetrating objects with different sizes and an arbitrary location of the observer, without prior knowledge of the scene to be rendered. The algorithm is based on casting multiple haptic rendering rays from the Haptic Interaction Point (HIP), and it builds a stack to keep track of all objects colliding with the HIP. The algorithm uses collision detection based on implicit-function representation of the object surfaces. The proposed approach allows us to be flexible when choosing the actual rendering platform, and it can also be easily adapted for dual-point haptic collision detection as well as force and torque rendering. The function-defined objects and the parts constituting them can be used together with other common definitions of virtual objects such as polygon meshes, point sets, voxel volumes, etc. We implemented an extension of X3D and VRML as well as several standalone application examples to validate the proposed methodology.
Experiments show that our goals of fast, accurate rendering and compact representation can be fulfilled in various application scenarios and on both single- and dual-point haptic devices.
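The stack of objects colliding with the HIP that this abstract describes can be sketched as follows. This is a simplified illustration under assumed names (hip_update, the example objects); the real algorithm additionally casts multiple haptic rendering rays, which is omitted here:

```python
def hip_update(stack, objects, hip):
    """Return the updated stack of objects currently containing the HIP.

    objects maps a name to an implicit function f(x, y, z) with f <= 0 inside.
    Objects the HIP has exited are popped; newly entered objects are pushed.
    """
    inside = [name for name, f in objects.items() if f(*hip) <= 0.0]
    # pop objects the HIP has left, keeping the entry order of the rest
    stack = [name for name in stack if name in inside]
    # push newly entered objects on top
    for name in inside:
        if name not in stack:
            stack.append(name)
    return stack

objects = {
    "ball": lambda x, y, z: x * x + y * y + z * z - 1.0,  # unit sphere
    "slab": lambda x, y, z: abs(z) - 0.25,                # thin horizontal slab
}
stack = []
# the HIP moves from outside, through the penetrating objects, into the ball only
for hip in [(2, 0, 0), (0.5, 0, 0), (0, 0, 0), (0, 0, 0.9)]:
    stack = hip_update(stack, objects, hip)
print(stack)  # -> ['ball']
```

Because the objects mutually penetrate, the stack (rather than a single "current object") is what lets the renderer determine which surfaces and interiors the HIP is inside at any moment.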
... For example, it is rather trivial to implement collision detection for implicit functions using the contact-point membership predicate. In [5,6], we proposed an approach for haptic collision with implicit functions which does not need primitive-level input. In [7], another implicit-function-based approach was proposed for rendering large geometry models at a 1000 Hz rate; however, it retrieves surface information from volumetric data for penalty-based force generation. ...
Conference Paper
Polygon- and point-based models dominate virtual reality. These models also shape haptic rendering algorithms, which are often based on collision with polygons. We use mathematical functions to define and implement geometry (curves, surfaces and solid objects), visual appearance (3D colors and geometric textures) and various tangible physical properties (elasticity, friction, viscosity, and force fields). The function definitions are given as analytical formulas (explicit, implicit and parametric), function scripts and procedures. Since the defining functions are very small, we can efficiently exchange them between participating clients in collaborative virtual environments. We propose an algorithm for haptic rendering of virtual scenes including mutually penetrating objects with different sizes and an arbitrary location of the observer, without prior knowledge of the scene to be rendered. The algorithm is based on casting multiple haptic rendering rays from the Haptic Interaction Point (HIP), and it builds a stack to keep track of all objects colliding with the HIP. The algorithm uses collision detection based on implicit-function representation of the object surfaces. The proposed approach allows us to be flexible when choosing the actual rendering platform. The function-defined objects and the parts constituting them can be used together with other common definitions of virtual objects such as polygon meshes, point sets, voxel volumes, etc. We implemented an extension of X3D and VRML which allows for defining complex geometry, appearance and haptic effects in virtual scenes by functions and common polygon-based models, with various object sizes, mutual penetrations, an arbitrary location of the observer and variable precision.
... For implicit functions, it is rather trivial to implement collision detection using the contact-point membership predicate. In [9], we proposed an approach for haptic collision with objects defined by implicit functions which does not need primitive-level input. In [6], another implicit-function-based approach was proposed for rendering large geometry models at a 1000 Hz rate; however, it retrieves surface information from volumetric data for penalty-based force generation. ...
Article
Full-text available
Commonly, surface and solid haptic effects are defined in such a way that they can hardly be rendered together. We propose a method for defining mixed haptic effects including surfaces, solids, and force fields. These haptic effects can be applied to virtual scenes containing various objects, including polygon meshes, point clouds, impostors, layered textures, voxel models, and function-based shapes. Accordingly, we propose a way to identify the location of the haptic tool in such virtual scenes, as well as to consistently and seamlessly determine haptic effects when the haptic tool moves through scenes with objects having different sizes, locations, and mutual penetrations. To provide efficient and flexible rendering of haptic effects, we propose the concurrent use of explicit, implicit and parametric functions, and algorithmic procedures.
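The idea of mixing a solid haptic effect with a force field can be sketched with an implicit function: inside the solid, a penalty force along the function's gradient pushes the tool out, and a field force is added everywhere. This is a hedged illustration only; the stiffness value, the constant "gravity" field, and all helper names are assumptions, not the paper's method:

```python
def grad(f, p, h=1e-4):
    """Central-difference gradient of an implicit function f at point p."""
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def haptic_force(f, field, p, stiffness=50.0):
    """Force-field contribution plus, inside the solid, a penalty force along the gradient."""
    fx, fy, fz = field(p)
    d = f(*p)
    if d <= 0.0:  # tool is inside the solid: push outward
        gx, gy, gz = grad(f, p)
        fx += -stiffness * d * gx
        fy += -stiffness * d * gy
        fz += -stiffness * d * gz
    return (fx, fy, fz)

sphere = lambda x, y, z: x * x + y * y + z * z - 1.0  # unit sphere, f <= 0 inside
gravity = lambda p: (0.0, -9.8, 0.0)                  # example constant force field
print(haptic_force(sphere, gravity, (0.0, 0.5, 0.0)))
```

Because both effects are ordinary functions of the tool position, they compose by simple addition, which is the kind of seamless mixing the abstract argues for.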
... For implicit functions, it is rather trivial to implement collision detection using the contact-point membership predicate. In [9], we proposed an approach for haptic collision with implicit functions which does not need primitive-level input. In [6], another implicit-function-based approach was proposed for rendering large geometry models at a 1000 Hz rate; however, it retrieves surface information from volumetric data for penalty-based force generation. ...
Conference Paper
Commonly, surface and solid haptic effects are rendered separately. We propose a method for defining surface and solid haptic effects, as well as various force fields, in 3D cyberworlds containing mixed geometric models, including polygon meshes, point clouds, image-based billboards, layered textures, voxel models and function-based models of surfaces and solids. We also propose a way to identify the location of the haptic tool in such haptic scenes, as well as to consistently and seamlessly determine haptic effects when the haptic tool moves through scenes with objects having different sizes, locations, and mutual penetrations.
Article
VR applications on the Web demand lightweight virtual models and scenes. Among existing tree modeling methods, rule-based modeling, e.g. L-systems or automata, should be the lightest. This paper presents an interactive approach for users to model trees easily. By adopting the spherical coordinate system, the modified turtle geometry provides a more natural interpretation when modeling trees. Based on a modified version of parametric L-systems, our approach allows users to model trees in a What-You-See-Is-What-You-Get manner. Both theoretical analysis and experimental results show the effectiveness of our proposed solution for lightweight Web3D tree modeling, making it feasible to construct large-scale virtual forest scenes on the Web.
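The rule-based compactness this abstract relies on comes from L-system string rewriting: a few grammar rules expand into an arbitrarily detailed branching structure. The sketch below shows plain (non-parametric) rewriting with a classic branching rule; the rule set is a textbook example, not taken from the paper:

```python
def rewrite(axiom, rules, iterations):
    """Expand an L-system by repeatedly applying string-rewriting rules.

    Symbols without a rule (e.g. the '[' / ']' branch push/pop markers
    and the '+' / '-' turtle turns) are copied through unchanged.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# F draws a segment; [ and ] save/restore the turtle state to form branches.
rules = {"F": "F[+F]F[-F]F"}
print(rewrite("F", rules, 1))  # -> F[+F]F[-F]F
print(len(rewrite("F", rules, 3)))  # the string grows quickly with each iteration
```

The rules themselves stay tiny, which is why transmitting the grammar instead of the expanded mesh keeps Web3D scenes lightweight.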
Article
Full-text available
In modern urban society, the human brain is not sufficiently trained to deal with problems which require 3D perception. As a result, when teaching subjects richly infused with mathematics, it is usually a challenge for learners to follow the instructor and visualize how mathematical concepts are reflected in 3D geometry and colors. We have proposed an approach that allows defining complex geometry, visual appearance and tangible physical properties of virtual objects using the language of mathematical functions. It allows learners to get immersed in the 3D scene and explore the shapes being modeled, both visually and haptically. We illustrate this concept using our function-based extension of X3D and VRML. Besides the definition of objects with mathematical functions straight in the scene file, standard X3D and VRML objects can be converted to tangible ones as well as augmented with function-defined visual appearances. Since the function-defined models are small in size, it is possible to perform collaborative interactive modifications of them with concurrent synchronous visualization at each client computer at any required level of detail.