Figure 2 - uploaded by Anatole Lécuyer
Setup: bimanual interaction with two 6-DoF haptic devices (Virtuose, Haption); the left hand manipulates a virtual bowl, the right hand a virtual pan.

Source publication
Conference Paper
The Virtual Crepe Factory illustrates our novel approach for 6DoF haptic interaction with fluids. It showcases a two-handed interactive haptic scenario: a recipe that uses different types of fluid to make a special pancake, also known as a "crepe". The scenario guides the user through all the steps required to prepare a crepe: from th...

Context in source publication

Context 1
... Crepe Preparation Scenario. As shown in Figure 2, the user holds a 6DoF haptic device in each hand. The scenario can be divided into four distinct, consecutive stages. ...

Citations

Chapter
This chapter discusses topics for which solutions exist but which still pose scientific challenges, along with elements of complexity that merit detailed discussion. It describes physical models to detect collisions and the problem of the virtual human. The chapter then examines the naturalness of the interaction and proposes an analysis of force feedback. It also discusses the 3D interaction loop as the explanatory framework for the scientific challenges surrounding 3D interaction with virtual or mixed environments. This loop comes from the perception-action loop, which is very often used in the literature to explain the challenges in virtual reality and augmented reality. In order to improve interaction, different sensory modalities of the user are brought into play: not only sight, but also hearing and touch are fundamental sensory modalities. The chapter finally identifies the scientific challenges related to these different sensory modalities.
Conference Paper
Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.
Article
The virtual has become a huge field of exploration for researchers: it can assist the surgeon, help the prototyping of industrial objects, simulate natural phenomena, serve as a fantastic time machine, or entertain users through games and movies. Far beyond the visual rendering of the virtual environment alone, Virtual Reality aims at literally immersing the user in the virtual world. VR technologies simulate digital environments with which users can interact and, as a result, perceive through different modalities the effects of their actions in real time. The challenges are huge: the user's motions need to be perceived and to have an immediate impact on the virtual world by modifying its objects in real time. In addition, the targeted immersion of the user is not only visual: auditory and haptic feedback need to be taken into account, merging all of the user's sensory modalities into a multimodal answer. The global objective of my research activities is to improve 3D interaction with complex virtual environments by proposing novel approaches for physically based and multimodal interaction. I have laid the foundations of my work on designing interactions with complex virtual worlds, which place higher demands on the characteristics of the virtual environment. My research can be described along three main research axes inherent to the 3D interaction loop: (1) the physically based modeling of the virtual world, to take into account the complexity of virtual object behavior, their topology modifications, and their interactions; (2) the multimodal feedback for combining the sensory modalities into a global answer from the virtual world to the user; and (3) the design of body-based 3D interaction techniques and devices for establishing the interfaces between the user and the virtual world. All these contributions can be gathered into a general framework within the 3D interaction loop.
By improving all the components of this framework, I aim to propose approaches that could be used in future virtual reality applications, but also more generally in other areas such as medical simulation, gesture training, robotics, virtual prototyping for industry, or web content.
Article
Virtual Reality allows the simulation of, and interaction with, Virtual Environments (VEs) through different sensory modalities. However, interacting with complex physically based VEs, such as non-rigid or large environments, presents many challenges in terms of interaction and sensory feedback. The first part of this Ph.D. thesis addresses haptic and multimodal feedback issues during the manipulation of non-rigid media. We first present a novel approach for 6-degrees-of-freedom haptic interaction with fluids, allowing the generation of force feedback from viscous fluids through arbitrarily shaped rigid bodies. This approach is extended to include haptic interaction with deformable bodies, thus allowing unified haptic interaction with the different states of matter. A perceptual experiment showed that users could efficiently identify the different states through the haptic modality alone. Then, we introduce a novel vibrotactile fluid rendering model, leveraging previous knowledge on fluid sound synthesis. Through this approach, we allow interaction with fluids with multimodal feedback through the vibrotactile, kinesthetic, acoustic, and visual sensory channels. The second part of this Ph.D. thesis addresses interaction issues during walking navigation of large VEs. Since the VE is often larger than the available real workspace, we introduce a novel navigation metaphor that informs users about the real physical boundaries. Using hybrid position/rate control, this technique provides a simple and intuitive metaphor for navigation that is safe from collisions and breaks in immersion. Other workspaces, such as CAVE-like environments, present rotation boundaries due to missing screens; thus, we present three novel metaphors dealing with these additional boundaries. Overall, the evaluation of these navigation techniques showed that they efficiently fulfilled their objectives while being highly appreciated by users.
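The hybrid position/rate control mentioned in the last abstract can be illustrated with a minimal sketch. This is not the thesis's actual implementation; all function names, thresholds, and gains below are illustrative assumptions. The idea is that real motion inside a safe zone of the tracked workspace maps 1:1 onto virtual motion (position control), while leaning beyond the safe-zone boundary adds a velocity proportional to the overshoot (rate control), so the user can traverse a VE larger than the physical space without hitting the walls.

```python
# Hypothetical sketch of hybrid position/rate control for walking navigation.
# Inside the safe region, user motion maps directly to virtual motion
# (position control); beyond the boundary, the overshoot is converted into a
# velocity (rate control). Names, thresholds, and gains are illustrative only.

def hybrid_position_rate(user_pos, prev_user_pos, virtual_pos,
                         boundary_radius=1.5, rate_gain=2.0, dt=1.0 / 60.0):
    """Return the updated 2D virtual position for one frame.

    user_pos, prev_user_pos, virtual_pos: (x, y) tuples in metres,
    with the tracked workspace centred at the origin.
    """
    # Position control: apply the user's real displacement 1:1.
    dx = user_pos[0] - prev_user_pos[0]
    dy = user_pos[1] - prev_user_pos[1]
    vx, vy = virtual_pos[0] + dx, virtual_pos[1] + dy

    dist = (user_pos[0] ** 2 + user_pos[1] ** 2) ** 0.5
    if dist > boundary_radius:
        # Rate control: penetration beyond the safe zone becomes a velocity,
        # pushing the viewpoint onward without further real motion.
        overshoot = dist - boundary_radius
        ux, uy = user_pos[0] / dist, user_pos[1] / dist  # outward direction
        vx += rate_gain * overshoot * ux * dt
        vy += rate_gain * overshoot * uy * dt
    return (vx, vy)
```

Called once per frame, this mapping degrades gracefully: near the centre it behaves like ordinary real walking, and the rate term grows smoothly with boundary penetration, which is one plausible way to avoid abrupt transitions between the two control modes.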