Under the rubber-band method, when a user backs their hand out of a virtual object, the hand avatar stays as close as possible to the user's real hand, sticking to the object's surface while the real hand moves, until the penetration is cleared.
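As a rough illustration, this behavior can be sketched in one dimension (hypothetical `rubber_band_hand` helper; the object is assumed to occupy x < 0):

```python
def rubber_band_hand(real_x, surface_x=0.0):
    """1D sketch of the rubber-band behavior: while the real hand is
    inside the object (real_x < surface_x), the avatar sticks to the
    surface, staying as close to the real hand as the surface allows;
    once the real hand backs out, tracking is 1:1 again."""
    return max(real_x, surface_x)
```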

Citations

... Distortion of the human-avatar mapping has frequently been employed in 3D interaction methods, even without haptic feedback [28,29,30]. For example, one of the earliest methods enhanced the effectiveness of user interactions by deliberately altering the mapping between real and virtual bodies to stretch the virtual arm. ...
Article
Full-text available
Objective. A key challenge of virtual reality (VR) applications is to maintain a reliable human-avatar mapping. Users may lose the sense of controlling (sense of agency), owning (sense of body ownership), or being located (sense of self-location) inside the virtual body when they perceive erroneous interaction, i.e. a Break-in-Embodiment (BiE). However, the ways to detect such an inadequate event are currently limited to questionnaires or spontaneous reports from users. The ability to implicitly detect BiE in real time enables us to adjust the human-avatar mapping without interruption. Approach. We propose and empirically demonstrate a novel Brain-Computer Interface (BCI) approach that monitors the occurrence of BiE based on the users' brain oscillatory activity in real time to adjust the human-avatar mapping in VR. We collected EEG data from 37 participants while they performed reaching movements with their avatar under different magnitudes of distortion. Main results. Our BCI approach seamlessly predicts the occurrence of BiE under varying magnitudes of erroneous interaction. The mapping is customized by a BCI-reinforcement-learning (RL) closed-loop system to prevent BiE from occurring. Furthermore, a non-personalized BCI decoder generalizes to new users, enabling a "Plug-and-Play" ErrP-based non-invasive BCI. The proposed VR system allows customization of the human-avatar mapping without personalized BCI decoders or spontaneous reports. Significance. We anticipate that our newly developed VR-BCI can be useful for maintaining an engaging avatar-based interaction and a compelling immersive experience while detecting when users notice a problem and seamlessly correcting it.
... It has also been shown that users are more sensitive to decreases than to increases in hand velocity [39]. Building on this, more natural hand placements for when users' virtual hands collide with objects have been proposed [40]. In addition, hand-redirection methods using fixed gains have also been studied [41]. ...
Article
During mid-air interactions, common approaches (such as the god-object method) typically rely on visually constraining the user's avatar to avoid visual interpenetrations with the virtual environment in the absence of kinesthetic feedback. This paper explores two methods that influence how the position mismatch (positional offset) between users' real and virtual hands is recovered when releasing contact with virtual objects. The first method (sticky) constrains the user's virtual hand until the mismatch is recovered, while the second method (unsticky) employs an adaptive offset-recovery method. In the first study, we explored the effect of positional offset and of motion alteration on users' behavioral adjustments and perception. In a second study, we evaluated variations in the sense of embodiment and the preference between the two control laws. Overall, both methods presented similar results in terms of performance and accuracy, yet positional offsets strongly impacted motion profiles and users' performance. Both methods also resulted in comparable levels of embodiment. Finally, participants usually expressed strong preferences toward one of the two methods, but these choices were individual-specific and did not appear to be correlated solely with characteristics external to the individuals. Taken together, these results highlight the relevance of exploring the customization of motion-control algorithms for avatars.
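The two control laws can be contrasted in a per-frame 1D sketch (hypothetical names and `recovery` rate; not the paper's exact formulation):

```python
def sticky_release(virtual_pos, real_pos, eps=1e-3):
    # Sticky: the virtual hand stays constrained where it was until the
    # real hand comes back to it, i.e. until the mismatch is recovered.
    if abs(real_pos - virtual_pos) < eps:
        return real_pos   # offset recovered: resume direct tracking
    return virtual_pos

def unsticky_release(virtual_pos, real_pos, recovery=0.2):
    # Unsticky: absorb a fraction of the remaining offset every frame,
    # so the virtual hand drifts back onto the real hand smoothly.
    offset = virtual_pos - real_pos
    return real_pos + (1.0 - recovery) * offset
```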
... The benefit of full-body embodiment indeed comes from the flexibility of mapping real movements of the user into virtual movements of their avatar, and introducing artificial distortions helps in resolving other frequent VR conflicts, such as displacing the avatar's hand to avoid going through virtual objects. The most common approach for movement distortion is to introduce a distortion to the location of the virtual hand and to assess to what extent subjects are tolerant of the introduced discrepancies [17][18][19]. These studies could identify a threshold, called a detection threshold, under which participants do not perceive that a distortion is applied between the apparent movement of the avatar (seen from the first-person perspective in VR) and the actual movement they performed. ...
... The goal of this block was to measure if subjects were able to detect the distortion in each trial. To this end, we asked the following forced choice Yes/No question, as done in similar previous studies [18,22]: ...
Article
Full-text available
Providing Virtual Reality (VR) users with a 3D representation of their body complements the experience of immersion and presence in the virtual world with the experience of being physically located and more personally involved. A full-body avatar representation is known to induce a Sense of Embodiment (SoE) in this virtual body, which is associated with improvements in task performance, motivation and motor learning. Recent experimental research on embodiment provides useful guidelines, indicating the extent of discrepancy tolerated by users and, conversely, the limits and disruptive events that lead to a break in embodiment (BiE). Building on previous work on the limits of agency under movement distortion, this paper describes, studies and analyses the impact of a very common yet overlooked embodiment limitation linked to articular limits when performing a reaching movement. We demonstrate that perceiving the articular limit when fully extending the arm provides users with additional internal proprioceptive feedback which, if not matched by the avatar's movement, leads to the disruptive realization of an incorrect posture mapping. This study complements previous work on self-contact and visuo-haptic conflicts and emphasizes the risk of disrupting the SoE when distorting users' movements or using a poorly calibrated avatar.
... The first studies focused on the hand only. For instance, Burns et al. [24], [29] add a constant offset between the subject's real hand and the virtual one. Since only the hand is shown (a floating hand), this offset is applied to the hand alone. ...
... Most of these studies could identify a threshold under which participants do not perceive that a distortion has been applied between the apparent movement of the avatar (seen from the first-person perspective in VR) and the actual movement they performed. Indeed, Burns et al. [24] show that, in the absence of haptic feedback, visual feedback prevails over proprioceptive feedback whenever there is a perceptual conflict [24], [29]. However, the SoE's critical importance for controlling and owning a virtual body is not explicitly studied in these prior works. ...
... There are several points of this experiment that could be improved upon. Instead of determining one threshold per staircase, the Point of Subjective Equality (PSE) could have been used, as in Burns et al. [29], to find one single threshold for all the staircases by using a classical psychometric function [52], [53], [54], [55]. However, most of the time, these methods are used offline, as opposed to our approach, which was online. ...
Article
Full-text available
In Virtual Reality, having a virtual body opens a wide range of possibilities, as the participant's avatar can appear quite different from oneself for the sake of the targeted application (e.g. for perspective-taking). In addition, the system can partially manipulate the displayed avatar movement through some distortion to make the overall experience more enjoyable and effective (e.g. training, exercising, rehabilitation). Despite its potential, an excessive distortion may become noticeable and break the feeling of being embodied in the avatar. Past research has shown that individuals have a relatively high tolerance to movement distortions and a great variability of individual sensitivities to distortions. In this paper, we propose a method taking advantage of Reinforcement Learning (RL) to efficiently identify the magnitude of the maximum distortion that does not get noticed by an individual (hereafter the detection threshold). We show through a controlled experiment with subjects that the RL method finds a more robust detection threshold than the adaptive staircase method, i.e. it is better able to prevent subjects from detecting the distortion when its amplitude is at or below the threshold. Finally, the associated majority-voting system makes the RL method able to handle more noise within the forced-choice input than the adaptive staircase. This last feature is essential for future use with physiological signals, as the latter are even more susceptible to noise. This would allow embodiment to be calibrated individually to increase the effectiveness of the proposed interactions.
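For reference, the adaptive staircase baseline that the RL method is compared against can be sketched as a simple 1-up/1-down rule (illustrative step size and names, not the study's exact procedure):

```python
def staircase_update(level, detected, step=0.05, floor=0.0):
    """1-up/1-down staircase: decrease the distortion magnitude after a
    'detected' response, increase it after a miss; the reversals then
    cluster around the level where detection is at chance."""
    level = level - step if detected else level + step
    return max(level, floor)
```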
... Nonetheless, the most frequent approach has been to introduce a distortion to the location of the virtual hand and assess to what extent the subjects were tolerant to the introduced discrepancies [4,8,9]. These studies could identify a threshold under which participants do not perceive that a distortion has been applied between the apparent movement of the avatar (seen in first-person perspective in VR) and the actual movement they performed. ...
... These studies could identify a threshold under which participants do not perceive that a distortion has been applied between the apparent movement of the avatar (seen from the first-person perspective in VR) and the actual movement they performed. For instance, Burns et al. [8] have shown that, in the absence of haptic feedback, visual feedback prevails over proprioceptive feedback (posture) whenever there is a perceptual conflict [8,9]. However, the SoE's critical importance for controlling and owning a virtual body was not explicitly studied in these prior works. ...
... where I is the original intensity of the particular stimulation, ∆I is the increment to it required for the change to be perceived (the difference threshold, or JND (Just-Noticeable Difference)), and k is a constant. Previous studies [7][8][9][13] have demonstrated that the discrepancy threshold (or the self-attribution threshold) is analogous to the difference threshold. ...
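The relation described above is Weber's law, ∆I/I = k, which directly predicts the difference threshold at any intensity; a minimal sketch:

```python
def weber_fraction(I, delta_I):
    # Weber's law: the just-noticeable increment grows in proportion
    # to the base intensity, so delta_I / I stays constant (= k).
    return delta_I / I

def predicted_jnd(I, k):
    # Difference threshold predicted at intensity I for a given k.
    return k * I
```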
... This effect is achieved by decreasing or increasing the (physical) amplitude of the hand movement necessary to reach the target as compared to the apparent (visual) amplitude of the task. Consequently, when a reaching movement is helped (or hindered), the physical movement becomes shorter (or longer) in amplitude than visual inspection of the task suggests. One of the most salient features of the distortion during the movement is that the virtual visual feedback of the hand may move faster or slower than the physical hand. ...
Article
Full-text available
This study explores the extent to which individuals embodied in Virtual Reality tend to self-attribute the movements of their avatar. More specifically, we tested subjects performing goal-directed movements and distorted the mapping between user and avatar movements by decreasing or increasing the amplitude of the avatar hand movement required to reach a target, while keeping the apparent amplitude (the visual distance) fixed. In two experiments, we asked subjects to report whether the movement they saw matched the movement they performed, or asked them to classify whether a distortion was making the task easier or harder to complete. Our results show that subjects perform poorly at detecting discrepancies when the nature of the distortion is not made explicit, and that subjects are biased toward self-attributing distorted movements that make the task easier. These findings, in line with previous accounts of the sense of agency, demonstrate the flexibility of avatar embodiment and open new perspectives for the design of guided interactions in Virtual Reality.
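Such a distortion amounts to a gain between physical and virtual displacement while the visual target distance stays fixed; a minimal sketch (hypothetical function name):

```python
def required_physical_amplitude(visual_distance, gain):
    """With virtual displacement = gain * physical displacement and the
    visual target distance held fixed, a gain > 1 shortens the physical
    reach (the task is 'helped') and a gain < 1 lengthens it
    (the task is 'hindered')."""
    return visual_distance / gain
```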
... Burns et al. have explored two aspects of visuo-proprioceptive mismatch in order to propose interaction techniques that prevent interpenetration of the virtual hand with the virtual environment [10]. The first concerns the perception of a location mismatch between a physical and a virtual hand, demonstrating that a person may be strikingly unaware of visuo-proprioceptive mismatches that are gradually introduced over a long period of time [12]. ...
Conference Paper
Full-text available
We investigate the self-attribution of distorted pointing movements in immersive virtual reality. Participants had to complete a multi-directional pointing task in which the visual feedback of the tapping finger could be deviated in order to increase or decrease the motor size of a target relative to its visual appearance. This manipulation effectively makes the task easier or harder than the visual feedback suggests. Participants were asked whether the seen movement was equivalent to the movement they performed, and whether they have been successful in the task. We show that participants are often unaware of the movement manipulation, even when it requires higher pointing precision than suggested by the visual feedback. Moreover, subjects tend to self-attribute movements that have been modified to make the task easier more often than movements that have not been distorted. We discuss the implications and applications of our results.
... For visual feedback during palpation interaction, we use a visual-coupling approach between the device tool center point TCP and the visual model center point VCP, where VCP transforms the virtual hand's geometry (see Figure 2). One option would be to optimize the discrepancy by position and velocity [7]. Our approach is loosely related to virtual coupling [11] and the god-object proxy [45] and works as follows. ...
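A common way to realize such a coupling, in the spirit of virtual coupling, is a spring-damper between TCP and VCP; a 1D sketch with hypothetical gains:

```python
def coupling_force(tcp_pos, vcp_pos, tcp_vel, vcp_vel, k=500.0, d=5.0):
    """Spring-damper virtual coupling (1D): the force pulls the visual
    proxy (VCP) toward the device tool center point (TCP); the negated
    force is what a haptic device would render to the user."""
    return k * (tcp_pos - vcp_pos) + d * (tcp_vel - vcp_vel)
```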
Article
Full-text available
Palpation is a physical examination technique where objects, e.g., organs or body parts, are touched with the fingers to determine their size, shape, consistency and location. Many medical procedures utilize palpation as a supplementary interaction technique, and it can therefore be considered an essential basic method. However, palpation is mostly neglected in medical training simulators, with the exception of very specialized simulators that solely focus on palpation, e.g., for manual cancer detection. In this article we propose a novel approach to enable haptic palpation interaction for virtual reality-based medical simulators. The main contribution is an extensive user study conducted with a large group of medical experts. To provide a plausible simulation framework for this user study, we contribute a novel and detailed interaction algorithm for palpation with tissue dragging, which utilizes a multi-object force algorithm to support multiple layers of anatomy and a pulse force algorithm for simulation of an arterial pulse. Furthermore, we propose a modification for an off-the-shelf haptic device by adding a lightweight palpation pad to support a more realistic finger-grip configuration for palpation tasks. The user study itself has been conducted on a medical training simulator prototype with a specific procedure from regional anesthesia, which strongly depends on palpation. The prototype utilizes a co-rotational finite-element approach for soft-tissue simulation and provides bimanual interaction by combining the aforementioned techniques with needle insertion for the other hand. The results of the user study suggest reasonable face validity of the simulator prototype and in particular validate the medical plausibility of the proposed palpation interaction algorithm.
... We would also like to support grasping in environments without force feedback, for example, in systems where the hand is optically tracked and worn or complex devices are not desired. Burns et al. [3] proposed a virtual-hand management method (MACBETH) that attempted to correct sticking. However, it manages only the virtual hand's base position (no orientation or articulation management is defined), so it could not be applied to whole-hand grasping without extensions. ...
... It was reported in [5] that maintaining an offset between virtual and real hands reduced user performance. Burns et al. [3] proposed a third metaphor, MACBETH. It involves incremental motion, but it removes the position discrepancy by introducing a velocity discrepancy that is similarly detectable. ...
... Our results demonstrate that maintaining a pose discrepancy (of finger configurations) with subsequent reduction can improve user performance and subjective experience. This augments the results of Burns et al. [3], who introduced a discrepancy in hand base position and showed improved user ratings with no loss in performance for a hand-navigation task. Two other experiments that were not detailed here were a pick-and-drop experiment and a cube-alignment experiment. ...
Conference Paper
We present a method for improved release of whole-hand virtual grasps. It addresses the problem of objects “sticking” during release after the user's (real) fingers interpenetrate virtual objects due to the lack of physical motion constraints. This problem may be especially distracting for grasp techniques that introduce mismatches between tracked and visual hand configurations to prevent visual interpenetration. Our method includes heuristic analysis of finger motion and a transient incremental motion metaphor to manage a virtual hand during grasp release. We incorporate the method into a spring model for whole-hand virtual grasping. We show that the new spring model improves speed and accuracy for a targeted ball-drop task, and users report a subjective preference for the new behavior. In contrast to a standard spring-based grasping method, measured release quality does not depend notably on object size.
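The incremental motion idea behind such release methods can be sketched as hiding a small per-frame offset correction inside the user's own motion (schematic, with hypothetical parameters; not the authors' exact heuristic):

```python
def incremental_recovery(offset, real_speed, ratio=0.5, dt=1.0 / 90.0):
    """Shrink the real/virtual position offset each frame at a rate
    proportional to the real hand's speed, so the correction appears
    only as a small velocity discrepancy rather than a visible jump."""
    reduction = min(abs(offset), ratio * real_speed * dt)
    return offset - reduction if offset > 0 else offset + reduction
```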