Full pose navigation. The left image shows how to move forward; the right image shows how to rotate right.

Source publication
Conference Paper
Full-text available
Navigating virtual environments usually requires a wired interface, a game console, or a keyboard. The advent of perceptual interface techniques offers a new option: passive, untethered sensing of users' pose and gesture that lets them maneuver through and manipulate virtual worlds. We describe new algorithms for interacting with 3-D environments...

Context in source publication

Context 1
... this model, full normal body movements are used to navigate in the virtual world. Real-world movement has a direct mapping to movement in the virtual world that is intuitive and transparent to the user: if you move forward you move forward in the virtual world, and if you move sideways you move sideways in the virtual world (see figure 3). To move over greater distances than a purely physical mapping from real space into virtual space would allow, we also need relative movement: the speed is set relative to a center point initialized by the tracker when the user walks into the interaction area. Our hypothesis is that novice users will easily learn and be able to use this model. This model uses gestures to control the virtual environment. In the WOz test we observed that some of the subjects intuitively used a pointing gesture to show the system where they wanted to go. We have therefore defined and implemented an interaction model based on pointing gestures, i.e., the user travels in the direction in which they point and the height of the arm controls the speed (see figure 4). Twelve volunteer adult subjects interacted with the HalfLife game played on a standard personal computer (PC) using a large projection screen for display output. Connected to this PC was another PC running the vision system. Using the HalfLife SDK we created a new input device ...
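The pointing-gesture mapping described above (travel direction from the pointing direction, speed from arm height) can be illustrated with a minimal sketch. This is not the authors' implementation; the joint names, height thresholds, and maximum speed are illustrative assumptions for a tracker that delivers 3D shoulder and hand positions.

```python
import numpy as np

def pointing_velocity(shoulder, hand, max_speed=2.0, min_height=0.2, max_height=0.8):
    """Map a pointing gesture to a travel velocity.

    shoulder, hand: 3D joint positions in metres (x, y span the ground plane, z is up).
    The horizontal shoulder-to-hand direction steers travel; the hand's height above
    the shoulder, clamped to [min_height, max_height], scales the speed.
    """
    d = np.asarray(hand, float) - np.asarray(shoulder, float)
    horiz = np.array([d[0], d[1], 0.0])
    n = np.linalg.norm(horiz)
    if n < 1e-6:
        return np.zeros(3)          # arm pointing straight up or down: no travel
    direction = horiz / n
    t = np.clip((d[2] - min_height) / (max_height - min_height), 0.0, 1.0)
    return direction * (t * max_speed)

# Per-frame update of the virtual viewpoint (hypothetical tracker interface):
# camera_pos += pointing_velocity(joints["shoulder_r"], joints["hand_r"]) * dt
```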

Similar publications

Conference Paper
Full-text available
We present a novel algorithm developed for decomposing world-space into arbitrary sided high-order polyhedrons for use as navmeshes or other techniques requiring 3D world spatial decomposition. The Adaptive Space Filling Volumes 3D (ASFV3D) algorithm works by seeding world-space with a series of unit cubes. Each cube is then provided with the oppor...
Article
Full-text available
Labs are needed in science education, yet schools usually lack appropriate conditions to provide hands-on active learning activities. The new technologies of 3D virtual worlds allow the user, through immersive virtual worlds, or metaverses, to experience situations similar to those...
Conference Paper
Full-text available
In this paper we present a Finger Walking in Place (FWIP) interaction technique that allows a user to travel in a virtual world as her/his bare fingers slide on a multi-touch sensitive surface. Traveling is basically realized by translating and rotating the user's viewpoint in the virtual world. The user can translate and rotate a viewpoint by mov-...
Conference Paper
Full-text available
Deictic reference -- pointing at things during conversation -- is ubiquitous in human communication, and should also be an important tool in distributed collaborative virtual environments (CVEs). Pointing gestures can be complex and subtle, however, and pointing is much more difficult in the virtual world. In order to improve the richness of intera...
Thesis
Full-text available
The improvement of virtual reality technologies has enabled increased access to these technologies at lower prices, allowing more studies in this line. This work proposes and studies several navigation and selection techniques in virtual environments using Microsoft Kinect®. This device was chosen because, besides having an accessible price, it...

Citations

... Various types of NUI have been developed and implemented for navigating VR environments in research prototypes. Recently, LaViola et al. [3] explored how people could use wearable shoes to navigate an immersive VR environment, whereas, in Tollmar et al.'s work [7], researchers analyzed the space of perceptual interface abstractions for full-body navigation in a screen specifically through pointing gestures. Although researchers [1] [2] [4] [6] [5] have explored the design space of body gestures, they primarily pre-defined and implemented the control gestures and navigation mapping in VR limited only to hands or arms. ...
Conference Paper
Full-text available
Immersive Virtual Reality (VR) as a research tool provides numerous opportunities to do and see things in a virtual world that are not possible in the real world. Being able to fly is an experience that humans have long dreamed of achieving. In this paper, we introduce a VR game where participants can use their body gestures as a Natural User Interface (NUI) to control flying movements via a Microsoft Kinect. The goal of this research is to explore the navigational experience of flying via body gestures: what people like to do, what they want to be, and most importantly, how they map their gestures to navigation control easily in a VR environment.
... The design methods can aid in creating better performing interaction techniques, especially for more complex applications, and may certainly increase the attractiveness of interaction. Furthermore, reflecting human potential matches 3D user interface design very well: the human sensorimotor system is viewed without constraining physical movements ("full-body interaction" [2]) or content representation, as is the case in 2D interface design. ...
... In direct relation, the article often refers to work performed in the field of multi-sensory processing [19] [20] to explain underlying human-factors principles. Also, some parts of this article overlap with general thoughts on what researchers have called a full-body interface [2] or multi-sensory system platforms [21]. The majority of full-body and multi-sensory interfaces are activity- or experience-driven and mostly do not take into account the full human potential. ...
Conference Paper
Full-text available
Driven in particular by today's game console technology, the number of 3D interaction techniques that integrate multiple modalities is steadily increasing. However, many developers do not fully explore and deploy the sensorimotor possibilities of the human body, partly because of methodological and knowledge limitations. In this paper, we propose a design approach for 3D interaction techniques which considers the full potential of the human body. We show how “human potential” can be analyzed and how such analysis can be instrumental in designing new or alternative multi-sensory and potentially full-body interfaces.
... The former is to fit sample points of the data to sample points of the template. To find a correspondence between two sets of points, several closest point and optimization algorithms can be used [2, 12]. The latter is to fit the template body ...
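As a concrete illustration of the closest-point correspondence step mentioned above, here is a minimal sketch; it is not the specific algorithms cited in [2, 12], and it assumes the two point sets are given as NumPy arrays, using a k-d tree for the nearest-neighbour query.

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_point_correspondences(data_pts, template_pts):
    """For every data point, return the index of (and distance to) the closest template point."""
    tree = cKDTree(template_pts)          # spatial index over the template samples
    dists, idx = tree.query(data_pts)     # Euclidean nearest neighbours
    return idx, dists

# Typical use in an ICP-style loop: pair scanned body points with template
# points, solve for the aligning transform (or template deformation), repeat.
```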
Article
Full-text available
This paper describes a framework for modeling 3D human pose from multiple calibrated cameras, which serves as the core part of a player-pose-driven spatial game system. Firstly, a voxel-based human model is constructed by multi-view volumetric reconstruction. Secondly, applying a hierarchical approach with a set of heuristics, fast indirect body-model fitting algorithms fit a predefined human model to the reconstructed data, from which human poses are modeled and semantically interpreted as control inputs to the game.
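The multi-view volumetric reconstruction step described above can be illustrated with a small voxel-carving (visual hull) sketch. This is not the paper's implementation; the voxel grid, the 3x4 camera projection matrices, and the binary silhouette masks are assumed inputs.

```python
import numpy as np

def carve_voxels(voxels, projections, silhouettes):
    """Keep only voxels that project into the foreground silhouette of every camera.

    voxels: (N, 3) world-space voxel centres
    projections: list of 3x4 camera projection matrices
    silhouettes: list of HxW boolean foreground masks, one per camera
    """
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    keep = np.ones(len(voxels), dtype=bool)
    for P, mask in zip(projections, silhouettes):
        uvw = homog @ P.T                                   # project into this view
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        on_body = np.zeros(len(voxels), dtype=bool)
        on_body[inside] = mask[v[inside], u[inside]]
        keep &= on_body                                     # must be foreground in every view
    return voxels[keep]
```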
... Tollmar et al. [19], who also used vision-based motion tracking, reported that immersed subjects preferred real body motions and head motions for travel and orientation over indirect virtual interfaces. In their study, subjects also preferred to trigger actions using hand gestures rather than voice commands. ...
Article
Full-text available
Triage is a medical term that describes the process of prioritizing and delivering care to multiple casualties within a short time frame. Because of the inherent limitations of traditional methods of teaching triage, such as paper-based scenarios and the use of actors as standardized patients, computer-based simulations and virtual reality (VR) scenarios are being advocated. We present our system for VR triage, focusing on the design and development of a pose and gesture based interface that allows a learner to navigate in a virtual space among multiple simulated casualties. The learner is also able to manipulate virtual instruments effectively in order to complete required training tasks.
... Therefore, several multi-view approaches for body pose estimation have been published during the past few years. Most of them try to fit an articulated body model to 3D data [5, 6, 7]. But even then, fast tracking of such articulated structures is far from trivial. ...
Conference Paper
Full-text available
We present an algorithm for the real-time detection and interpretation of pointing gestures, performed with one or both arms. The pointing gestures are used as an intuitive tracking interface for a user interacting with an immersive virtual environment. We have defined the pointing direction to correspond to the line of sight connecting the eyes and the pointing fingertip. If a pointing gesture is being performed, the algorithm detects and tracks the position of the user's eyes and fingertip and computes the origin and direction of that gesture with respect to a real-world coordinate system. The algorithm is based on the body silhouettes extracted from multiple views and uses point correspondences to reconstruct in 3D the points of interest. The system doesn't require initial poses, special clothing, or markers.
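A minimal sketch of the pointing definition used here, a ray from the eyes through the fingertip, could look as follows; the 3D eye and fingertip positions are assumed to come from the multi-view reconstruction, and the function name is illustrative.

```python
import numpy as np

def pointing_ray(eyes, fingertip):
    """Return (origin, unit direction) of the pointing gesture in world coordinates."""
    eyes, fingertip = np.asarray(eyes, float), np.asarray(fingertip, float)
    direction = fingertip - eyes
    return eyes, direction / np.linalg.norm(direction)
```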
Conference Paper
We present a simple and intuitive method of user interaction, based on pointing gestures, which can be used with video avatars in a remote collaboration. By connecting the head and fingertip of a user in 3D space we can identify the direction in which they are pointing. Stereo infrared cameras in front of the user, together with an overhead camera, are used to find the user’s head and fingertip in a CAVE™-like system. The position of the head is taken to be the top of the user’s silhouette, while the location of the user’s fingertip is found directly in 3D space by searching the images from the stereo cameras in real time for a match with its location in the overhead camera image. The user can interact with the first object that collides with the pointing ray. In an experiment, the result of the interaction is shown together with the video avatar that is visible to a remote collaborator.
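Selecting the first object that collides with the pointing ray can be illustrated with a small ray-casting sketch; approximating scene objects as bounding spheres is an assumption made here for illustration, not the paper's method.

```python
import numpy as np

def first_hit(origin, direction, objects):
    """Return the name of the nearest object hit by the ray, or None.

    origin: ray origin (head position), direction: unit pointing direction,
    objects: list of (name, centre, radius) bounding spheres.
    """
    origin = np.asarray(origin, float)
    best, best_t = None, np.inf
    for name, centre, radius in objects:
        oc = origin - np.asarray(centre, float)
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c                  # discriminant of the ray-sphere quadratic
        if disc < 0:
            continue                      # ray misses this sphere
        t = -b - np.sqrt(disc)            # nearest intersection along the ray
        if 0 <= t < best_t:
            best, best_t = name, t
    return best

# Example: first_hit(head_pos, point_dir, [("chair", (1.0, 0.2, 0.5), 0.4)])
```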