
Abstract

This paper describes an efficient and intuitive method of manipulating tree shapes with two hands. A user can directly manipulate the shape of a tree model through a well-designed two-handed interface: the intended manipulation command and the corresponding partial shape of the tree model are inferred and applied automatically from the user's approximate two-handed indication. Experimental results show that the proposed two-handed method supports effective manipulation of tree shape models.
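As a rough illustration of this kind of inference, the sketch below picks the branch segment nearest to the two hand positions and guesses the command from how the hands move relative to each other. The Branch structure, the distance metric, and the thresholds are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch: resolve an approximate two-handed indication into a
# target branch and a manipulation command. Not the authors' implementation;
# the Branch structure, distance metric, and thresholds are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Branch:
    start: np.ndarray  # 3D endpoint of the branch segment
    end: np.ndarray    # 3D endpoint of the branch segment

def point_segment_distance(p, a, b):
    """Distance from point p to the segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-9), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def select_branch(branches, left_hand, right_hand):
    """Pick the branch whose segment is closest to both hand positions."""
    def cost(br):
        return (point_segment_distance(left_hand, br.start, br.end) +
                point_segment_distance(right_hand, br.start, br.end))
    return min(branches, key=cost)

def classify_command(prev_left, prev_right, left, right, stretch_thresh=0.02):
    """Guess the intended command from the relative motion of the hands."""
    d_prev = np.linalg.norm(prev_right - prev_left)
    d_now = np.linalg.norm(right - left)
    if d_now - d_prev > stretch_thresh:
        return "elongate"   # hands moving apart
    if d_prev - d_now > stretch_thresh:
        return "shorten"    # hands moving together
    return "bend"           # hands moving jointly
```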
... Hinckley's experiment [8] verified that two hands provide the user with more information and can structure how the user thinks about a task. Onishi [9] discussed a manipulation technique for 3D plant model editing with bimanual symmetric input. ...
Conference Paper
This paper describes a two-handed interaction framework for desktop virtual environments, called TH3D. Its design goal is to support device-independent, task-centered, and fast development of two-handed applications on desktop computers. Existing research on two-handed interaction mainly focuses on specific interaction techniques and fails to achieve this goal. Our framework is characterized by its multi-layer design applied to two-handed interaction, including normalization of raw input data, interaction primitive and task construction, and a built-in, efficient two-handed interaction technique that integrates the benefits of egocentric and exocentric interaction. With this framework, application developers can focus on analyzing interaction tasks and designing interaction techniques without worrying about the variety of devices or the mapping between interaction task and technique. While the current implementation is tested with an ordinary Spacemouse and a 2D mouse, the framework can be easily ported to other device combinations.
... Both Grossman et al. [1] and Owen et al. [2] use 6DOF trackers to control the design of a 3D curve and surface. Moreover, Onishi et al. [11] discussed a manipulation technique for 3D plant model editing with a bimanual, symmetric input. (3) Navigation: The WIM technique provides an exocentric view of the whole virtual world, so users can easily position their viewpoint based on a camera-in-hand or target-based metaphor. ...
Article
This paper describes a two-handed interaction framework for desktop virtual environments, called TH3D. The design goal of this framework is to support the device-independent, task-centered, and fast development of two-handed applications on desktop computers. Existing research on two-handed interaction focuses mainly on specific interaction techniques and fails to achieve this goal. Our framework is characterized by its multi-layer design applied to two-handed interaction, including the normalization of raw input data, interaction primitive and task construction, and an efficient, built-in, two-handed interaction technique that integrates the benefits of both egocentric and exocentric interactions. With this framework, application developers can pay more attention to the analysis of interaction tasks and the design of interaction techniques without having to consider the variety of devices and the mapping between interaction task and technique. The current implementation is tested with an ordinary Spacemouse and a 2D mouse, but the framework can easily be ported to other device combinations.
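As a loose illustration of the multi-layer idea described above, the sketch below normalizes two raw device streams into a single device-independent bimanual primitive before any task-level code sees them. The class names, the read_normalized() method, and the stub driver are assumptions, not TH3D's actual API.

```python
# Hypothetical sketch of a device-normalization layer of the kind the TH3D
# abstract describes: raw events from two devices are mapped onto a common
# bimanual primitive before task-level code sees them. Names are illustrative.
from dataclasses import dataclass

@dataclass
class HandState:
    position: tuple      # normalized (x, y, z) coordinates in [0, 1]
    buttons: frozenset   # identifiers of currently pressed buttons

@dataclass
class BimanualPrimitive:
    dominant: HandState
    nondominant: HandState

class StubDriver:
    """Minimal stand-in device wrapper so the sketch runs without hardware."""
    def read_normalized(self) -> HandState:
        return HandState(position=(0.5, 0.5, 0.5), buttons=frozenset())

class InputNormalizer:
    """Adapts a concrete device pair to the device-independent primitive."""
    def __init__(self, dominant_driver, nondominant_driver):
        self.dominant_driver = dominant_driver        # e.g. a 2D mouse wrapper
        self.nondominant_driver = nondominant_driver  # e.g. a Spacemouse wrapper

    def poll(self) -> BimanualPrimitive:
        return BimanualPrimitive(
            dominant=self.dominant_driver.read_normalized(),
            nondominant=self.nondominant_driver.read_normalized(),
        )

# Task-level code consumes BimanualPrimitive objects only, which is the point
# of the multi-layer design described above.
primitive = InputNormalizer(StubDriver(), StubDriver()).poll()
```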
Conference Paper
Selection in 3D virtual environments can vary wildly depending on the context of the selection. Various scene attributes such as object velocity and scene density will likely impact the user's ability to accurately select an object. While there are many existing 3D selection techniques that have been well studied, they all tend to be tailored to work best in a particular set of conditions, and may not perform well when these conditions are not met. As a result, designers must compromise by taking a holistic approach to choosing a primary technique: one that works well overall but may fall short in at least one scenario. We present a software framework that allows a flexible method of leveraging several selection techniques, each performing well under certain conditions. From these, the best one is utilized at any given moment to provide the user with an optimal selection experience across more scenarios and conditions. We performed a user study comparing our framework to two common 3D selection techniques, Bendcast and Expand. We evaluated the techniques across three levels of scene density and three levels of object velocity, collecting accuracy and timing data across a large sample of participants. From our results, we conclude that our auto-selection approach is promising, but there are several characteristics of the auto-selection process that can introduce drawbacks, which need to be addressed and minimized.
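A minimal sketch of the kind of dispatcher such a framework might use is given below: each frame, the current scene density and object velocity decide which selection technique is active. The thresholds, the normalized inputs, and the fallback ray-casting technique are assumptions; Bendcast and Expand are the techniques named in the abstract.

```python
# Hypothetical dispatcher: each technique is assumed to work best under
# certain conditions, and the framework switches as the scene changes.
# Thresholds and the normalization of inputs are illustrative assumptions.
def pick_selection_technique(scene_density, object_velocity,
                             dense_thresh=0.5, fast_thresh=0.5):
    """Return the name of the technique to activate for the current frame.

    Both inputs are assumed to be normalized to [0, 1].
    """
    if object_velocity > fast_thresh:
        # Moving targets: a technique that bends the ray toward targets helps.
        return "bendcast"
    if scene_density > dense_thresh:
        # Cluttered scenes: progressive refinement disambiguates occlusion.
        return "expand"
    # Sparse, slow scenes: plain ray-casting is fast and accurate enough.
    return "raycast"

# Example: a dense but mostly static scene falls back to progressive refinement.
print(pick_selection_technique(scene_density=0.8, object_velocity=0.1))
```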
Conference Paper
In this study, we implemented a stable and intuitive detach method named “Touch & Detach” for complex 3D virtual objects. In typical modeling software, parts of a complex 3D object are grouped for efficient operation and ungrouped for observing or manipulating a part in detail. Our method uses elastic metaphors to prevent incorrect operations and to improve the operational feel and responsiveness. In addition, our method can represent the connection between parts, and its strength, by simulating a virtual elastic band connecting the parts. This helps users understand the relationship between the parts of a complex virtual object. This paper presents the details of our proposed method and a user study.
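The sketch below illustrates one possible reading of the elastic-band metaphor: a Hooke-style band pulls a grasped part back toward its group, and the part detaches only once the band is stretched past a strength-dependent limit. The force model and the break rule are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical elastic-band sketch; the spring model and break rule are
# assumptions made for illustration, not the paper's equations.
import numpy as np

def band_force(anchor, part_pos, rest_length, stiffness):
    """Restoring force pulling the grasped part back toward its anchor."""
    offset = anchor - part_pos
    length = np.linalg.norm(offset)
    if length < 1e-9:
        return np.zeros(3)
    stretch = max(length - rest_length, 0.0)
    return stiffness * stretch * (offset / length)

def should_detach(anchor, part_pos, rest_length, strength):
    """Break the band once the stretch exceeds a strength-dependent limit."""
    stretch = np.linalg.norm(anchor - part_pos) - rest_length
    return stretch > strength * rest_length

# Example: a weakly connected part detaches sooner than a strongly connected one.
print(should_detach(np.zeros(3), np.array([0.0, 0.0, 0.3]), 0.1, strength=1.0))
```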
Article
Toolglass™ widgets are new user interface tools that can appear, as though on a transparent sheet of glass, between an application and a traditional cursor. They can be positioned with one hand while the other positions the cursor. The widgets provide a rich and concise vocabulary for operating on application objects. These widgets may incorporate visual filters, called Magic Lens™ filters, that modify the presentation of application objects to reveal hidden information, to enhance data of interest, or to suppress distracting information. Together, these tools form a see-through interface that offers many advantages over traditional controls. They provide a new style of interaction that better exploits the user's everyday skills. They can reduce steps, cursor motion, and errors. Many widgets can be provided in a user interface, by designers and by users, without requiring dedicated screen space. In addition, lenses provide rich context-dependent feedback and the ability to view details and context simultaneously. Our widgets and lenses can be combined to form operation and viewing macros, and can be used over multiple applications.
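As a rough sketch of the click-through interaction, the code below maps a dominant-hand click into the coordinate frame of the sheet positioned by the nondominant hand, finds the widget under the click, and applies its command to the application object beneath it. The Rect, Widget, and scene-object interfaces are illustrative assumptions, not the Toolglass implementation.

```python
# Hypothetical click-through sketch: data structures are illustrative only;
# scene objects are assumed to expose a Rect-valued .bounds attribute.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float
    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class Widget:
    bounds: Rect                      # placement on the movable sheet
    apply: Callable[[object], None]   # the command the widget carries

def click_through(click, sheet_origin, widgets, scene_objects):
    """Apply the widget under the click to the scene object beneath it."""
    # Transform the click into the sheet's coordinate frame (the sheet is
    # positioned by the nondominant hand, the click by the dominant hand).
    lx, ly = click[0] - sheet_origin[0], click[1] - sheet_origin[1]
    widget = next((w for w in widgets if w.bounds.contains(lx, ly)), None)
    target = next((o for o in scene_objects if o.bounds.contains(*click)), None)
    if widget is not None and target is not None:
        widget.apply(target)
```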
Conference Paper
We present an approach to control information flow in object-oriented systems. The decision of whether an information flow is permitted or denied depends on both the authorizations specified on the objects and the process by which information is obtained ...
Article
We discuss a two-handed user interface designed to support three-dimensional neurosurgical visualization. By itself, this system is a “point design,” an example of an advanced user interface technique. In this work, we argue that in order to understand why interaction techniques do or do not work, and to suggest possibilities for new techniques, it is important to move beyond point design and to introduce careful scientific measurement of human behavioral principles. In particular, we argue that the common-sense viewpoint that “two hands save time by working in parallel” may not always be an effective way to think about two-handed interface design because the hands do not necessarily work in parallel (there is a structure to two-handed manipulation) and because two hands do more than just save time over one hand (two hands provide the user with more information and can structure how the user thinks about a task). To support these claims, we present an interface design developed in collaboration with neurosurgeons which has undergone extensive informal usability testing, as well as a pair of formal experimental studies which investigate behavioral aspects of two-handed virtual object manipulation. Our hope is that this discussion will help others to apply the lessons in our neurosurgery application to future two-handed user interface designs.
Article
One of the recent trends in computer input is to utilize users' natural bimanual motor skills. This article further explores the potential benefits of such two-handed input. We have observed that bimanual manipulation may bring two types of advantages to human-computer interaction: manual and cognitive. Manual benefits come from increased time-motion efficiency, due to twice as many degrees of freedom being simultaneously available to the user. Cognitive benefits arise as a result of reducing the load of mentally composing and visualizing the task at the unnaturally low level imposed by traditional unimanual techniques. Area sweeping was selected as our experimental task. It is representative of what one encounters, for example, when sweeping out the bounding box surrounding a set of objects in a graphics program. Such tasks cannot be modeled by Fitts' Law alone and have not been previously studied in the literature. In our experiments, two bimanual techniques were compared with the conventional one-handed GUI approach. Both bimanual techniques employed the two-handed “stretchy” technique first demonstrated by Krueger in 1983. We also incorporated the “Toolglass” technique introduced by Bier et al. in 1993. Overall, the bimanual techniques resulted in significantly faster performance than the status quo one-handed technique, and these benefits increased with the difficulty of mentally visualizing the task, supporting our bimanual cognitive advantage hypothesis. There was no significant difference between the two bimanual techniques. This study makes two types of contributions to the literature. First, practically, we studied yet another class of transaction where significant benefits can be realized by applying bimanual techniques, and we have done so using easily available commercial hardware. Second, the results contribute to our understanding of why bimanual interaction techniques have an advantage over unimanual techniques. A literature review on two-handed computer input and some of the relevant bimanual human motor control studies is also included.
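The two-handed "stretchy" sweep lends itself to a very small sketch: each hand drives one corner of the selection rectangle, so position and size are specified in a single parallel motion. The object representation (a center attribute) is an assumption made for illustration.

```python
# Hypothetical sketch of a two-handed "stretchy" area sweep: each hand cursor
# controls one corner of the rectangle. Object structure is an assumption.
def sweep_rectangle(left_hand_xy, right_hand_xy):
    """Return the axis-aligned rectangle spanned by the two hand cursors."""
    (x1, y1), (x2, y2) = left_hand_xy, right_hand_xy
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def swept_objects(objects, rect):
    """Objects whose centers fall inside the swept rectangle."""
    xmin, ymin, xmax, ymax = rect
    return [o for o in objects
            if xmin <= o.center[0] <= xmax and ymin <= o.center[1] <= ymax]

# Example: both corners move at once, so the sweep needs only one gesture.
print(sweep_rectangle((0.2, 0.8), (0.7, 0.3)))
```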
Conference Paper
This paper describes a case study of building a prototype of an immersive three-dimensional (3-D) modeler that supports simple two-handed operations. Designing 3-D objects in a virtual environment has a number of advantages for 3-D geometry creation over designing with traditional computer-aided design (CAD) tools. In order to enhance the human-computer interaction in a virtual workspace, two-handed spatial input has been incorporated into a few 3-D designing applications. However, existing 3-D designing tools do not sufficiently utilize two-handed interaction for enhancing the interface. Our prototype immersive modeler, VLEGO, employs some features of toy blocks to give flexible two-handed interaction for 3-D design. Features of VLEGO can be summarized as follows: Firstly, VLEGO supports various two-handed operations and hence makes the design environment intuitive and efficient. Secondly, possible locations and orientations of primitives are discretely limited so that the user can arrange objects accurately with ease. Finally, the system automatically avoids collisions among primitives and adjusts their positions. As a result, precise design of 3-D objects can be achieved easily by using a set of two-handed operations in an intuitive way. This paper describes the design and implementation of VLEGO as well as an experiment for examining the effectiveness of two-handed interaction.
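The discrete placement that the VLEGO abstract mentions can be illustrated with a simple snapping sketch, shown below; the grid size and the 90-degree orientation steps are assumptions, not VLEGO's actual constraints.

```python
# Hypothetical sketch of discretely limited placement: positions snap to a
# block-sized grid and orientations to 90-degree steps. Both choices are
# illustrative assumptions.
import math

def snap_position(pos, grid=1.0):
    """Snap a 3D position to the nearest grid point."""
    return tuple(round(c / grid) * grid for c in pos)

def snap_yaw(yaw_radians, step=math.pi / 2):
    """Snap an orientation about the vertical axis to 90-degree increments."""
    return round(yaw_radians / step) * step

# Example: a roughly placed block lands exactly on the grid.
print(snap_position((1.3, 0.2, 2.8)), snap_yaw(1.4))
```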
Article
Realistic image synthesis of botanical trees has many applications. Since the generation of ‘tree skeletons’ having natural visual impressions is essential to realistic image synthesis, various methods of modelling skeletons, especially growth models, have been presented. However, no one has succeeded in simulating natural tree features which appear in a growth process, such as generation of a round tree crown, a weeping bough, an irregular branching pattern, or regeneration of a crown. This paper demonstrates, by showing several simulated examples, that a growth model having the abilities of heliotropism and dormancy break, which produces shapes of trees adapted to changes in the light environment, is effective in the CG simulation of realistic tree skeletons.
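One way to picture the heliotropism the abstract argues for is a simple directional bias, sketched below, in which each new shoot direction is blended toward the direction of strongest light; the blending weight is an illustrative assumption, not the paper's growth model.

```python
# Hypothetical heliotropic bias: the growth direction is blended toward the
# light direction, which over many steps pulls branches toward well-lit
# regions. The weight and the normalization are illustrative assumptions.
import numpy as np

def heliotropic_direction(growth_dir, light_dir, weight=0.3):
    """Blend the current growth direction toward the light direction."""
    g = growth_dir / np.linalg.norm(growth_dir)
    l = light_dir / np.linalg.norm(light_dir)
    blended = (1.0 - weight) * g + weight * l
    return blended / np.linalg.norm(blended)

# Example: a shoot growing straight up bends slightly toward side light.
print(heliotropic_direction(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.5])))
```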
Conference Paper
Interaction with the environment is a key factor affecting the development of plants and plant ecosystems. In this paper we introduce a modeling framework that makes it possible to simulate and visualize a wide range of interactions at the level of plant architecture. This framework extends the formalism of Lindenmayer systems with constructs needed to model bi-directional information exchange between plants and their environment. We illustrate the proposed framework with models and simulations that capture the development of tree branches limited by collisions, the colonizing growth of clonal plants competing for space in favorable areas, the interaction between roots competing for water in the soil, and the competition within and between trees for access to light. Computer animation and visualization techniques make it possible to better understand the modeled processes and lead to realistic images of plants within their environmental context.
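A toy sketch of the bi-directional exchange described above is given below: a single rewriting step queries the environment (here a stand-in light function) before deciding whether an apex grows and branches or stays dormant. The one-letter alphabet and the light_query interface are assumptions, not the paper's open L-system formalism.

```python
# Hypothetical environment-sensitive L-system step: apices ('A') consult a
# stand-in environment query before branching. The grammar and the index-based
# query are illustrative assumptions, not the paper's formalism.
def rewrite(symbols, light_query, light_threshold=0.3):
    """One derivation step: each apex branches only where light suffices."""
    out = []
    for i, sym in enumerate(symbols):
        if sym == "A":
            if light_query(i) > light_threshold:
                out.extend(list("F[A]A"))   # grow an internode and branch
            else:
                out.append("A")             # stay dormant in shade
        else:
            out.append(sym)                 # internodes and brackets unchanged
    return out

# Example: under uniform bright light every apex branches.
print("".join(rewrite(list("A"), lambda i: 0.8)))
```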
Conference Paper
We integrate into plant models three elements of plant representation identified as important by artists: posture (manifested in curved stems and elongated leaves), gradual variation of features, and the progression of the drawing process from overall silhouette to local details. The resulting algorithms increase the visual realism of plant models by offering an intuitive control over plant form and supporting an interactive modeling process. The algorithms are united by the concept of expressing local attributes of plant architecture as functions of their location along the stems.
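The idea of expressing local attributes as functions of location along the stem can be sketched directly: below, stem width and leaf length are evaluated from a normalized arc-length parameter t in [0, 1]. The specific profile functions are illustrative assumptions.

```python
# Hypothetical profile functions: attributes depend only on normalized position
# t along the stem, so a single function shapes the whole organ. The particular
# taper and leaf-length profiles are illustrative assumptions.
import math

def stem_width(t, base_width=0.05):
    """Taper from the base (t = 0) toward the tip (t = 1)."""
    return base_width * (1.0 - t) ** 1.5

def leaf_length(t, max_length=0.3):
    """Leaves longest mid-stem, shorter near the base and the tip."""
    return max_length * math.sin(math.pi * t)

def sample_stem(n=10):
    """Evaluate the profiles at evenly spaced points along the stem."""
    return [(i / (n - 1), stem_width(i / (n - 1)), leaf_length(i / (n - 1)))
            for i in range(n)]

print(sample_stem(5))
```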