Virtual Reality Interfaces for Virtual Environments
Lévis Thériault, Jean-Marc Robert, and Luc Baron
École Polytechnique de Montréal
P.O. Box 6079, Station Centre-ville, Montréal, Québec H3C 3A7
{levis.theriault, jean-marc.robert, luc.baron}@polymtl.ca
Abstract. In this article we define and classify virtual
reality interfaces (VRI). The goal is to provide those
interested in such interfaces with a solid and unified
framework on VRI. This framework gives an overview of
VRI, and allows one to see the different categories of
VRI, the relations between them, the levels of analysis,
and the terms used. The classification starts from the fact
that, to exist and be used in a virtual world, the objects of
this world first have to be created and to allow interaction
with the user. This led us to define two sides in the
classification. The Designer side concerns the modeling
of objects and includes modeling interfaces. The User
side concerns the interaction with the objects of the
world, and includes three sets of interfaces: the sensorial
interfaces to perceive the world, the motor skills
interfaces to act in that world, and the sensorimotor
interfaces which combine the two types of activities.
Keywords: Human-Computer Interfaces, Virtual
Environments, Virtual Reality Interfaces, Virtual Reality
Systems, Modeling Interfaces, Sensorial Interfaces, Motor
Skills Interfaces.
1 Introduction and Motivation
In this paper we define and classify virtual reality
interfaces (VRI), with the goal of providing authors,
designers, developers, professors, and users with a solid
and unified framework on VRI [1][2][3]. This framework
gives an overall picture of the different families of VRI,
and of the categories and subcategories of VRI, the
relations that exist between them, the levels of analysis,
and the terms used. Moreover, we briefly describe the
devices and systems that form the VRI, their functioning,
roles and principal functionalities, and the supporting
technologies. In parallel, we identify relevant sources of
information (articles, books, reviews, Web sites, etc.) on
VRI.
The major works of Burdea and Coiffet [4] and Fuchs et al. [5] are rich sources of information on VRI that were of great help in this undertaking. For instance, they present definitions and the principles of virtual reality (VR), the systems, devices and technologies that form virtual reality systems (VRS), the manufacturers, the costs of the products, and some companies in this new domain. We completed the data collection on VRI with the works of [6]-[21]. Despite the value of these works, there was still a
need for a synthetic overall picture of VRI, for a detailed
classification of VRI, and for a clear distinction between
the designer’s activities and the user’s activities with
VRI.
The paper is structured as follows. It presents a
definition of VR and the conditions to be satisfied to have
a VRS. It describes the VRI classification: the underlying
rationale, the families, categories and sub-categories of
VRI, the devices and systems that form the VRI, and the
supporting technologies. A short conclusion highlights
the contribution of this paper.
2 Virtual Reality
The concept of “virtual reality” was invented by Heilig in 1952; the term “virtual reality” itself was introduced by Jaron Lanier in the 1980s and has been in use since then. The term is controversial, with the consequence that nobody agrees on a formal definition. However, several criteria allow one to understand its nature and scope.
Recently, Fuchs et al. [5] have defined virtual reality as
being “an aggregation of techniques allowing real time
interaction (1) with a virtual world (2), with behavioral
interfaces (3) capable of pseudo-natural immersion (4) of
the user(s) within this environment”. They assert that only
the installations that respect these four conditions are
virtual reality systems (VRS). In the next paragraphs, we
explain each of these conditions.
(1) Real time interaction is achieved when the user does not perceive a time lag between his/her actions within the virtual environment and the sensorial response of this environment. The lag must be around 100 ms or less to preserve the impression of real time; the acceptable delay depends on the senses and the motor responses involved (a timing sketch is given at the end of this section). The authors stress that navigation is only one particular interaction: the user interacts in the virtual world by virtually moving him/herself with a specific command in a changing virtual environment. If the user does not command his/her displacement him/herself, then he/she is a passive spectator and we are no longer in VR.
(2) The two major problems in VR concern the modeling or digitization of the virtual world, and the interface between the user and this world.
(3) To operate in a virtual world, one has to use behavioral interfaces. These are composed of sensorial and motor skills interfaces. Sensorial interfaces present the evolution of the virtual world to the user’s senses. Motor skills interfaces transmit the user’s actions to the virtual world. The number and types of interfaces to be used depend on the objectives pursued by the application.
(4) The user must experience the most effective pseudo-natural immersion possible in the virtual world. Immersion within a virtual world cannot be natural because we have learned to behave naturally only within the real world, not within a virtual one (there is a sensorimotor and mental skew, hence the term “pseudo”). The sensation of immersion is subjective and depends on the application and the devices that are used (interfaces, software, etc.).
In this article we adopt this definition of VR and the
four conditions that it includes since they seem to be the
most comprehensive ones.
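To make condition (1) concrete, the following C++ sketch (C/C++ being among the languages cited later for VR development) shows how a simulation loop might monitor the action-to-response lag against the roughly 100 ms budget. It is a minimal illustration: the device-reading and rendering calls are hypothetical placeholders, not a real VR API.

#include <chrono>
#include <cstdio>

// Hypothetical placeholders for device input and scene rendering.
bool read_user_action() { return true; }  // e.g., poll a tracker or a glove
void update_and_render() {}               // simulate objects, draw the scene

int main() {
    using clock = std::chrono::steady_clock;
    const double budget_ms = 100.0;  // real-time threshold from condition (1)
    for (int frame = 0; frame < 5; ++frame) {
        auto t0 = clock::now();
        if (read_user_action())
            update_and_render();     // sensorial response to the user's action
        auto t1 = clock::now();
        double lag_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        if (lag_ms > budget_ms)
            std::printf("frame %d: lag %.1f ms exceeds budget\n", frame, lag_ms);
    }
    return 0;
}

In a complete system the measured lag would also include device acquisition and display latency, not only computation time.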
2.1 Basic Features of VRS
VRS [22] allow users (e.g., scientists, engineers, physicians, trainees) to interact with data in a different manner, in order to better understand and solve complex real problems. Real time interaction with a virtual scene requires that the system be able to carry out a number of basic operations such as:
- real time management of the scene objects stored in memory, and ongoing updating of the scene;
- simulation of the objects’ behavior;
- processing of the 3D images of the scene at a certain frequency (typically 15 Hz);
- generation of the sounds associated with the graphic database, if any;
- management of the navigation model in the database (i.e., when one walks, flies, zooms, etc.);
- control of the system’s input-output peripherals, to allow the user to interact with objects of the scene, trigger actions, and receive the image, the sound, or other adapted sensorial impressions;
- etc.
These operations require major processing, visualization, and manipulation capabilities from the system that supports the virtual world; the sketch below illustrates how they fit together in one loop.
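As a rough illustration of the operations listed above, the following C++ skeleton arranges them in a frame loop. Every subsystem call is a hypothetical stand-in; the names and the three-frame run are ours, not part of any real VRS.

#include <cstdio>

// Hypothetical subsystem calls; a real VRS would implement each of these.
void poll_io_peripherals() {}    // input-output control for user interaction
void update_scene_objects() {}   // real time management of in-memory objects
void simulate_behaviors() {}     // simulation of the objects' behavior
void apply_navigation_model() {} // walking, flying, zooming, etc.
void render_3d_images() {}       // image processing at ~15 Hz
void generate_sounds() {}        // sounds tied to the graphic database, if any

int main() {
    const int frames = 3;              // a real loop would run until exit
    for (int i = 0; i < frames; ++i) {
        poll_io_peripherals();
        update_scene_objects();
        simulate_behaviors();
        apply_navigation_model();
        render_3d_images();
        generate_sounds();
    }
    std::puts("VRS loop finished");
    return 0;
}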
3 Classification of VRI
A virtual world is composed of a coherent aggregation of modeled 3D objects with which one must be able to interact in a virtual reality mode. The first part of the classification
relates to the modeling of virtual objects and concerns the
Designer, whereas the second part relates to the real time
interaction between the user and these objects, and
concerns the User. This distinction between the modeling
and the use of VR objects is at the root of the
classification.
Figure 1 presents a classification of VR interfaces. On
one side, the Designer creates and models the objects that
compose the virtual world, and defines their interactivity.
On the other side, the User interacts with the modeled
objects. In this paper we explain how to model objects for
the virtual world and how to interact with them.
The classification includes five levels of
decomposition. At the first level, there are four families
of VRI: on the Designer side, the modeling interfaces,
and on the User side, the sensorial, the motor skills, and
the sensorimotor interfaces.
In the modeling interfaces family, there are three
categories of interfaces and two levels of decomposition.
In the sensorial interfaces family, there are five categories
of VRI corresponding to the five senses of the human
being, and four levels of decomposition. In the motor skills interfaces family, there are seven categories of VRI corresponding to the different motor activities, and three levels of decomposition. Finally, in the sensorimotor interfaces family, there is only one category of VRI, associated with force feedback control, and two levels of decomposition.
4 Modeling Virtual World Objects
Modeling consists in building a representation of an object of interest. A model is an imitation of an object, at a smaller, equal, or larger scale than reality, simplified in comparison with reality, and with the advantage of being more easily observable and controllable than reality. Modeling is carried out with modeling interfaces.
To create 3D models of objects, the VRI designer can choose between three types of interfaces that correspond to different methods (see Figure 2). The first method consists in creating models from real objects, the second in creating objects with software, and the third, more recent one, in creating volumetric forms with virtual sculpture. The following paragraphs give more details on each method.
Fig. 1. Classification of Virtual Reality Interfaces
Modeling from Real Objects. 3D digitizers allow one to create 3D models of objects for the virtual world much more quickly than with software or modelers [4]. To collect the data required to create a 3D model of an object [23], one can scan the object’s surfaces with a digitizer. The result is digital data that can be processed by a computer.
Modeling with Software. To create 3D models, one can use computer-aided design (CAD) tools such as 3D Studio [24], AutoCAD [25], CATIA [26], Pro/ENGINEER [27], etc., or programming languages such as C/C++, Pascal, VRML [28], etc. This method allows one to reproduce an object in the form of a 3D grid (mesh) that shows the geometry of the object in detail.
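As an illustration of modeling with a programming language rather than a CAD tool, the C++ sketch below builds the vertices and triangles of a simple rectangular grid, the kind of 3D mesh just mentioned. The resolution and size parameters are arbitrary choices for the example.

#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

int main() {
    const int n = 4;          // grid resolution (arbitrary)
    const float size = 1.0f;  // grid extent (arbitrary)
    std::vector<Vertex> verts;
    std::vector<int> tris;    // triangle indices, three per triangle

    for (int i = 0; i <= n; ++i)       // (n+1) x (n+1) vertices on a plane
        for (int j = 0; j <= n; ++j)
            verts.push_back({size * i / n, size * j / n, 0.0f});

    for (int i = 0; i < n; ++i)        // two triangles per grid cell
        for (int j = 0; j < n; ++j) {
            int a = i * (n + 1) + j, b = a + 1, c = a + n + 1, d = c + 1;
            tris.insert(tris.end(), {a, b, c, b, d, c});
        }

    std::printf("%zu vertices, %zu triangles\n", verts.size(), tris.size() / 3);
    return 0;
}

A real modeler would of course attach normals, textures, and behaviors to such a mesh before inserting it into the virtual world.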
Modeling from Virtual Sculpture. With this method, knowing 3D modeling or design software is not necessary. The form is created by virtually working a matter whose form, texture, and density can be physically felt [29]. This method allows sculptors, designers, or experts in computer graphics to use the sense of touch during creation. It combines the freedom of intuition and expression with the speed and flexibility of a digital tool, without hindering creativity. Various tools make it possible to shape, cut, stretch, pierce, etc. this “virtual clay” as a sculptor would, while constantly keeping full control of the task through force feedback. The object finally obtained has a digital definition that can be used in CAD software or a rapid prototyping system, and can be directly exploited to build a physical model of the object.
Fig. 2. Classification of Modeling Interfaces
5 Interacting with Virtual World Objects
To interact with objects of the virtual world, one must be able to perceive and handle these objects. Thus, the VRI can be classified into three large families: sensorial interfaces, motor skills interfaces, and sensorimotor interfaces. The next sections describe the various types of interfaces within each family.
5.1 Sensorial Interfaces
The role of the sensorial interfaces is to make it possible for the user to perceive objects of the virtual world. As in the real world, the more senses are stimulated in the virtual world, the greater the user’s feeling of immersion [30]. In 1952, Heilig [31] analyzed the senses according to their capacity to mobilize human attention. Although we are critical of the results, it is intriguing to know the order of magnitude of the mobilization capacity of each sense: sight (70%), hearing (20%), smell (5%), touch (4%), and taste (1%). Our classification of the sensorial interfaces follows this order; accordingly, we present the visual, auditive, olfactory, touch, and gustative interfaces (see Figure 3). Note that gustative interfaces do not exist yet.
Visual Interfaces. As sight is the sensory channel that brings the most information to the human [10][31], it is essential to create visual immersion and thus allow users to see objects of the virtual world. In addition to producing good quality 3D images, the challenge with visual interfaces is to show a different image to each eye and thus allow users to benefit from stereoscopic vision. Visual interfaces were classified according to the technique used to separate the images intended for each eye. Thus, the visual interfaces are divided into two classes: stereoscopic visual interfaces and monoscopic visual interfaces. The first ones are divided into two types of interfaces: (1) interfaces with two screens (one per eye), which include head-mounted displays, video glasses, and binocular omni-oriented monitors; (2) interfaces with only one screen (for both eyes), which include shutter glasses and stereoscopic projection screens; in this case the images are separated at the screen level or by the glasses. The monoscopic visual interfaces take shape through head-up displays, see-through HMDs, and visual display screens.
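The stereoscopic interfaces above all depend on rendering the scene from two slightly offset viewpoints. As a minimal sketch of this idea, the C++ program below derives left- and right-eye positions from a head position and an interpupillary distance; the numerical values are illustrative, and a real renderer would build a separate view matrix from each eye position.

#include <cstdio>

struct Vec3 { float x, y, z; };

int main() {
    Vec3 head  = {0.0f, 1.7f, 0.0f};  // head position, metres (illustrative)
    Vec3 right = {1.0f, 0.0f, 0.0f};  // head's right axis, assumed unit length
    const float ipd = 0.064f;         // typical interpupillary distance, metres

    // Offset each eye by half the interpupillary distance along the right axis.
    Vec3 leftEye  = {head.x - right.x * ipd / 2, head.y - right.y * ipd / 2,
                     head.z - right.z * ipd / 2};
    Vec3 rightEye = {head.x + right.x * ipd / 2, head.y + right.y * ipd / 2,
                     head.z + right.z * ipd / 2};

    std::printf("left eye  (%.3f, %.3f, %.3f)\n", leftEye.x, leftEye.y, leftEye.z);
    std::printf("right eye (%.3f, %.3f, %.3f)\n", rightEye.x, rightEye.y, rightEye.z);
    return 0;
}

Two-screen interfaces show each image on its own display, whereas one-screen interfaces interleave the two images and separate them at the screen or glasses level.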
Auditive Interfaces. To increase the feeling of immersion, and even to influence the visual perception of the virtual world, the hearing sense can be exploited [12][32]. The sounds generated by the computer must be 3D and come from the elements present in the virtual world. The auditive interfaces were classified into two categories: 3D sound generation interfaces, and sound and speech synthesis interfaces.
Touch Interfaces. In several domains such as teleoperation, telemedicine [33][34][35], and telerobotics, the sense of touch is necessary to better appreciate the virtual world [8][12][36][37]. Moreover, the hand has a high density of tactile sensors that makes it a good candidate for immersion [4], and it is naturally used for handling tasks. The touch interfaces were classified into two categories: thermal feedback interfaces and touch feedback interfaces.
Olfactory Interfaces. The synthesis of simple odours (e.g., coffee, burned rubber, flower, moisture) is possible by mixing odorous molecules (not more than two at a time if one wants to recognise the final odour) through the use of a flavour diffuser. However, the absence of basic odorous molecules, by analogy with the basic colours in painting, makes the general synthesis of odours impossible for the moment [31][38][39].
Gustative Interfaces. Gustative interfaces do not exist yet; research on this topic remains theoretical and there are very few publications on it [40]. Various applications are nevertheless conceivable, such as rooms for the virtual tasting of foods: it would be possible to taste wines, coffees, etc. simply through gustatory devices, just as we smell odours through scent strips in perfumeries.
Fig. 3. Classification of Sensorial Interfaces
5.2 Motor Skills Interfaces
The role of the motor skills interfaces is to allow the user
to act on the objects of the virtual world. To do so, one
must provide the computer with information on the user’s
gestures and speech that concern the objects of the virtual
world so that it can react to them in a suitable way [11].
Seven classes of motor skills interfaces were identified, as
Figure 4 shows. These interfaces enable the user to
operate in the virtual world.
Position and Orientation Location Interfaces. 3D position sensors, commonly called trackers (the term we use in the rest of the paper), make it possible to know at any time the position and orientation in space of the user’s body parts (head, wrist) or of an object (effector, stylus). Trackers use one of the following technologies: electromagnetic, electric, acoustic, optical, mechanical, gyroscopic, or hybrid.
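Whatever the underlying technology, a tracker report is essentially a six-degree-of-freedom pose. The C++ sketch below defines a minimal pose record (position plus orientation quaternion) and a hypothetical polling function; no real tracker API is implied.

#include <cstdio>

struct Quaternion { float w, x, y, z; };          // orientation
struct Pose { float px, py, pz; Quaternion q; };  // 6-DOF tracker report

// Hypothetical stand-in for reading one sample from a tracker.
Pose poll_tracker() {
    return {0.10f, 1.65f, -0.30f, {1.0f, 0.0f, 0.0f, 0.0f}};  // identity rotation
}

int main() {
    Pose head = poll_tracker();  // e.g., the sensor mounted on the user's head
    std::printf("position (%.2f, %.2f, %.2f), quaternion (%.1f, %.1f, %.1f, %.1f)\n",
                head.px, head.py, head.pz, head.q.w, head.q.x, head.q.y, head.q.z);
    return 0;
}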
Finger Movement Detection Interfaces. These interfaces take shape through data gloves [41][42], which detect some or all of the finger movements relative to the wrist. The gloves are based on one of the following technologies: optical fibres, plates, Hall effect, or pneumatic.
Walking Analysis Interfaces. Soles and force feedback platforms can be used to analyse a person’s walk. Walking analysis interfaces collect information on the person’s gait, including step length, walking speed, body movement and balance, etc.
Fig. 4. Classification of Motor Skills Interfaces
Motion Capture Interfaces. These interfaces can be divided into two categories: 2D- vs. 3D-restitution motion capture interfaces. Their role is to collect data on body movements. The 2D-restitution interfaces, which include an image analysis system, make it possible to analyse the video images of the tracked person; these images are modelled by a polygon-based system allowing a 2D restitution of the filmed scene [43]. The 3D-restitution interfaces make it possible to build a 3D model of a body in movement, and to benefit from the advantages of the natural movements performed by a human in a real environment. Three types of 3D-restitution interfaces were identified on the basis of the data collection method used: tracker systems, image analysis systems, and data suits.
Command Interfaces. These interfaces are used to send orders to the virtual world [5]. It is not rare to find traditional control devices in virtual reality. Three categories of command interfaces were identified: manual, pedestrian, and sound and speech.
Locomotion Interfaces. The role of these interfaces is to make the user feel that he/she is moving within the virtual world [44]. They can be divided into two categories: one-way locomotion interfaces and multidirectional locomotion interfaces [45]. The former include roller skates, 1D rolling carpets, etc., whereas the latter include mobile platforms, gyroscopes [46], movement controllers, 2D rolling carpets, etc.
Facial Acquisition Interfaces. These interfaces allow the computer to capture the user’s facial movements for various expressions (joy, sadness, fear, anger, surprise, dislike, daydream, mistrust, concern, withdrawal, suspicion, etc.) in order to reproduce these movements on a synthetic face [11]. The facial acquisition interfaces can be divided into three categories: facial movement analysis interfaces, eye movement location interfaces, and lip movement analysis interfaces.
5.3 Sensorimotor Interfaces
The role of the sensorimotor interfaces is to transmit the
motor responses of the user to the computer, and as a
reaction, sensory stimuli are sent by the computer to the
user [5]. Only one subcategory of sensorimotor interfaces
that include these two functions could be identified (see
Figure 5).
These interfaces bear some resemblance to motion simulators, which transmit changes of orientation and accelerations to the users inside the simulator. They allow one to materialise the objects present in the virtual world: they apply, to the part of the body in contact with the virtual object, the reciprocal forces that the user would exert on the real object. The forces to be simulated can be exerted by a liquid or a solid. The problem with the force feedback command interfaces, which include manipulators, force feedback joysticks, force feedback data gloves, exoskeletons, etc., is that they must be built on a solid frame and they cause some nuisance to the user [4].
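A common way to compute the reciprocal force just described is penalty-based rendering: when the user’s probe penetrates a virtual surface, a restoring force proportional to the penetration depth is returned to the device. The C++ sketch below applies this spring model to a flat virtual floor; the stiffness value and the idea of sending the result to a device are our assumptions for the example, not a method from the paper.

#include <algorithm>
#include <cstdio>

// Spring (penalty) model: force = stiffness * penetration, along the surface normal.
float contact_force(float probe_y, float floor_y, float stiffness) {
    float penetration = std::max(0.0f, floor_y - probe_y);  // depth below the floor
    return stiffness * penetration;  // upward force magnitude, in newtons
}

int main() {
    const float k = 500.0f;      // stiffness in N/m (illustrative value)
    const float floor_y = 0.0f;  // height of the virtual surface
    for (float y : {0.01f, 0.0f, -0.005f, -0.02f}) {
        float f = contact_force(y, floor_y, k);
        std::printf("probe at %+.3f m -> force %.1f N\n", y, f);
        // A real interface would now send f to the force feedback device.
    }
    return 0;
}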
Fig. 5. Classification of Sensorimotor Interfaces
6 Conclusion
In this paper we have defined and classified the various VRI, and presented a synthetic graphical view of the classification. The goal was to help the VR community get a better understanding of the domain by providing it with a solid and unified framework about the interfaces. The next step of our research will consist in documenting each VR interface and in designing a computer-aided decision-making system that helps designers and developers choose and combine VRI in relation to the tasks, the context, and the users’ requirements.
References
[1] L. Thériault and J.M. Robert. Classification des
interfaces matérielles de la réalité virtuelle. To appear in
71e Congrès de l’Association francophone pour le savoir
(Acfas), May 2003.
[2] L. Thériault and J.M. Robert. Taxonomie de la réalité virtuelle. Submitted to Journal of Human-Machine Interaction, March 2003.
[3] L. Thériault and J.M. Robert. Terminologie du domaine de la réalité virtuelle. Submitted to Journal of Human-Machine Interaction, March 2003.
[4] G. Burdea and P. Coiffet. La réalité virtuelle.
Hermès, Paris, 1993.
[5] P. Fuchs, M. Moreau, and J.P. Papin. Le traité de la
réalité virtuelle. Presses de l’École des Mines de Paris,
2001.
[6] D. Bowman. Interaction Techniques for Common
Tasks in Immersive Virtual Environments: Design,
Evaluation, and Application. Ph.D. Thesis, Georgia
Institute of Technology, June 1999. Web Site
http://people.cs.vt.edu/~bowman.
[7] S.R. Ellis. What are virtual environments? IEEE
Computer Graphics & Applications, 1994, 17-22.
[8] S.R. Ellis, D.R. Begault, and E.M. Wenzel. Virtual environments as human-computer interfaces (Chap. 8). In Helander, M., Landauer, T.K., Prabhu, P.V. (Eds). Handbook of Human-Computer Interaction, 2nd Edition, Elsevier, North-Holland, 1997, pp. 163-201.
[9] S. Ellis, M.K. Kaiser, and A.J. Grunwald (Eds).
Pictorial communications in virtual and real
environments. Taylor & Francis, 1991.
[10] C. Esposito and L. Duncan-Lacoste. User interfaces
for virtual reality applications. Morgan & Kaufmann, San
Francisco, 1999.
[11] ESIEA Group Web Site.
http://www.esiea.fr/activ/rv
[12] K.M. Stanney, R.R. Mourant, and R.S. Kennedy. Human Factors Issues in Virtual Environments: A Review of the Literature. Presence, 7(4), 1998, pp. 327-351.
[13] S. K. Card, J. D. Mackinlay, and G. G. Robertson.
The design space of input devices. In Proceedings of
CHI'90, pp. 117-124, 1990.
[14] J.L. Gabbard and D. Hix. A Taxonomy of Usability
Characteristics in Virtual Environments. Office of Naval
Research, Grant No. N00014-96-1-0385, November
1997. Web Site
http://csgrad.cs.vt.edu/~jgabbard/ve/taxonomy
[15] W. Buxton. Lexical and pragmatic considerations of
input structures. Computer Graphics, 17(1), 31-37, 1983.
[16] W. Buxton. Chunking and phrasing and the design
of human-computer dialogues. In H.-J. Kugler (Ed.), In
Proceedings of the IFIP 10th World Computer
Conference--Information Processing '86, pp. 475-480.
Amsterdam: Elsevier Science, 1986.
[17] S.K. Card, J.D. Mackinlay, and G.G. Robertson.
The design space of input devices. In Proceedings of the
CHI '90 Conference on Human Factors in Computing
Systems, pp. 117-124. New York: ACM, 1990.
[18] S.K. Card, J.D. Mackinlay, and G.G. Robertson. A
morphological analysis of the design space of input
devices. ACM Transactions on Office Information
Systems, 9, 99-122, 1991.
[19] J.D. Foley, V.L. Wallace, and P. Chan. The human
factors of computer graphics interaction techniques. IEEE
Computer Graphics and Applications, 4(11), 13-48,
1984.
[20] I.S. MacKenzie. Input devices and interaction
techniques for advanced computing. In W. Barfield, & T.
A. Furness III (Eds.), Virtual environments and advanced
interface design, pp. 437-470. Oxford, UK: Oxford
University Press, 1995.
[21] J.D. Mackinlay, S.K. Card, and G.G. Robertson. A
semantic analysis of the design space of input devices.
Human-Computer Interaction, 5, 145-190, 1991.
[22] G. Burdea, Virtual Reality Systems and
Applications. Electro 93 International Conference, Short
Course, Edison, NJ, April 1993.
[23] A.D. Gregory, S.A. Ehmann, and M.C. Lin. InTouch: Interactive Multiresolution Modeling and 3D Painting with a Haptic Interface. In Proceedings of IEEE Virtual Reality Annual International Symposium, 2000, pp. 45-52.
[24] M. Bousquet. 3D Studio MAX 2.0 Quick
Reference. 1st edition, February, Autodesk Press, 1998.
[25] S.D. Elliott, J. Brittain, G. Head, J. Head, T.
Schaefer, and S. Elliot. Autocad Reference Library : The
Complete, Searchable Resource for Mastering Autocad.
Cd-Rom edition, October, Ventana Communications
Group Inc, 1995.
[26] P. Carman and P. Tigwell. Catia Reference Guide.
2nd Edition, March, OnWord Press, 1998.
[27] S.G. Smith. Pro/ENGINEER 2000i Configuration
Options Reference Guide, February, CADquest, 2000.
[28] Web3D Web Site
http://www.web3d.org/vrml/vrml.htm
[29] Sim Team Web Site
http://www.simteam.com
[30] E. Emura and S. Tachi. Multisensor Integrated
Prediction for Virtual Reality. Presence, 7(4), 1998, pp.
410-422.
[31] M. Heilig. Enter the Experimental Revolution. In
Proceeding of Cyberarts Conference, Pasadena, October,
1992, pp. 292-305.
[32] C. Cruz-Neira, D.J. Sandin, T.A. DeFanti, R.V. Kenyon, and J.C. Hart. The CAVE: Audio visual experience automatic virtual environment. Communications of the ACM, 35(6), June 1992, pp. 64-72.
[33] P.J. Passmore, C.F. Nielsen, W.J. Cosh, and A.
Darzi. Effects of Viewing and Orientation on Path
Following in a Medical Teleoperation Environment. In
Proceedings of IEEE Virtual Reality Annual International
Symposium, 2001, pp. 209-215.
[34] A. Rizzo, J.G. Buckwalter, C. van der Zaag, U.
Neumann, M. Thiebaux, and C. Chua. Virtual
Environment Applications in Clinical Neuropsychology.
In Proceedings of IEEE Virtual Reality Annual
International Symposium, 2000, pp. 63-70.
[35] R. Steele, G. Goodrich, D. Hennies, and J. McKinley. Design and Operation Features of Portable Blind Reading Aids: Project Overview, Report 62, 1988, pp. 137-138.
[36] G. Burdea. Force and Touch Feedback for Virtual
Reality. John Wiley & Sons, New York, 1994.
[37] S.R. Ellis, S. Adelstein, G. Baumeler, G. Jense, and R. Jacoby. Sensor spatial distortion, visual latency, and update rate effects on 3D tracking in virtual environments. In Proceedings of IEEE Virtual Reality, Houston, TX, 1999, pp. 218-221.
[38] GNO Web Site
http://olfac.unav-lyon1.fr/olfac/acti/olfacto.htm
[39] H. Sundgren, F. Winquist, and I. Lundstrom.
Artificial Olfactory System Based on Field Effect
Devices. In Proceedings of the Interface to Real and
Virtual Worlds, Montpellier, France, March 1992, pp.
463-472.
[40] I. Bardot, L. Bochereau, P. Bourgine, B. Heyd, J. Hossenlopp, N. Martin, M. Rogeaux, and G. Trystram. Cuisinier artificiel: un automate pour la formulation sensorielle de produits alimentaires. In Proceedings of Interface to Real and Virtual Worlds Conference, Montpellier, France, March 1992, pp. 451-461.
[41] D.J. Sturman and D. Zeltzer. A survey of glove-
based input. IEEE Computer Graphics & Applications,
Vol. 14, January 1994, pp. 30-39.
[42] W. Buxton and B.A. Myers. A study in two-handed
input. In Proceedings of the CHI '86 Conference on
Human Factors in Computing Systems, pp. 321-326. New
York: ACM, 1986.
[43] Virtual Experimentation VEX
http://www.clarte.asso.fr/didacticiel/index.htm
[44] M. Hirose et al. Development of Haptic Interface Platform. Transactions of the Virtual Reality Society of Japan, 3(3), 1998.
[45] R. Darken, W. Cockayne, and D. Carmein. The Omni-directional Treadmill: A Locomotion Device for Virtual Worlds. In Proceedings of UIST '97, 1997.
[46] S. You and U. Neumann. Fusion of Vision and Gyro
Tracking for Robust Augmented Reality Registration. In
Proceedings of IEEE Virtual Reality Annual International
Symposium, 2001, pp. 71-78.