Virtual Reality Interfaces for Virtual Environments
Lévis Thériault, Jean-Marc Robert, and Luc Baron
École Polytechnique de Montréal
P.O. Box 6079, Station Centre-ville, Montréal, Québec H3C 3A7
{levis.theriault, jean-marc.robert, luc.baron}@polymtl.ca
Abstract. In this article we define and classify virtual
reality interfaces (VRI). The goal is to provide those
interested in such interfaces with a solid and unified
framework on VRI. This framework gives an overview of
VRI, and allows one to see the different categories of
VRI, the relations between them, the levels of analysis,
and the terms used. The classification starts from the fact that, to exist and be used in a virtual world, the objects of this world must first be created and must allow interaction with the user. This led us to define two sides in the
classification. The Designer side concerns the modeling
of objects and includes modeling interfaces. The User
side concerns the interaction with the objects of the
world, and includes three sets of interfaces: the sensorial
interfaces to perceive the world, the motor skills
interfaces to act in that world, and the sensorimotor
interfaces which combine the two types of activities.
Keywords: Human-Computer Interfaces, Virtual
Environments, Virtual Reality Interfaces, Virtual Reality
Systems, Modeling Interfaces, Sensorial Interfaces, Motor
Skills Interfaces.
1 Introduction and Motivation
In this paper we define and classify virtual reality
interfaces (VRI), with the goal of providing authors,
designers, developers, professors, and users with a solid
and unified framework on VRI [1][2][3]. This framework
gives an overall picture of the different families of VRI,
and of the categories and subcategories of VRI, the
relations that exist between them, the levels of analysis,
and the terms used. Moreover, we briefly describe the
devices and systems that form the VRI, their functioning,
roles and principal functionalities, and the supporting
technologies. In parallel, we identify relevant sources of
information (articles, books, reviews, Web sites, etc.) on
VRI.
The major works of [4][5] are rich sources of information on VRI and were of great help in this endeavor.
For instance, they present definitions and the principles of
virtual reality (VR), the systems, devices and
technologies that form virtual reality systems (VRS), the
manufacturers, the costs of the products, and some
companies in this new domain. We completed the data
collection about VRI with the works of
[6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]
[21]. Despite the value of these works, there was still a
need for a synthetic overall picture of VRI, for a detailed
classification of VRI, and for a clear distinction between
the designer’s activities and the user’s activities with
VRI.
The paper is structured as follows. It first presents a definition of VR and the conditions to be satisfied for a system to qualify as a VRS. It then describes the VRI classification: the underlying rationale; the families, categories, and sub-categories of VRI; the devices and systems that form the VRI; and the supporting technologies. A short conclusion highlights the contribution of the paper.
2 Virtual Reality
The concept underlying virtual reality was pioneered by Heilig in 1952, and the term "virtual reality" was introduced by Jaron Lanier in the 1980s and has been in use since then. The term remains controversial, with the consequence that there is no agreed-upon formal definition. However, several criteria allow one to understand its nature and scope.
Recently, Fuchs et al. [5] have defined virtual reality as
being “an aggregation of techniques allowing real time
interaction (1) with a virtual world (2), with behavioral
interfaces (3) capable of pseudo-natural immersion (4) of
the user(s) within this environment”. They assert that only
the installations that respect these four conditions are
virtual reality systems (VRS). In the next paragraphs, we
explain each of these conditions.
(1) Real time interaction is achieved when the user does not perceive a time lag between his/her actions in the virtual environment and the sensorial response of this environment. The lag must be around 100 ms or less to preserve the impression of real time; the acceptable delay depends on the senses and motor responses involved. The authors assert that navigation is only one particular interaction: the user interacts in the virtual world by virtually moving him/herself with a specific command in a changing virtual environment. If users do not command their own displacement, they are passive spectators and we are no longer in VR.
(2) The two major problems raised by VR concern the modeling or digitization of the virtual world, and the interfacing of the user with this virtual world.
(3) To operate in a virtual world, one has to use
behavioral interfaces. These are composed of
sensorial and motor skills interfaces. Sensorial
interfaces present the virtual world evolution to the
user’s senses. Motor skills interfaces transmit the
user’s action to the virtual world. The number and
types of interfaces to be used depend on the
objectives pursued with the application.
(4) The user must experience the most effective possible pseudo-natural immersion in the virtual world. Immersion within a virtual world cannot be fully natural, because we have learned to behave naturally only within the real world, not within a virtual one (there is a sensorimotor and mental skew, hence the term pseudo). The sensation of immersion is subjective and depends on the application and the devices used (interfaces, software, etc.).
In this article we adopt this definition of VR and the
four conditions that it includes since they seem to be the
most comprehensive ones.
2.1 Basic Features of VRS
VRS [22] allow users (scientists, engineers, physicians, trainees, etc.) to interact with data in a different manner, in order to better understand and solve complex real problems. Real time interaction with a virtual scene requires that the system be able to carry out a number of basic operations, such as:
- real time management of the scene objects stored in memory, and ongoing updating of the scene;
- simulation of the objects' behavior;
- rendering of the 3D images of the scene at a given frequency (typically 15 Hz);
- generation of the sounds associated with the graphic database, if any;
- management of the navigation model in the database (i.e., when one walks, flies, zooms, etc.);
- control of the system's input-output peripherals, to allow the user to interact with objects of the scene, trigger actions, and receive images, sounds, or other adapted sensorial impressions;
- etc.
These operations require major capabilities of
processing, visualization, and manipulation of the system
that supports the virtual world.
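The operations above can be sketched as a fixed-step real-time loop. The sketch below is only a minimal illustration, not an actual VRS implementation: the class and function names are hypothetical, only the behavior-simulation step is fleshed out, and rendering and sound generation are left as stubs. It assumes the typical 15 Hz update frequency mentioned above.

```python
class SceneObject:
    """A minimal scene object whose behavior is simulated each frame."""
    def __init__(self, name, position=(0.0, 0.0, 0.0)):
        self.name = name
        self.position = list(position)

    def simulate(self, dt):
        # Trivial stand-in behavior: drift along x at 1 unit per second.
        self.position[0] += 1.0 * dt


def run_frames(scene, n_frames, target_hz=15.0):
    """Run n_frames of a fixed-step update loop at target_hz.

    Each frame simulates the objects' behavior, then would render the
    3D image of the scene and generate any associated sounds
    (both stubbed out here).
    """
    dt = 1.0 / target_hz
    for _ in range(n_frames):
        for obj in scene:
            obj.simulate(dt)
        # render(scene)       # stub: process the 3D image of the scene
        # play_sounds(scene)  # stub: sounds tied to the graphic database
    return scene


scene = [SceneObject("cube")]
run_frames(scene, n_frames=15)  # one simulated second at 15 Hz
```

A real VRS would additionally poll the input-output peripherals each frame and keep the whole loop within the real-time budget discussed in Section 2.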
3 Classification of VRI
A virtual world is composed of a coherent aggregation of modeled 3D objects with which one must be able to interact in virtual reality mode. The first part of the classification
relates to the modeling of virtual objects and concerns the
Designer, whereas the second part relates to the real time
interaction between the user and these objects, and
concerns the User. This distinction between the modeling
and the use of VR objects is at the root of the
classification.
Figure 1 presents a classification of VR interfaces. On
one side, the Designer creates and models the objects that
compose the virtual world, and defines their interactivity.
On the other side, the User interacts with the modeled
objects. In this paper we explain how to model objects for
the virtual world and how to interact with them.
The classification includes five levels of
decomposition. At the first level, there are four families
of VRI: on the Designer side, the modeling interfaces,
and on the User side, the sensorial, the motor skills, and
the sensorimotor interfaces.
In the modeling interfaces family, there are three
categories of interfaces and two levels of decomposition.
In the sensorial interfaces family, there are five categories
of VRI corresponding to the five senses of the human
being and four levels of decomposition. In the motor
skills interfaces, there are seven categories of VRI
corresponding to the different motor senses and three
levels of decomposition. Finally, in the sensorimotor interfaces family, there is only one category of VRI, associated with the force feedback command, and two levels of decomposition.
4 Modeling Virtual World Objects
Modeling consists in building a representation of some object of interest. A model is an imitation of an object, at a smaller, equal, or larger scale than reality, simplified compared to reality, with the advantage of being more easily observable and controllable than reality. Modeling is carried out with modeling interfaces.
To create 3D models of objects, the VRI designer can
choose between three types of interfaces that correspond
to different methods (see Figure 2). The first method
consists in creating models from real objects, the second
one consists in creating objects with software, and the
third one, which is more recent, consists in creating
volumetric forms with virtual sculpture. The following
paragraphs give more details on each method.
Fig. 1. Classification of Virtual Reality Interfaces
Modeling from Real Objects. 3D digitizers allow one to create 3D models of objects for the virtual world much more quickly than if software or modelers were used [4]. To collect the data required to create a 3D model of an object [23], one can scan the object's surfaces with a digitizer. The results are digital data that can be processed by a computer.
Modeling with Software. To create 3D models, one can use computer-aided design (CAD) tools such as 3D Studio [24], AutoCAD [25], CATIA [26], ProEngineer [27], etc., or programming languages such as C/C++, Pascal, VRML [28], etc. This method allows one to reproduce an object in the form of a 3D grid that shows the geometry of the object in detail.
Modeling from Virtual Sculpture. With this method, no knowledge of 3D modeling or design software is necessary. The form is created by virtually working a material whose shape, texture, and density can be physically felt [29]. This method allows sculptors, designers, or experts in computer graphics to use the sense of touch during creation. It combines the freedom of intuition and expression with the speed and flexibility of a digital tool, without hindering creativity. Various tools make it possible to shape, cut out, stretch, pierce, etc. this "virtual clay" as a sculptor would, while constantly keeping full control of the task through a force feedback command. The object finally obtained has a digital definition that can be used in CAD software or a rapid prototyping system, and can be directly exploited to build a physical model of the object.
Fig. 2. Classification of Modeling Interfaces
5 Interacting with Virtual World Objects
To interact with objects of the virtual world, one must be able to perceive and handle these objects. Thus, the VRI can be classified into three large families: sensorial interfaces, motor skills interfaces, and sensorimotor interfaces. The next sections describe the various types of interfaces within each family.
5.1 Sensorial Interfaces
The role of the sensorial interfaces is to make it possible for the user to perceive objects of the virtual world. As in the real world, the more senses are stimulated in the virtual world, the greater the user's feeling of immersion [30]. In 1952, Heilig [31] analyzed the senses according to their capacity to mobilize human attention. Although we are critical of the results, it is intriguing to know the order of magnitude of the mobilization capacity of each sense: sight (70%), hearing (20%), smell (5%), touch (4%), and taste (1%). Our classification of the sensorial interfaces follows this order; accordingly, we present the visual, auditive, olfactory, touch, and gustative interfaces (see Figure 3). Note that gustative interfaces do not exist yet.
Visual Interfaces. As sight is the sensory channel that brings the most information to the human [10][31], it is essential to create visual immersion and thus allow users to see objects of the virtual world. In addition to producing good quality 3D images, the challenge in building visual interfaces is to show a different image to each eye, allowing users to benefit from stereoscopic vision. Visual interfaces are classified according to the technique used to separate the images intended for each eye. Thus, the visual interfaces are divided into two classes: stereoscopic visual interfaces and monoscopic visual interfaces. The former are divided into two types of interfaces: (1) interfaces with two screens (one per eye), which include head-mounted displays, video glasses, and binocular omni-oriented monitors; (2) interfaces with only one screen (for both eyes), which include shutter glasses and stereoscopic projection screens; the images are then separated at the screen level or by the glasses. The monoscopic visual interfaces take shape through head-up displays, see-through HMDs, and visual display screens.
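The principle of showing a different image to each eye can be illustrated with a small sketch: render the scene twice, from two camera positions offset by half the interpupillary distance (IPD). This is a simplified illustration under assumed conventions (head looking down the -z axis, lateral axis +x; the 64 mm IPD is a common average), not the rendering code of any particular interface.

```python
def eye_positions(head_pos, ipd=0.064):
    """Left and right eye positions for stereoscopic rendering:
    offset the head position by +/- half the interpupillary distance
    (ipd, in metres) along the head's lateral axis (+x here)."""
    x, y, z = head_pos
    half = ipd / 2.0
    return (x - half, y, z), (x + half, y, z)


left, right = eye_positions((0.0, 1.6, 0.0))
# The scene is rendered once per eye position.  A two-screen interface
# (e.g., a head-mounted display) shows each image on its own screen,
# while a one-screen interface separates the two images with shutter
# glasses or a stereoscopic projection screen.
```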
Auditive Interfaces. The sense of hearing can be exploited to increase the feeling of immersion, and even to influence the visual perception of the virtual world [12][32]. The sounds generated by the computer must be 3D and appear to come from the elements present in the virtual world. The auditive interfaces are classified into two categories: 3D sound generation interfaces, and sound and speech synthesis interfaces.
Touch Interfaces. In several domains, such as teleoperation, telemedicine [33][34][35], and telerobotics, the sense of touch is necessary to better appreciate the virtual world [8][12][36][37]. Moreover, the hand has a high density of tactile receptors, which makes it a good candidate for immersion [4], and it is naturally used for handling tasks. The touch interfaces are classified into two categories: thermal feedback interfaces and touch feedback interfaces.
Olfactory Interfaces. The synthesis of simple odours
(e.g., coffee, burned rubber, flower, moisture, etc.) is
possible with the mixture of odorous molecules (not more
than two at the same time if one wants to recognise the
final odour) and through the use of a flavour diffuser. However, the absence of basic odorous molecules, by analogy with the primary colours in painting, makes the general synthesis of odours impossible for the moment [31][38][39].
Gustative Interfaces. Gustative interfaces do not exist
yet. Research on this topic remains theoretical and there
are very few publications on it [40]. Various applications are possible; one can imagine rooms for the virtual tasting of foods. It would be possible to taste wines, coffees, etc. simply through gustatory devices, just as we smell odours through scent strips in perfumeries.
Fig. 3. Classification of Sensorial Interfaces
5.2 Motor Skills Interfaces
The role of the motor skills interfaces is to allow the user
to act on the objects of the virtual world. To do so, one
must provide the computer with information on the user’s
gestures and speech that concern the objects of the virtual
world so that it can react to them in a suitable way [11].
Seven classes of motor skills interfaces were identified, as
Figure 4 shows. These interfaces enable the user to
operate in the virtual world.
Position and Orientation Location Interfaces. 3D position sensors, commonly called trackers (the term used in the rest of this paper), allow the system to know at any time the position and orientation in space of the user's body parts (head, wrist) or of an object (effector, stylus). Trackers use one of the following technologies: electromagnetic, electric, acoustic, optical, mechanical, gyroscopic, or hybrid.
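As an illustration of the data such trackers deliver, the sketch below pairs a 6-DOF sample (position plus orientation) with a check against the roughly 100 ms real-time budget discussed in Section 2. The record layout and names are hypothetical, not a real tracker API.

```python
from dataclasses import dataclass


@dataclass
class TrackerSample:
    """One 6-DOF tracker sample: 3D position plus orientation,
    here as a unit quaternion (w, x, y, z), with a timestamp."""
    position: tuple     # metres, in the tracker's reference frame
    orientation: tuple  # unit quaternion (w, x, y, z)
    timestamp: float    # seconds


def latency_ok(sample_time, display_time, budget=0.100):
    """True if the end-to-end lag between sampling the user's pose and
    displaying the response stays within the ~100 ms budget."""
    return (display_time - sample_time) <= budget


s = TrackerSample(position=(0.0, 1.6, 0.0),
                  orientation=(1.0, 0.0, 0.0, 0.0),
                  timestamp=0.000)
assert latency_ok(s.timestamp, 0.050)      # 50 ms lag: acceptable
assert not latency_ok(s.timestamp, 0.150)  # 150 ms lag: breaks real time
```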
Finger Movement Detection Interfaces. These interfaces take the form of data gloves [41][42], which detect some or all of the finger movements relative to the wrist. The gloves are based on one of the following technologies: optical fibres, plates, Hall effect, or pneumatics.
Walking Analysis Interfaces. Soles and force feedback platforms can be used to analyse a person's walk. Walking analysis interfaces allow one to collect information on a person's walk, including pace length, walking speed, body movement and balance, etc.
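As an illustration of the kind of data reduction involved, the sketch below derives pace length and walking speed from a hypothetical sequence of timestamped footfall positions, such as instrumented soles might record; the data format is assumed for illustration only.

```python
def walking_stats(footfalls):
    """Estimate average pace length (m) and walking speed (m/s) from
    a chronological list of (time_s, x_m) footfall samples along a
    straight line (hypothetical data format)."""
    paces = [b[1] - a[1] for a, b in zip(footfalls, footfalls[1:])]
    duration = footfalls[-1][0] - footfalls[0][0]
    distance = footfalls[-1][1] - footfalls[0][1]
    return sum(paces) / len(paces), distance / duration


# Three footfalls, 0.7 m apart, one every 0.5 s:
pace, speed = walking_stats([(0.0, 0.0), (0.5, 0.7), (1.0, 1.4)])
```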
Fig. 4. Classification of Motor Skills Interfaces
Motion Capture Interfaces. These interfaces can be divided into two categories: 2D- vs. 3D-restitution motion capture interfaces. Their role is to collect data on body movements. The 2D-restitution interfaces, which include an image analysis system, allow one to analyse video images of the tracked person; these images are modelled by a polygon-based system allowing a 2D restitution of the filmed scene [43]. The 3D-restitution interfaces allow one to build a 3D model of a body in movement, and to benefit from the advantages of the natural movements performed by a human in a real environment. Three types of 3D-restitution interfaces were identified, on the basis of the data collection method used: tracker systems, image analysis systems, and data suits.
Command Interfaces. These interfaces are used to send orders to the virtual world [5]. It is not rare to find traditional control devices in virtual reality. Three categories of command interfaces were identified: manual, pedestrian, and sound and speech.
Locomotion Interfaces. The role of these interfaces is to give the user the impression of moving through the virtual world [44]. They can be divided into two categories: one-way locomotion interfaces and multidirectional locomotion interfaces [45]. The former include roller skates, 1D rolling carpets, etc., whereas the latter include mobile platforms, gyroscopes [46], movement controllers, 2D rolling carpets, etc.
Facial Acquisition Interfaces. These interfaces allow the computer to capture the facial movements of the user under various expressions (joy, sadness, fear, anger, surprise, dislike, daydream, mistrust, concern, withdrawal, suspicion, etc.) in order to reproduce these movements on a synthetic face [11]. The facial acquisition interfaces can be divided into three categories: facial movement analysis interfaces, eye movement location interfaces, and lip movement analysis interfaces.
5.3 Sensorimotor Interfaces
The role of the sensorimotor interfaces is to transmit the motor responses of the user to the computer and, in reaction, sensory stimuli are sent by the computer to the user [5]. Only one subcategory of sensorimotor interfaces combining these two functions could be identified (see Figure 5).
These interfaces bear some resemblance to motion simulators, which transmit changes of orientation and accelerations to the users inside the simulator. They allow one to materialise the objects present in the virtual world: they apply, to the part of the body in contact with the virtual object, the reciprocal forces that the user would exert on the real object. The forces to be simulated can be exerted by a liquid or a solid. The problem with force feedback command interfaces, which include manipulators, force feedback joysticks, force feedback data gloves, exoskeletons, etc., is that they must be built on a solid frame and cause some nuisance to the user [4].
Fig. 5. Classification of Sensorimotor Interfaces
6 Conclusion
In this paper we have defined and classified the various VRI, and presented a synthetic graphical overview of the classification. The goal was to help the VR community get a better understanding of the domain by providing it with a solid and unified framework for the interfaces. The next step of our research will consist in documenting each VR interface and designing a computer-aided decision-making system to help designers and developers choose and combine VRI according to the tasks, the context, and the users' requirements.
References
[1] L. Thériault and J.M. Robert. Classification des
interfaces matérielles de la réalité virtuelle. To appear in
71e Congrès de l’Association francophone pour le savoir
(Acfas), May 2003.
[2] L. Thériault and J.M. Robert. Taxonomie de la réalité virtuelle. Submitted to Journal of Human-Machine Interaction, March 2003.
[3] L. Thériault and J.M. Robert. Terminologie du domaine de la réalité virtuelle. Submitted to Journal of Human-Machine Interaction, March 2003.
[4] G. Burdea and P. Coiffet. La réalité virtuelle.
Hermès, Paris, 1993.
[5] P. Fuchs, M. Moreau, and J.P. Papin. Le traité de la
réalité virtuelle. Presses de l’École des Mines de Paris,
2001.
[6] D. Bowman. Interaction Techniques for Common
Tasks in Immersive Virtual Environments: Design,
Evaluation, and Application. Ph.D. Thesis, Georgia
Institute of Technology, June 1999. Web Site
http://people.cs.vt.edu/~bowman.
[7] S.R. Ellis. What are virtual environments? IEEE
Computer Graphics & Applications, 1994, 17-22.
[8] S.R. Ellis, D.R. Begault, and E.M. Wenzell. Virtual
environments as human-computer interfaces (Chap. 8). In
Helander, M., Landauer, T.K., Prabhu, P.V. (Eds).
Handbook of human-computer interaction, 2nd Edition,
Elsevier, North-Holland, 1997, pp. 163-201.
[9] S. Ellis, M.K. Kaiser, and A.J. Grunwald (Eds).
Pictorial communications in virtual and real
environments. Taylor & Francis, 1991.
[10] C. Esposito and L. Duncan-Lacoste. User interfaces
for virtual reality applications. Morgan & Kaufmann, San
Francisco, 1999.
[11] ESIEA Group Web Site.
http://www.esiea.fr/activ/rv
[12] K.M. Stanney, R.R. Mourant, and R.S. Kennedy,
Human Factors Issues in Virtual Environments : A
Review of the Literature. Presence, 7(4), 1998, pp. 327-
351.
[13] S. K. Card, J. D. Mackinlay, and G. G. Robertson.
The design space of input devices. In Proceedings of
CHI'90, pp. 117-124, 1990.
[14] J.L. Gabbard and D. Hix. A Taxonomy of Usability
Characteristics in Virtual Environments. Office of Naval
Research, Grant No. N00014-96-1-0385, November
1997. Web Site
http://csgrad.cs.vt.edu/~jgabbard/ve/taxonomy
[15] W. Buxton. Lexical and pragmatic considerations of
input structures. Computer Graphics, 17(1), 31-37, 1983.
[16] W. Buxton. Chunking and phrasing and the design
of human-computer dialogues. In H.-J. Kugler (Ed.), In
Proceedings of the IFIP 10th World Computer
Conference--Information Processing '86, pp. 475-480.
Amsterdam: Elsevier Science, 1986.
[17] S.K. Card, J.D. Mackinlay, and G.G. Robertson.
The design space of input devices. In Proceedings of the
CHI '90 Conference on Human Factors in Computing
Systems, pp. 117-124. New York: ACM, 1990.
[18] S.K. Card, J.D. Mackinlay, and G.G. Robertson. A
morphological analysis of the design space of input
devices. ACM Transactions on Office Information
Systems, 9, 99-122, 1991.
[19] J.D. Foley, V.L. Wallace, and P. Chan. The human
factors of computer graphics interaction techniques. IEEE
Computer Graphics and Applications, 4(11), 13-48,
1984.
[20] I.S. MacKenzie. Input devices and interaction
techniques for advanced computing. In W. Barfield, & T.
A. Furness III (Eds.), Virtual environments and advanced
interface design, pp. 437-470. Oxford, UK: Oxford
University Press, 1995.
[21] J.D. Mackinlay, S.K. Card, and G.G. Robertson. A
semantic analysis of the design space of input devices.
Human-Computer Interaction, 5, 145-190, 1991.
[22] G. Burdea, Virtual Reality Systems and
Applications. Electro 93 International Conference, Short
Course, Edison, NJ, April 1993.
[23] A.D. Gregory, S.A. Ehmann, and M.C. Lin.
InTouch : Interactive Multiresolution Modeling and 3D
Painting with a Haptic Interface. In Proceedings of IEEE
Virtual Reality Annual International Symposium, 2000,
pp. 45-52.
[24] M. Bousquet. 3D Studio MAX 2.0 Quick
Reference. 1st edition, February, Autodesk Press, 1998.
[25] S.D. Elliott, J. Brittain, G. Head, J. Head, T.
Schaefer, and S. Elliot. Autocad Reference Library : The
Complete, Searchable Resource for Mastering Autocad.
Cd-Rom edition, October, Ventana Communications
Group Inc, 1995.
[26] P. Carman and P. Tigwell. Catia Reference Guide.
2nd Edition, March, OnWord Press, 1998.
[27] S.G. Smith. Pro/ENGINEER 2000i Configuration
Options Reference Guide, February, CADquest, 2000.
[28] Web3D Web Site
http://www.web3d.org/vrml/vrml.htm
[29] Sim Team Web Site
http://www.simteam.com
[30] E. Emura and S. Tachi. Multisensor Integrated
Prediction for Virtual Reality. Presence, 7(4), 1998, pp.
410-422.
[31] M. Heilig. Enter the Experiential Revolution. In
Proceeding of Cyberarts Conference, Pasadena, October,
1992, pp. 292-305.
[32] C. Cruz-Neira, D.J. Sandin, T.A. DeFanti, R.V.
Kenyon, and J.C. Hart. The CAVE : Audio visual
experience automatic virtual environment.
Communication of ACM, 35(6), June 1992, pp. 64-72.
[33] P.J. Passmore, C.F. Nielsen, W.J. Cosh, and A.
Darzi. Effects of Viewing and Orientation on Path
Following in a Medical Teleoperation Environment. In
Proceedings of IEEE Virtual Reality Annual International
Symposium, 2001, pp. 209-215.
[34] A. Rizzo, J.G. Buckwalter, C. van der Zaag, U.
Neumann, M. Thiebaux, and C. Chua. Virtual
Environment Applications in Clinical Neuropsychology.
In Proceedings of IEEE Virtual Reality Annual
International Symposium, 2000, pp. 63-70.
[35] R. Steele, G. Goodrich, D. Hennies, and J.
McKinley. Design and Operation Features of Portable
Blind Reading Aids : Project Overview, Report 62, 1988,
pp. 137-138.
[36] G. Burdea. Force and Touch Feedback for Virtual
Reality. John Wiley & Sons, New York, 1994.
[37] S.R. Ellis, S. Adelstein, G. Baumeler, G. Jense, and
R. Jacoby. Sensor spatial distorsion, visual latency, and
update rate effects on 3D tracking in virtual
environments. In Proceedings of IEEE Virtual Reality,
Houston, TX, 1999, pp. 218-221.
[38] GNO Web Site
http://olfac.unav-lyon1.fr/olfac/acti/olfacto.htm
[39] H. Sundgren, F. Winquist, and I. Lundstrom.
Artificial Olfactory System Based on Field Effect
Devices. In Proceedings of the Interface to Real and
Virtual Worlds, Montpellier, France, March 1992, pp.
463-472.
[40] I. Bardot, L. Bochereau, P. Bourgine, B. Heyd, J.
Hossenlopp, N. Martin, M. Rogeaux, and G. Trystram.
Cuisinier artificiel: un automate pour la formulation
sensorielle de produits alimentaires. In Proceedings of
Interface to Real and Virtual Worlds Conference,
Montpellier, France, March 1992, pp. 451-461.
[41] D.J. Sturman and D. Zeltzer. A survey of glove-
based input. IEEE Computer Graphics & Applications,
Vol. 14, January 1994, pp. 30-39.
[42] W. Buxton and B.A. Myers. A study in two-handed
input. In Proceedings of the CHI '86 Conference on
Human Factors in Computing Systems, pp. 321-326. New
York: ACM, 1986.
[43] Virtual Experimentation VEX
http://www.clarte.asso.fr/didacticiel/index.htm
[44] M. Hirose et al. Development of Haptic Interface
Platform. Transactions of the Virtual Reality Society of
Japan, 3(3), 1998.
[45] R. Darken, W. Cockayne, and D. Carmein. The
Omni-directional Treadmill : A Locomotion Device for
Virtual Worlds. In Proceedings of UIST 97.
[46] S. You and U. Neumann. Fusion of Vision and Gyro
Tracking for Robust Augmented Reality Registration. In
Proceedings of IEEE Virtual Reality Annual International
Symposium, 2001, pp. 71-78.