Journal of Special Education Technology, 2011, Volume 26, Number 4
A Virtual Map to Support People Who Are Blind
in Navigation Through Real Spaces
Orly Lahav
Tel Aviv University
David W. Schloerb, Siddarth Kumar, and Mandayam A. Srinivasan
Massachusetts Institute of Technology
Most of the spatial information needed by sighted people to construct cognitive maps of spaces is gathered through the visual channel. Unfortunately, people who are blind lack the ability to collect the required spatial information in advance. The use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on the rise in recent years. This research is based on the hypothesis that the advance supply of appropriate spatial information (perceptual and conceptual) through compensatory sensorial channels within a virtual environment may assist people who are blind in their anticipatory exploration and cognitive mapping of the unknown space. In this long-term research we developed and tested the BlindAid system, which combines 3D audio with a Phantom® haptic interface to allow the user to explore a virtual map through a handheld stylus. The main goals of this research were to study the cognitive mapping process of people who are blind when exploring complex virtual maps and how they apply this spatial knowledge later in real space. The findings supply strong evidence that interaction with the BlindAid system by people who are blind provides a robust foundation for the participants' development of comprehensive cognitive maps of unknown real spaces.
The visual sense plays a primary role in guiding sighted persons through an unknown environment and helping them reach a destination safely. Unfortunately, people who are blind face difficulties in performing such tasks. For most people who are blind, walking in an unknown environment can be uncomfortable, even after orientation and mobility (O&M) rehabilitation training. In this paper we define the term O&M as "the field dealing with systematic techniques by which blind persons orient themselves to their environment and move about independently" (Blasch, Wiener, & Welsh, 1997). Research on O&M skills of people who are blind in known and unknown spaces (Ungar, Blades, & Spencer, 1996) indicates that support for the acquisition of spatial mapping and orientation skills should be supplied at two main levels: perceptual and conceptual.
At the perceptual level, information perceived via other senses should compensate for the deficiency in the visual channel. Thus, the touch, audio, and olfactory channels become powerful suppliers of information about unknown environments. In addition to regular audio feedback, the audio channel includes echolocation, which enables the use of echo sounds to collect surrounding spatial information (Kish, 1997). At the conceptual level, the focus is on supporting the development of appropriate strategies for an efficient mapping of the space and the generation of navigation paths.
Over the years, O&M aids have been developed as secondary aids to help people who are blind build cognitive maps and explore real spaces. Research has been done on the effectiveness of these aids for people who are blind
and their ability to perform spatial tasks similar to the tasks performed in this research. These secondary aids are not a replacement for primary aids such as the long cane and the guide dog. Farmer and Smith (1997) describe the long cane as a device used "to provide detection or preview by extending the tactile sense of the user." There are two types of O&M aids: preplanning aids, which provide the user with information before arrival in an environment, and in situ aids, which provide the user with information about the environment on site. Preplanning aids include verbal descriptions of the space, tactile maps, strip maps, and physical models (Espinosa & Ochaita, 1998; Herman, Herman, & Chatman, 1983; Rieser, 1989; Ungar et al., 1996); sound-based virtual environment (VE) systems (Sánchez, Noriega, & Farías, 2008); and digital audio and tactile screens, which allow users to collect graphic information such as diagrams and maps (for example, the Nomad device or the Talking Tactile Tablets). No O&M research has yet been done on digital audio and tactile screen technology.
In situ aids that have been developed in recent years include mobility aids such as obstacle detectors. Some of the obstacle detection devices are based on ultrasonic echolocation, which is used to sense objects. These include, for example, Sonicguide (Warren & Strelow, 1985), Kaspa (Easton & Bentzen, 1999), Miniguide (GDP Research, 2005), and Palmsonar (Takes Corp., 2007). Other systems, such as the Tactile Vision Substitution System (TVSS), are based on a black-and-white camera for input and a small tactile tablet that is placed on the user's tongue for output (Bach-y-Rita, Tyler, & Kaczmarek, 2003). The orientation aids are based on embedded information and navigation systems. Environmental adaptation is needed for the embedded information devices, such as Talking Signs, which place sensors in the environment (Crandall, Bentzen, Myers, & Mitchell, 1995), or the activated audio beacon that uses cell phone technology (Landau, Wiener, Naghshineh, & Giusti, 2005). The navigation systems, which use personal guidance or global positioning systems (GPS), are based on satellite communication and include VoiceNote GPS, Trekker, Wayfinder Access, and others (Golledge, Klatzky, & Loomis, 1996; Loomis, Golledge, Klatzky, & Marston, 2007).
The inventory of O&M electronic aids encompasses more than 150 systems, products, and devices (Roentgen, Gelderblom, Soede, & de Witte, 2008). However, there are a number of limitations in the use of these preplanning and in situ aids. For example, the limited dimensions of tactile maps and models may result in poor resolution of the spatial information provided (e.g., lack of precise topographical features or of accurate dimensions and locations for structural objects). There also are difficulties in producing them and in acquiring updated spatial information, and they rarely are available. As a result of these limitations, people who are blind are less likely to use preplanning aids in everyday life. The major limitation of the in situ aids is that users must gather the spatial information in the explored space itself, making it impossible to build the cognitive map in advance and creating a feeling of insecurity and dependence upon arrival at a new space. Furthermore, the embedded information devices require special installation in the real space. With regard to safety and isolation, the in situ aids are based mostly on auditory feedback, which in real space can reduce users' attention and isolate them from the surrounding space, especially from auditory information such as cars, auditory landmarks, or personal interactions.
The use of virtual reality in domains such as simulation-based training, gaming, and the entertainment industry has been on the rise in recent years. In particular, this technology is used in learning and rehabilitation environments for people with physical, mental, and learning disabilities (Schultheis & Rizzo, 2001; Standen, Brown, & Cromby, 2001). The word "haptic" derives from the Greek haptikos, "able to touch." In this paper we use haptic to describe touch-mediated manual interactions with real or virtual environments, such as exploration for extraction of information about the environment or manipulation for modifying the environment (Srinivasan & Basdogan, 1997). Research on the implementation of haptic technologies within VEs (Basdogan & Srinivasan, 2002; Biggs & Srinivasan, 2002; Salisbury & Srinivasan, 1997) and their potential for supporting rehabilitation training has been reported for sighted people (Giess, Evers, & Meinzer, 1998; Gorman, Lieser, Murray, Haluck, & Krummel, 1998). Technological advances, particularly in haptic interface technology, enable blind individuals to expand their spatial knowledge by using artificially made reality maps through haptic and audio feedback (Parente & Bishop, 2003) and construction of cognitive maps (Lahav & Mioduser, 2004; Semwal & Evans-Kamp, 2000; Simonnet, Guinard, & Tisseau, 2006).
The study that is described in this paper is part of a larger research effort that included design and development of the BlindAid system and a usability study of the system components. In the present study we report on the contribution of the BlindAid system to users who are blind in exploring virtual maps in order to familiarize themselves with new real spaces. The main research questions of this study were:

1. Which exploration processes do people who are blind use for exploring an unknown space in a VE?

2. Which structural components and relationships are included in the cognitive map constructed by people who are blind who explored the unknown space in the VE?

3. How does the cognitive map help people who are blind who explored the VE perform the orientation tasks in the real space?

In the next section, we describe the BlindAid system that was developed especially for this research. We then present the general research method and the research results, and conclude with a discussion of the merits of using the BlindAid system.
The BlindAid System

The BlindAid system, shown in Figure 1, was designed through active collaboration among engineers and learning scientists at the Massachusetts Institute of Technology (MIT) Touch Lab, an expert on three-dimensional (3D) audio in VEs, and an O&M instructor from the Carroll Center for the Blind in Newton, Massachusetts. The system provides virtual maps for people who are blind and consists of application software running on a personal computer equipped with a haptic device and stereo headphones. The haptic device, a Desktop Phantom (SensAble Technologies), allows users who are blind to interact manually with the VE. It has two primary functions: (1) it controls the position of the user avatar within the VE; and (2) it provides haptic feedback and cues about the space through the tip of the Phantom stylus, similar to those generated by the tip of a long cane (e.g., stiffness and texture). For example, the indoor real space includes different ground textures (tile floor, marble floor, rubber floor, wood floor, or other), and each floor has a different degree of stiffness and texture feedback (hard, bouncy, smooth, or rough). The BlindAid VEs simulate these different ground textures. When users interact virtually with a rubber floor, for example, the tip of the Phantom produces a sense of bounce and bumpiness. Interacting with a tile or marble floor generates a different sensation. The headphones (Sennheiser HD580) present sounds to the users as if they were standing in the VE.
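The paper does not specify the numeric force parameters behind these floor sensations; the following Python sketch illustrates one plausible way such a material-to-feedback lookup could be organized. All class names, material names, and parameter values here are invented for illustration, not taken from the BlindAid implementation.

```python
from dataclasses import dataclass

@dataclass
class HapticMaterial:
    """Illustrative force-feedback parameters for one floor type."""
    stiffness: float         # resistance pushing back against the stylus tip
    static_friction: float   # resistance before the tip starts sliding
    dynamic_friction: float  # resistance while the tip is sliding
    bumpiness: float         # amplitude of a periodic texture force

# Hypothetical values; the paper names the materials but not the numbers.
FLOOR_MATERIALS = {
    "tile":   HapticMaterial(stiffness=0.9, static_friction=0.2,
                             dynamic_friction=0.1, bumpiness=0.05),
    "marble": HapticMaterial(stiffness=1.0, static_friction=0.1,
                             dynamic_friction=0.05, bumpiness=0.0),
    "rubber": HapticMaterial(stiffness=0.4, static_friction=0.6,
                             dynamic_friction=0.5, bumpiness=0.3),
    "wood":   HapticMaterial(stiffness=0.8, static_friction=0.3,
                             dynamic_friction=0.2, bumpiness=0.1),
}

def feedback_for(floor: str) -> HapticMaterial:
    """Return the parameter set the haptic loop would render for a floor."""
    return FLOOR_MATERIALS[floor]

if __name__ == "__main__":
    m = feedback_for("rubber")
    print(f"rubber floor: stiffness={m.stiffness}, bumpiness={m.bumpiness}")
```

The point of such a table is that a "bouncy" rubber floor and a "smooth" marble floor differ only in their parameter sets; the same rendering loop produces both sensations.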
The virtual workspace is a rectangular box that corresponds to the usable physical workspace of the Phantom, and the user avatar always is contained within the workspace. There are two methods for moving the virtual workspace within the VE in order to explore beyond the confines of the workspace. The first method involves pressing one of the arrow keys; each arrow key press shifts the workspace half of its width in the given direction. The second method involves only the Phantom: the user presses a button on the stylus, causing the user avatar position to be fixed in the VE. Then, similar to the way in which one repositions a computer mouse upon reaching the edge of the mouse pad, the user, while holding the stylus button, advances the virtual workspace in the opposite direction.
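As a rough sketch of the two panning methods described above (the half-width step and the opposite-direction clutching come from the text; the 2D coordinates, class, and method names are assumptions made for illustration):

```python
class VirtualWorkspace:
    """Rectangular window onto the VE that the Phantom can physically reach."""

    def __init__(self, x, y, width, height):
        self.x, self.y = x, y                 # workspace origin in VE coordinates
        self.width, self.height = width, height

    def pan_arrow_key(self, direction):
        """Method 1: each arrow-key press shifts the workspace half of its
        extent in the given direction (the paper states half its width)."""
        self.x += {"left": -0.5, "right": 0.5}.get(direction, 0.0) * self.width
        self.y += {"down": -0.5, "up": 0.5}.get(direction, 0.0) * self.height

    def pan_stylus_clutch(self, stylus_dx, stylus_dy):
        """Method 2: with the stylus button held, the avatar stays fixed in
        the VE, so the workspace must slide opposite to the stylus motion,
        like lifting and repositioning a mouse at the edge of the pad."""
        self.x -= stylus_dx
        self.y -= stylus_dy

if __name__ == "__main__":
    ws = VirtualWorkspace(0.0, 0.0, 2.0, 1.5)
    ws.pan_arrow_key("right")        # origin moves +1.0 (half the width)
    ws.pan_stylus_clutch(0.3, 0.0)   # stylus moved right; workspace slides left
    print(ws.x, ws.y)                # 0.7 0.0
```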
Six action commands on the computer's numeric keypad permit the user to control other aspects of the system while interacting with the VE. The commands include restart, pause, start, install and recall landmark, additional audio information, and zoom in and zoom out. For example, zoom in and zoom out allow the user to change the horizontal scale of the virtual workspace within the VE to display more or less detail. In addition, every object within the VE is assigned a maximum zoom-out level. This allows the developer to control the level of detail that is accessible to the user at the different zoom levels. For example, one of the zoom-in levels allows the user to explore the environment's structure without the objects in it. Note that the haptic and audio feedback do not change with the zoom level and are consistent for all the VEs in our tests.
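A minimal sketch of the per-object zoom gating described above (the gating rule matches the description; the object names and numeric levels are illustrative only, not BlindAid's actual values):

```python
from typing import List, NamedTuple

class VEObject(NamedTuple):
    name: str
    max_zoom_out: int  # highest zoom-out level at which the object still renders

def visible_objects(objects: List[VEObject], zoom_out_level: int) -> List[VEObject]:
    """Return only the objects a user should feel and hear at this zoom level.

    Walls can carry a high max_zoom_out so the building structure is always
    present, while small furniture disappears as the user zooms out.
    """
    return [obj for obj in objects if zoom_out_level <= obj.max_zoom_out]

scene = [
    VEObject("wall", max_zoom_out=5),           # structure survives every level
    VEObject("water bubbler", max_zoom_out=1),  # detail only when zoomed in
]
print([o.name for o in visible_objects(scene, zoom_out_level=3)])  # ['wall']
```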
In addition to the user mode described above, the system also has edit and evaluation modes. At this stage of research and development, a semi-automated editor can read an electronic blueprint file (DXF) to import the walls of a building into a new VE, and by manual editing we can add other types of objects and define the audio and haptic feedback. In the future, this manual editing process will be replaced by an automatic editor. Furthermore, all the BlindAid VEs will be available and accessible via the Internet, much like the visual maps that are accessible to sighted people. The evaluation mode allows researchers to record and review the avatar's position and orientation within the VE during an experiment session. The computer records the user behavior in a text file, and these data can be viewed directly or replayed by the system like a video recording. As shown in Figure 2, the central display demonstrates the user's path (the black dots interconnected by lines). The big black dot represents the user's avatar, and gray lines represent private and public doors. The gray area represents a rubber floor texture, and the three white rectangular areas represent a marble floor with a smooth texture. The nine rectangular shapes represent objects in the environment (such as a public phone, water bubbler, sculpture, or soda machine). The upper right keyboard shows the user's execution of command actions. The researchers also can listen to the sounds during playback. Further technical details about the system are presented in an earlier paper (Schloerb, Lahav, Desloge, & Srinivasan, 2010).
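The actual BlindAid log format is not given in the paper; a minimal logger and replay reader along the lines described above might look like the following sketch. The record layout, class, and function names are assumptions for illustration.

```python
import time

class SessionLogger:
    """Append time-stamped avatar state to a plain-text log, one event per line."""

    def __init__(self, path):
        self.path = path
        self.t0 = time.monotonic()

    def log_pose(self, x, y, heading):
        self._write(f"POSE {x:.3f} {y:.3f} {heading:.1f}")

    def log_command(self, name):  # e.g. "ZOOM_IN", "PAUSE", "RESTART"
        self._write(f"CMD {name}")

    def _write(self, payload):
        t = time.monotonic() - self.t0
        with open(self.path, "a") as f:
            f.write(f"{t:.2f} {payload}\n")

def replay(path):
    """Yield (timestamp, fields) tuples so a viewer can redraw the session
    in order, like a video recording."""
    with open(path) as f:
        for line in f:
            t, *fields = line.split()
            yield float(t), fields
```

Because the log is plain text, the same file supports both uses the paper mentions: direct inspection of the data and time-ordered playback.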
Method

Participants

The research included four participants who were selected on the basis of seven criteria: totally blind (no visual ability), at least 21 years old, not multihandicapped, trained in O&M, English speaking, having onset of blindness at least two years prior to the experimental period, and comfortable with the use of computers. One participant was congenitally blind and three were adventitiously blind; one was female and three were male; ages ranged from 41 to 53 years; one was a guide dog user and three were long cane users. To evaluate the participants' initial O&M skills, each was asked to complete a questionnaire on O&M issues. The results showed no differences in initial O&M ability among participants. Each participant reported previous experience with computer applications but no previous experience with VEs or the Phantom device other than their experience with it during our previous research (the usability study). All the participants took part in our previous research, which lasted two to three meetings (three hours total). During this study, the participants interacted with 13 VEs and evaluated varied audio feedback (mono, stereo, and stereo with rotation), haptic parameters (stiffness, damping, static/dynamic friction, smooth and nonsmooth texture), navigation tools, and command actions. This usability study included VEs that were not associated with real spaces. All the participants arrived at the experiment room with the help of the researcher, who met them at a public transportation station or on the street when they were dropped off by a taxi. All four participants took part in all the experiments.
Variables

Three groups of dependent variables were defined: the process of the exploration task, construction of a cognitive map, and performance of the orientation tasks. These variables were defined in our previous research (Lahav, 2003; Lahav & Mioduser, 2008a).
Six variables were related to the exploration process:

1. Total duration was the total time spent accomplishing the task.

2. Spatial strategies were alternative strategies used by the participants in their exploration. These included the perimeter strategy (walking along a room's walls), grid strategy (exploring a room's interior by scanning the room), object-to-object strategy (walking from one object to another), exploring-object-area strategy (walking around an object and exploring the space around it), and random strategy (walking without pattern).

3. Systematic exploration was exploring the new environment in a planned, methodical pattern in order to acquire spatial information. This included systematic (exploring the space in a planned, methodical pattern), systematic most of the time (using a systematic pattern most of the time during the exploration process), restless but systematic (wandering around in a space in a systematic pattern), systematic some of the time (using a systematic pattern a few times during the exploration process), and unsystematic (random wandering around in a space).

4. Stops were the number of pauses made by the participant during the exploration. Two values were defined: short pauses (4–10 s) introduced for the user's technical purposes (e.g., changing the grasp position on the Phantom), and long pauses (more than 10 s) apparently used for cognitive processing (e.g., memorization or planning).

5. Command action was the use of the computer's numeric keypad to control spatial information aids while interacting with the VE.

6. Frequency of command action usage.

Figure 1. The BlindAid system.
The construction of a cognitive map included eight variables related to the cognitive map components and the cognitive map construction process. Four variables were related to the cognitive map components: (1) structural components, (2) structural component location, (3) objects within the room, and (4) object location. Four other variables were related to the cognitive map construction process:

1. Spatial strategy was the strategy used for describing the space: perimeter, object-to-object, items-list, or starting-point-perspective descriptions.

2. Spatial model was the model used for describing the space: a route model, in which the environment is described in terms of a series of displacements in space; a map model (a holistic overall description of the space); or an integrated representation of route and map models.

3. Chronology of the descriptive process.

4. Spatial relationships were verbal descriptions that related an object to a structure or to another object by distances or directions.
The third group of variables examined the participants' performance on the orientation tasks in the real space. There were four of these variables:

1. Successful completion was the participant's ability to find the task's target in the real space: good navigation, arrived at the target's zone, arrived at the target's zone with verbal assistance, or failed.

2. Type of path was the path that the participant chose to take: direct, direct with limited walking around, indirect, or wandering around.

3. Spatial strategies were alternative strategies used by the participants in their navigation: perimeter, grid, object-to-object, exploring object area, or other strategies.

4. Total duration was the total time spent accomplishing the task.
Instrumentation

The research included eight tools: three for the implementation of the studies and five for the collection of the data.

Simulated environments. Four VEs were designed; they were based on actual spaces on the MIT campus. They ranged from a simple space to a complex space. The complexity of the environment was defined by environment size, environment shape, and number of components (Martinsen, Tellevik, Elmerskog, & Storliløkken, 2007).
Figure 2. Evaluation display.
e rst  (1) included a lobby area in a square shape
(202 square meters). e second  ( 2) was a new space
in an L shape with some structural similarity to 1 (291
square meters). e third environment (3) was a new
space with a lobby area that was similar to 1, a long
corridor with several private doors, a conference room,
and more than 25 objects (352 square meters). e last
 (4) was a complex area that included 3 com-
ponents and a new space with a few long corridors, a
second conference room, a second lobby, and more than
50 new objects (663 square meters). (See Figure 3.) We
chose this simple-to-complex space approach to allow
users to learn how to explore the  gradually by using
the BlindAid system and to examine their behavior in
dierent complex spaces. Because of safety and O&M
issues, these simulated environments were chosen in
conjunction with an O&M rehabilitation specialist.
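For reference, the four environments' complexity parameters reported above can be recast as a small data structure; this is simply the paper's description restated as code, with approximate shape labels.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentSpec:
    """Complexity parameters used to order the VEs from simple to complex:
    size, shape, and number of components (Martinsen et al., 2007)."""
    name: str
    area_m2: int
    shape: str
    components: str  # coarse component description as reported in the text

ENVIRONMENTS = [
    EnvironmentSpec("VE1", 202, "square lobby", "lobby area only"),
    EnvironmentSpec("VE2", 291, "L shape", "structurally similar to VE1"),
    EnvironmentSpec("VE3", 352, "lobby, long corridor, conference room",
                    "more than 25 objects"),
    EnvironmentSpec("VE4", 663, "T-shaped corridors with two lobbies",
                    "VE3 components plus more than 50 new objects"),
]

for env in ENVIRONMENTS:
    print(f"{env.name}: {env.area_m2} m^2, {env.shape}, {env.components}")
```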
Exploration task. Each participant was asked to explore each VE freely, within a time limit. An O&M rehabilitation specialist defined the exploration time limit for each VE. Before the exploration task, the researchers informed the participants that they would be asked to describe the space and its components at the end of their exploration.

Orientation task. Each participant was asked to perform five orientation tasks in the real target space: a target-object task, reverse to the task's starting point, a perspective-taking task, reverse to the task's starting point, and a point-of-location task from the same starting point that was used in the VE. Each simulated environment had five orientation tasks that were unique to it. These orientation tasks were designed with an O&M rehabilitation specialist.
In addition to the preceding three implementation tools, a set of four tools was developed for the collection of quantitative and qualitative data:

1. O&M questionnaire. The aim of this questionnaire was to evaluate the participants' O&M ability and to find differences and similarities in their O&M experience and abilities. The questionnaire had 50 questions about the participant's O&M ability indoors and outdoors as well as in known and unknown environments. Some of the questions were adapted from O&M rehabilitation evaluation instruments for use in this research (Dodson-Burk & Hill, 1989; Lahav, 2003; Sonn, Tornquist, & Svensson, 1999).

2. Observations. The participants were video recorded during their exploration and orientation tasks. These video recordings were transcribed.

3. Open interview. After completing the exploration task of each VE, the participants were asked to describe the space verbally. This open interview was video recorded and transcribed.

4. Computer log. The computer data enabled the researchers to track the user's exploration activities in the VE in two ways: as a text file and as a video recording file. The integration of both sets of data supplied information about the user's exploration strategies, spatial problem-solving abilities, the distances traversed, and path duration (see Figure 2 and the sketch below).
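For example, the stop counts defined under Variables (short pauses of 4–10 s, long pauses of more than 10 s) could be recovered from logged avatar positions roughly as follows. This is a sketch, not the study's actual analysis code; the thresholds come from the text, while the sampling format is assumed to match the hypothetical logger shown earlier.

```python
def classify_stops(poses, eps=0.01):
    """Count short (4-10 s) and long (>10 s) pauses in a sequence of
    (timestamp, x, y) samples. A pause is a run of consecutive samples in
    which the avatar moves less than `eps` on both axes."""
    short, long_ = 0, 0
    run_start = None
    prev = None
    for t, x, y in poses:
        still = (prev is not None
                 and abs(x - prev[1]) < eps and abs(y - prev[2]) < eps)
        if still:
            if run_start is None:
                run_start = prev[0]      # stillness began at the previous sample
        elif run_start is not None:      # movement resumed: close the pause
            dur = prev[0] - run_start
            if dur > 10:
                long_ += 1
            elif dur >= 4:
                short += 1
            run_start = None
        prev = (t, x, y)
    if run_start is not None:            # close a pause that runs to the end
        dur = prev[0] - run_start
        if dur > 10:
            long_ += 1
        elif dur >= 4:
            short += 1
    return short, long_

# One sample per second: still for 12 s, then moving -> one long pause.
samples = [(i, 0.0, 0.0) for i in range(13)] + [(13, 1.0, 0.0), (14, 2.0, 0.0)]
print(classify_stops(samples))  # (0, 1)
```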
Procedure

This study was part of a larger research effort. The first study included usability experiments. During that first study, the participants learned how to operate the BlindAid system and how to gather spatial information by using it. All participants worked and were observed individually. In the first session, two consent forms were obtained (authorization of participation in this study and photography or videotaping of the experimental setup), followed by an O&M questionnaire. All participants then had four sessions (Sessions 2 to 5); each session focused on one of the simulated environments, starting with VE1 and finishing with VE4. In the second session, before exploring VE1, each participant learned how to move the virtual workspace by using the arrow keys. After a short learning period (2–3 min) the participants started to explore VE1. In the third session, before exploring VE2, the participants learned about the meaning of the zoom-in and zoom-out layers and how to operate them. In the fourth session, before exploring VE3, two randomly chosen participants learned how to move the virtual workspace by using the Phantom. In the fifth session, they explored VE4, and each participant continued to use his or her previous method to move the virtual workspace. Every session included a VE exploration task, followed by a verbal description, followed by arrival for the first time at the target real space to perform five orientation tasks. Each session lasted about 1.5–2 hr and was video recorded. Processing and analysis of the collected data followed these sessions. During this stage, the researcher
and the O&M rehabilitation specialist evaluated the exploration and orientation task performance of each participant in each VE separately, using the video recordings, the transcriptions, and the computer log data. The video recordings were stored on external HD media locked in a cabinet in the research lab.
Data Analysis

To evaluate the participants' O&M exploration and performance we used the coding scheme instruments from our previous research (Lahav, 2003; Lahav & Mioduser, 2008b). These coding scheme instruments were defined by three O&M rehabilitation specialists who had been working in a rehabilitation center for people who are blind for more than 15 years. They took part in the design and construction of each coding scheme based on the observation of video data and computer logs; the identification and classification of exploration strategies; the consolidation of evaluation instruments based on the previous analyses and on the O&M literature (e.g., Jacobson, 1993; Jacobson, Kitchin, Garling, Golledge, & Blades, 1998; Hill et al., 1993); and the implementation of the instruments for analyzing the participants' O&M exploration, performance, and acquaintance with the new space. Before analyzing the data from this research, an O&M rehabilitation specialist evaluated a few observations (video, including the transcription and computer logs) using the previous research coding scheme to ensure that all the variables were included and defined. In addition, the O&M rehabilitation specialist took part in the evaluation process, specifically in the identification and classification of exploration tasks and in the participants' orientation task performance in the real space.
Results

Research Question 1: Which exploration processes do people who are blind use for exploring an unknown space in a VE?

Six aspects are of interest regarding the exploration processes used by the four participants: total duration, spatial strategies, systematic exploration, number and kinds of pauses made while examining the new space, command actions, and frequency of command action usage. The results show the unique exploration of each participant and his or her spatial behavior diversity during the exploration tasks. For example, Participant 1 explored the VEs differently than did Participant 4, and eventually those differences influenced the cognitive map and behavior in the real tasks. These differences will be discussed later in this article. The results of the four exploration tasks (VE1–VE4) for each participant are shown in Table 1.

Figure 3. Fourth virtual environment.
Note: The environment is a T-shaped corridor, with each side of the long corridor ending in a rectangle that is used as a lobby. Each lobby contains three elevators, a set of bathrooms (women's and men's), a water bubbler, and a set of fire stairs. Four private doors are located in the north wall of the long corridor, and there are another five private doors and one public door in the opposite wall. The public door is the entrance to the conference room, which is a rectangle with a window wall opposite the entrance door. This conference room includes a long table with 14 chairs around it, a podium, two trash cans, a table, a magazine rack, and a coat rack. Two private doors are located in the right wall of the corridor, and one private door and two public doors of the second conference room are located in the left wall. The second conference room is also a rectangle. It includes three tables, two trash cans, a podium, a magazine rack, a coat rack, and four rows of eight chairs each (32 chairs in total).
The average exploration time increased in relation to the complexity of the explored VE (shape, size, and number of components): in VE1 the average was 28:51 min, and in VE4 it was 60:33 min. In addition, differences were found among the participants. For example, the time Participant 1 or Participant 3 took to explore the VE was, in most of the tasks, twice that of Participant 4. In all exploration tasks, 81.25% of the participants performed the perimeter strategy as a first spatial strategy to explore the VE, the object-to-object strategy as a second strategy (73%), and the grid strategy as a third strategy (75%). Participants 3 and 4 excelled in their systematic exploration during all the VE exploration tasks. For example, during his perimeter strategy exploration, Participant 3 started to name the objects aloud and to relate them to each other: "Between Table 4 and Locker 1 there is a doorway; on the right of the doorway…; there is a magazine rack on the left [of] Locker 2." During his grid strategy exploration, he tried to discover new areas: "Let's see what is across from there, second table, two…where Table 1 is." Participant 2 improved his systematic exploration during the exploration tasks and became more systematic during the exploration of VE4. At the opposite end of the range, Participant 1 had difficulty keeping to a systematic exploration method during the entire range of exploration tasks. During the analysis stage of the exploration process, we noticed that Participant 4 mostly explored the right side of his path and covered more hallways and fewer details (such as obstacles in his way). One reason might be that he is a guide dog user and applied his real-space exploration strategies in the VE (in the real space only his right hand is free to explore) for use later in the orientation tasks in the real space.
As part of the exploration process we examined the number and kinds of pauses participants made while examining the new space. We divided the pauses into two groups: short pauses (4–10 s) introduced for technical purposes, and long pauses (more than 10 s) used for cognitive processing. The results show that the use of short and long pauses increased in relation to the type of VE that was explored. In VE1, the participants used an average of 18 short pauses and eight long pauses. The number of pauses increased in VE4, where the participants used an average of 41 short pauses and 12 long pauses. Long pauses of 1:30 to 3 min each were observed in VE3 and VE4. Participants 1 and 3 used many pauses (short and long), and they used them in different ways. Participant 1 used the pauses for resting and memorizing the objects. Participant 3 used the pauses to plan his next path based on where he wanted to go and what he wanted to discover or to check.
During the exploration of the VEs the participants used command actions that allowed them to get more spatial information and to navigate in the VE. The most used tool was Additional Audio Information. In VE1, the frequency of accessing this tool averaged 28.5, and in VE4 the average was 90.5. We also examined the length of time that this tool was used. We found that 13% to 24% of the participants' total exploration duration was spent with the Additional Audio Information tool. In the second VE, three of the four participants used this tool for 26% to 34% of the total duration. Accessing and time of use of this tool increased in relation to the type of VE.

During the exploration tasks, the participants could ask the VE system to send them to the starting point automatically. The use of this tool increased as the VE became larger and more complex; it was used an average of 0.5 times in VE1 and 4.5 times in VE4. A similar increase occurred with the Pause tool. During the first two VEs no participant used the Pause tool, but in VE4 three of the participants used it during their exploration. A fourth tool allowed the participants to Install and Recall Landmarks (PL) or to Recall Researcher Landmarks (RL) that were installed in advance. These tools were used mostly in VE1, and their use decreased in relation to the type of VE.

All the participants used the Zoom In and Zoom Out command actions. The number of uses of these tools was consistent across all the VEs, although the participants used the Zoom Out command for a longer period of time. Each participant used the Zoom In or Zoom Out commands at different points during their exploration.
Table 1. Participants' Performance in the Four VE Exploration Tasks

| Participant | VE | Time | Spatial Strategy | Systematic Exploration | Stops: Short | Stops: Long | Move to Start | Additional Audio (uses/time) | Zoom In (uses/time) | Zoom Out (uses/time) | Landmarks (Install/PL/RL) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 49:13 | 1-Perimeter; 2-EOS; 3-Grid | Systematic restless | 18 | 26 | 2 | 49 / 06:24 | | | 6 2 4 |
| 1 | 2 | 47:34 | 1-Obj-obj; 2-Perimeter; 3-Random | Some of the time | 32 | 11 | 4 | 62 / 09:02 | 1 / 00:07 | 1 / 14:00 | 10 12 1 |
| 1 | 3 | 64:09 | 1-Perimeter; 2-Obj-obj; 3-Grid | Systematic restless | 32 | 11 | 1 | 98 / 11:38 | | | 1 1 3 |
| 1 | 4 | 1:11:22 | 1-Obj-obj; 2-Perimeter; 3-Random | Some of the time | 62 | 16 | 4 | 66 / 09:37 | 3 / 02:50 | 2 / 07:00 | 7 |
| 2 | 1 | 20:19 | 1-Perimeter | Most of the time | 2 | 1 | 3 | / 07:10 | | | 5 1 4 4 |
| 2 | 2 | 30:09 | 1-EOS; 2-Perimeter; 3-Obj-obj | Systematic restless | 14 | 5 | 1 | 14 / 08:49 | 1 / 17:22 | 1 / 04:07 | |
| 2 | 3 | 37:44 | 1-Perimeter; 2-Obj-obj; 3-Grid | Systematic restless | 17 | 5 | 7 | 29 / 01:47 | | | |
| 2 | 4 | 35:39 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 4 | 2 | 6 | 94 / 06:13 | 2 / 05:45 | 3 / 12:00 | |
| 3 | 1 | 32:07 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 44 | 4 | | 38 / 06:12 | | | 3 2 6 |
| 3 | 2 | 55:28 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 65 | 4 | 1 | 79 / 14:37 | 1 / 06:07 | | 1 1 |
| 3 | 3 | 35:09 | 1-Perimeter; 2-Obj-obj | Excellent | 27 | 5 | 1 | 34 / 03:35 | 2 / 06:35 | 1 / 01:15 | 1 5 |
| 3 | 4 | 1:32:29 | 1-Perimeter; 2-Obj-obj | Most of the time | 95 | 25 | 7 | 86 / 36:02 | | | |
| 4 | 1 | 13:45 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 6 | 3 | | 14 / 02:47 | | | 1 6 1 |
| 4 | 2 | 23:02 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 11 | 1 | | 45 / 07:56 | 1 / 01:05 | 2 / 07:00 | 1 |
| 4 | 3 | 27:07 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 7 | 4 | 2 | 43 / 04:15 | 1 / 00:05 | 4 / 03:56 | 1 |
| 4 | 4 | 42:42 | 1-Perimeter; 2-Obj-obj | Excellent | 5 | 5 | 1 | 116 / 07:28 | | | |

Note. EOS = exploring-object-area strategy; Obj-obj = object-to-object strategy; PL = participant-installed landmarks; RL = researcher landmarks.
For example, Participant 1 expressed his preference by saying, "My preferences will be to try it normally first and then if things start to be too cluttered I will take the layers to see." Participant 3 decided to start his exploration of VE3 without objects (Zoom In). He also used the Additional Audio Information tool. After three minutes he used the Zoom Out and explored the MIT campus for another 2:37 min; he then returned to the starting point and explored the environment without objects (Zoom In) for another 4:20 min. Afterward, he continued to explore the environment with all the objects in it until he finished his task.

Participants 2 and 4 learned how to use the Phantom and used it to explore VE3 and VE4. All the participants used the arrow keys or the Phantom when needed to extend their map.
Research Question 2: Which structural components and relationships are included in the cognitive map constructed by people who are blind who explored the unknown space in the VE?

After exploring each VE, the participants described the environment verbally. These results expressed the participants' ability to present verbally the cognitive map that they built as a result of their exploration. The results presented in Table 2 show that, in all four VEs, the structure components were described better than the objects on all three variables (component name, location, and related location). For example, in VE4 the average for structure components named was 60%, compared to 26% for objects named; structure location averaged 42% compared to 15%; and structure location related to other components averaged 38% compared to 19%. The amount of spatial information about the environment on all variables (structure and objects) decreased in relation to the VE type. For example, in VE1 each participant mentioned all the structure components (100%) in his or her verbal description, and in VE4 the average was 60%. For describing the spaces, 63% of the participants used the perimeter strategy and only 31% used the object-to-object strategy. For the spatial model, 69% of the participants used the route model. In both variables (spatial strategy and spatial model), 75% of the participants were systematic in all four VEs. Like the differences that were found among the participants' exploration behaviors, differences also emerged in the ability to describe the cognitive map. For example, in VE1 Participant 1 rotated her environment's components (structure and objects) by 180 degrees; in VE3 she placed the environment's objects in the opposite direction from their real location (e.g., objects on the right were placed on the left side). Participant 3 showed a high ability to describe the four environments, Participant 4 was second, and Participant 1 showed a lower ability to describe them.
Research Question 3: How does the cognitive map help the blind person who explored the VE to perform the orientation tasks in the real space?

After the construction of the cognitive map, the participants were asked to perform five orientation tasks in the real space. It should be recalled that the participants entered the real space for the first time to perform these tasks, and they were not given the option to explore the rooms first. Four variables were examined: successful completion of the tasks, type of path, spatial strategy, and time spent on task (see Table 3). Most of the participants performed the target-object tasks and perspective-taking tasks successfully by choosing a direct path to the target. Moreover, the reverse tasks were performed successfully in a shorter time and on a more direct path than the original tasks themselves. Most of the participants used the object-to-object strategy to perform their tasks in the real space.

Three participants performed their orientation tasks by using their echolocation ability. They transferred the spatial information that was collected via the VE by haptic and audio feedback and applied it in the real space as auditory travelers. For example, during the exploration of VE3, Participant 3 arrived at a recess area that was located in a corridor in front of the Research Laboratory of Electronics (RLE) main door. In the target-object task he was asked to find the bulletin board (this object was located to the right of the RLE main door). During his first time in the real space he walked into the corridor, got the echolocation information about the recess area to his right (without checking it with his long cane), turned left to the RLE main door, and went from there to the bulletin board. In the point-on-the-location tasks the participants succeeded in performing an average of 65%–88% in the first three spaces. These tasks included only indoor objects. The fourth space included outdoor objects (buildings and streets), and only 31% succeeded in performing this task.
As in the previous results, differences were found among the participants' performances. Participant 1 succeeded in performing only 5 out of 16 tasks. He used the perimeter strategy and an indirect path to find his way in the real space. In the point-on-the-location task he succeeded in only 36% of the cases. This participant had difficulty adjusting his exploration skills and had difficulty transferring and applying them in the real space. The other three participants used mainly the object-to-object strategy and a direct path to find their way to the target object. Participants 1 and 2 walked mostly with their long cane in the middle of the corridor, using their echolocation traveler techniques. They heard the echo from the walls and the recess area and did not bump their long cane against the walls. They transferred their spatial information from the VE, which was based mostly on tactile traveler techniques and audio landmarks (e.g., the recess area).

At the starting point of each perspective-taking task, Participant 4 chose to discover and search for known landmarks before he started the task. This participant used a guide dog in the real space tasks and, as a result, he walked very quickly in the real space, passed landmarks he was aware of, and needed to recalculate his distance and time. After his first task performance in the real space he said, "When I arrived I had the general understanding what I will find there, I had three elevators…. What I don't get is the sense of size. I had in my mind that this space will be much bigger than what it was."
Discussion

The research reported here was an effort to assess the contribution of VEs as a secondary orientation aid that would allow people who are blind to learn about unknown spaces in advance and to apply this spatial knowledge in the real space. The results helped us elucidate three issues concerning the contribution of the BlindAid system to the exploration and learning process of unknown spaces by people who are blind.
Table 2. The Average Performance of the Cognitive Map Construction

| VE | Structure: Components | Structure: Location | Structure: Location Related | Objects: Components | Objects: Location | Objects: Location Related | Spatial Strategy | Spatial Model |
|---|---|---|---|---|---|---|---|---|
| 1 | 100% | 75% | 63% | 53% | 21% | 21% | Perimeter (n=2); Obj-Obj (n=2) | Route (n=2); Map (n=2) |
| 2 | 57% | 50% | 43% | 53% | 40% | 30% | Perimeter (n=3); Item list (n=1) | Route (n=3) |
| 3 | 80% | 57% | 52% | 50% | 22% | 19% | Perimeter (n=3); Obj-Obj (n=1) | Route (n=3); Map (n=1) |
| 4 | 60% | 42% | 38% | 26% | 15% | 19% | Perimeter (n=2); Obj-Obj (n=2) | Route (n=2); Map (n=2) |

Note. Percentages are averages across the four participants for the verbal description of structural components and of objects; "Location Related" = location related to other components.
Spatial Behavior

As found in our previous research (Lahav & Mioduser, 2004), the exploration in the VEs gave participants a stimulating, comprehensive, and thorough acquaintance with the target space.
Table 3. The Average Performance of the Orientation Tasks in the Real Space

| Real Space | Task | Successful | Direct Path | Strategy | Time |
|---|---|---|---|---|---|
| 1 | Target-object | 50% | 50% | Perimeter (n=2); Obj-Obj (n=1); Other (n=1) | 1:07 |
| | reverse | 100% | 100% | Perimeter (n=2); Obj-Obj (n=1); Other (n=1) | 0:17 |
| | Perspective-taking | 75% | 75% | Perimeter (n=1); Obj-Obj (n=3) | 0:41 |
| | reverse | 75% | 75% | Perimeter (n=1); Obj-Obj (n=3) | 0:36 |
| | Point-on-the-location | 65% | | | |
| 2 | Target-object | 75% | 75% | Perimeter (n=1); Obj-Obj (n=2) | 0:43 |
| | reverse | 75% | 75% | Obj-Obj (n=3) | 0:28 |
| | Perspective-taking | 75% | 75% | Obj-Obj (n=3) | 1:27 |
| | reverse | 75% | 75% | Obj-Obj (n=3) | 0:52 |
| | Point-on-the-location | 88% | | | |
| 3 | Target-object | 100% | 100% | Perimeter (n=1); Obj-Obj (n=3) | 1:11 |
| | reverse | 100% | 100% | Perimeter (n=1); Obj-Obj (n=3) | 0:31 |
| | Perspective-taking | 75% | 50% | Perimeter (n=1); Obj-Obj (n=3) | 1:58 |
| | reverse | 100% | 100% | Perimeter (n=1); Obj-Obj (n=3) | 0:35 |
| | Point-on-the-location | 69% | | | |
| 4 | Target-object | 50% | 50% | Perimeter (n=1); Obj-Obj (n=2); Other (n=1) | 2:35 |
| | reverse | 75% | 100% | Obj-Obj (n=4) | 0:44 |
| | Perspective-taking | 50% | 75% | Obj-Obj (n=4) | 4:13 |
| | reverse | 75% | 100% | Obj-Obj (n=4) | 1:11 |
| | Point-on-the-location | 31% | | | |
| Mean | Target-object | 69% | 69% | | |
| | reverse | 88% | 94% | | |
| | Perspective-taking | 69% | 69% | | |
| | reverse | 81% | 88% | | |
| | Point-on-the-location | 63% | | | |

Note. "reverse" = return to the task's starting point.
The high degree of compatibility between the VE components and the real space on the one hand, and the exploring methods supported by the VE on the other, contributed to the users' exploration ability. These features also enabled participants to implement exploration patterns in a systematic method that they commonly used in real spaces, but in a qualitatively different manner. During the orientation tasks in the real space the participants were able to recall their cognitive map through active, problem-solving tasks that might increase their recall ability. They were able to manipulate their spatial knowledge very well, especially in the reverse tasks and the perspective-taking tasks. Most of the participants were able to transfer the tactile information that they collected as tactile travelers in the VEs to auditory landmarks, and they were able to use echolocation landmarks during their walk in the real space. These abilities allowed the participants to recall their spatial information as needed in a flexible way.

The VEs offer new spatial tools that do not exist in real space for people who are blind. These tools enable users to control the level of spatial information. During the exploration tasks the participants were able to use zoom-in and zoom-out levels and to develop new navigational skills. These capabilities can expand spatial information about the surrounding area and can increase people's spatial awareness. Nevertheless, more learning sessions will be needed in order for users to grasp the concept behind the zoom-in and zoom-out method. One of the participants suggested that he would like to have a vertical Zoom In tool. By using a vertical Zoom In tool he could explore the environment on multiple levels, perhaps helping him to ascertain his location in the building, the location of exits, or in which direction the street is located.
Methodology

Exploring a complex environment (Martinsen et al., 2007) raised some methodological issues that need to be considered in future research. In this research, as in our previous research, the exploration task included free exploration. The idea behind this was to support the participant's independent exploration in the VE. Exploring complex environments, however, might cause cognitive overload, and we think that in future research the exploration task needs to include free exploration for a measure of time, with instructed exploration tasks afterward. The instructed exploration task needs to allow the participant to use both route and map spatial models. The inclusion of an instructed exploration task allows the participant to gather broad information about the environment and to focus on the landmark locations that will need to be reached later in the real space.

Exploring a complex environment also revealed the need to develop a new data-collection tool that allows better mirroring of the participant's cognitive map. In previous research we used verbal description and a physical model (Lahav & Mioduser, 2005). As in other research that includes complex environments, the verbal description is dull, and using the physical model is a long process. Using raised-dot drawings or other embossed drawing techniques is constrained by three factors: the long period of time needed to teach and to practice the technique; the long period of time needed to create a tactile image; and, most important, the long period of time needed to learn the concept that stands behind the 3D image. These new data-collection tools will need to be explored and studied in future research.
Future Research and Development of the BlindAid System

Additional research and development efforts will transform this promising technology into a useful learning and rehabilitation tool. The BlindAid system, as an O&M simulation-learning tool, can support a variety of target populations, including people who are newly blind, people who are blind who need to improve their O&M skills, and students in O&M teacher programs. This system allows the instructor or the user to control the size and shape of the environment as well as the level of density, and it further allows control of the levels of spatial complexity. The VE learning simulation could progress from a schematic structure to a detailed structure with objects within it.
The BlindAid system can assist O&M specialists in rehabilitation centers as a simulator with which their clients can interact and be trained as part of their O&M rehabilitation program. The spatial behavior of Participant 1 in this research highlights the need for a mirroring simulation that will allow people who are blind to understand their positive and negative spatial behaviors. Using this simulation tool with O&M specialist interventions in the training process might improve clients' orientation skills and awareness. Moreover, the system can help compensate for the shortage in rehabilitation funding by increasing the number of training hours for each client. This system also can be used as a training simulation for students who are studying to become O&M teachers; they can practice exploration methods as blindfolded users. Furthermore, development will support people who are blind who have already received O&M training in downloading virtual maps via the Internet and exploring real spaces before arrival. This could function like services such as Google Maps or MapQuest, which allow sighted people to explore new spaces in advance.
In addition to serving as a learning and training tool, the BlindAid system can be used as a diagnostic tool. An O&M specialist can predict a participant's spatial behavior in a real space by observing his or her exploration behavior in the VE. This diagnostic tool can be applied during or after an O&M rehabilitation process. It also can be used as an indicator to confirm suitability for a guide dog. The BlindAid haptic stylus simulates the way the user leads the dog in the real space. A guide dog user needs to apply high spatial ability to find secure paths in a real space, and this diagnostic tool allows O&M specialists to track and observe how their clients think and behave during exploration of a new space.

This study's results also have important implications for the continuation of the research and for implementation purposes. In regard to research, further studies should examine how people who are newly blind construct spatial cognitive maps of spaces using the VE during their rehabilitation program; how they use these maps for navigating in the real environments; and, consequently, how the system contributes to their rehabilitation process. Further studies should compare the BlindAid system with other orientation tools such as GPS, tactile maps, or models to understand how this VE system can enhance and improve the orientation ability of people who are blind compared with other technologies. At another level, the development of more comprehensive environment-editing tools for the VE will support the creation of a variety of models of spaces (e.g., public buildings, shopping areas), enabling pre- and postvisit exploration and recall of unknown spaces by people who are blind. These implementations also may serve the research and practitioner community as models for the further development of technology-based tools for supporting learning processes and performance of people with special needs.
References

Bach-y-Rita, P., Tyler, M. E., & Kaczmarek, K. A. (2003). Seeing with the brain. International Journal of Human-Computer Interaction, 15(2), 285–295.

Basdogan, C., & Srinivasan, M. A. (2002). Haptic rendering in virtual environments. In K. M. Stanney (Ed.), Virtual environment handbook. Mahwah, NJ: Erlbaum.

Biggs, S. J., & Srinivasan, M. A. (2002). Haptic interfaces. In K. M. Stanney (Ed.), Virtual environment handbook. Mahwah, NJ: Erlbaum.

Blasch, B. B., Wiener, W. R., & Welsh, R. L. (1997). Foundations of orientation and mobility. New York, NY: American Foundation for the Blind.

Crandall, W., Bentzen, B. L., Myers, L., & Mitchell, P. (1995). Transit accessibility improvement through talking signs remote infrared signage, a demonstration and evaluation. San Francisco, CA: Smith-Kettlewell Eye Research Institute, Rehabilitation Engineering Research Center.

Dodson-Burk, B., & Hill, E. W. (1989). Preschool orientation and mobility screening. A publication of Division IX of the Association for Education and Rehabilitation of the Blind and Visually Impaired. New York, NY: American Foundation for the Blind.

Easton, R. D., & Bentzen, B. L. (1999). The effect of extended acoustic training on spatial updating in adults who are congenitally blind. Journal of Visual Impairment and Blindness, 93(7), 405–415.

Espinosa, M. A., & Ochaita, E. (1998). Using tactile maps to improve the practical spatial knowledge of adults who are blind. Journal of Visual Impairment and Blindness, 92(5), 338–345.

Farmer, L. W., & Smith, D. L. (1997). Adaptive technology. In B. B. Blasch, W. R. Wiener, & R. L. Welsh (Eds.), Foundations of orientation and mobility. New York, NY: American Foundation for the Blind.

GDP Research. (2005). The Miniguide ultrasonic mobility aid. Retrieved from http://www.gdp-research.com.au/minig_1.htm

Giess, C., Evers, H., & Meinzer, H. P. (1998). Haptic volume rendering in different scenarios of surgical planning. Paper presented at the Third Phantom Users Group Workshop, MIT, Cambridge, MA.

Golledge, R. G., Klatzky, R. L., & Loomis, J. M. (1996). Cognitive mapping and wayfinding by adults without vision. In J. Portugali (Ed.), The construction of cognitive maps (pp. 215–246). Netherlands: Kluwer.

Gorman, P. J., Lieser, J. D., Murray, W. B., Haluck, R. S., & Krummel, T. M. (1998). Assessment and validation of a force feedback virtual reality based surgical simulator. Paper presented at the Third Phantom Users Group Workshop, MIT, Cambridge, MA.

Herman, J. F., Herman, T. G., & Chatman, S. P. (1983). Constructing cognitive maps from partial information: A demonstration study with congenitally blind subjects. Journal of Visual Impairment and Blindness, 77(5), 195–198.
Hill, E., Rieser, J., Hill, M. M., Hill, M., Halpin, J., & Halpin, R. (1993). How persons with visual impairments explore novel spaces: Strategies of good and poor performers. Journal of Visual Impairment and Blindness, 87(8), 295–301.
Jacobson, W. H. (1993). The art and science of teaching orientation and mobility to persons with visual impairments. New York, NY: American Foundation for the Blind.
Jacobson, R. D., Kitchin, R., Garling, T., Golledge, R., & Blades, M. (1998). Learning a complex urban route without sight: Comparing naturalistic versus laboratory measures. Paper presented at the International Conference of the Cognitive Science Society of Ireland, University College, Dublin, Ireland.
Kish, D. (1997). When darkness lights the way: How the blind may function as specialists in movement and navigation (Master's thesis). California State University, Los Angeles.
Lahav, O. (2003). Blind persons' cognitive mapping of unknown spaces and acquisition of orientation skills, by using audio and force-feedback virtual environment (Doctoral dissertation). Tel-Aviv University, Israel (Hebrew).
Lahav, O., & Mioduser, D. (2004). Exploration of unknown spaces by people who are blind, using a multisensory virtual environment. Journal of Special Education Technology, 19(3), 15–24.
Lahav, O., & Mioduser, D. (2005). Blind persons' acquisition of spatial cognitive mapping and orientation skills supported by virtual environment. International Journal on Disability and Human Development, 4(3), 231–237.
Lahav, O., & Mioduser, D. (2008a). Construction of cognitive maps of unknown spaces using a multi-sensory virtual environment for people who are blind. Computers in Human Behavior, 24, 1139–1155.
Lahav, O., & Mioduser, D. (2008b). Haptic-feedback support for the cognitive mapping of unknown spaces by people who are blind. International Journal of Human-Computer Studies, 66(1), 23–35.
Landau, S., Wiener, W., Naghshineh, K., & Giusti, E. (2005). Creating accessible science museums with user-activated environmental audio beacons (Ping!). Assistive Technology, 17, 133–143.
Loomis, J. M., Golledge, R. G., Klatzky, R. L., & Marston, J. R. (2007). Assisting wayfinding in visually impaired travelers. In G. L. Allen (Ed.), Applied spatial cognition: From research to cognitive technology (pp. 179–202). Mahwah, NJ: Erlbaum.
Martinsen, H., Tellevik, J. M., Elmerskog, B., & Storliløkken, M. (2007). Mental effort on mobility route learning. Journal of Visual Impairment and Blindness, 101(6), 327–350.
Parente, P., & Bishop, G. (2003). BATS: The Blind Audio Tactile Mapping System. Paper presented at the Association for Computing Machinery Southeast Conference, Savannah, GA.
Rieser, J. J. (1989). Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(6), 1157–1165.
Roentgen, U. R., Gelderblom, G. J., Soede, M., & de Witte, L. P. (2008). Inventory of electronic mobility aids for persons with visual impairments: A literature review. Journal of Visual Impairment and Blindness, 102(11), 702–724.
Salisbury, J. K., & Srinivasan, M. A. (1997). Phantom-based haptic interaction with virtual objects. IEEE Computer Graphics and Applications, 17(5), 6–10.
Sánchez, J., Noriega, G., & Farías, C. (2008). Mental representation of navigation through sound-based virtual environments. Paper presented at the 2008 AERA Annual Meeting, New York, NY.
Schloerb, D. W., Lahav, O., Desloge, J. G., & Srinivasan, M. A. (2010). BlindAid: Virtual environment system for self-reliant trip planning and orientation and mobility training. Paper presented at the IEEE Haptics Symposium, Waltham, MA.
Schultheis, M. T., & Rizzo, A. A. (2001). The application of virtual reality technology for rehabilitation. Rehabilitation Psychology, 46(3), 296–311.
Semwal, S. K., & Evans-Kamp, D. L. (2000). Virtual environments for visually impaired. Paper presented at the 2nd International Conference on Virtual Worlds, Paris, France.
Simonnet, M., Guinard, J.-Y., & Tisseau, J. (2006). Preliminary work for vocal and haptic navigation software for blind sailors. International Journal of Disability and Human Development, 5(2), 61–67.
Sonn, U., Tornquist, K., & Svensson, E. (1999). The ADL taxonomy—From individual categorical data to ordinal categorical data. Scandinavian Journal of Occupational Therapy, 6, 11–20.
Srinivasan, M. A., & Basdogan, C. (1997). Haptics in virtual environments: Taxonomy, research status, and challenges. Computers and Graphics, 21(4), 393–404.
Standen, P. J., Brown, D. J., & Cromby, J. J. (2001). The effective use of virtual environments in the education and rehabilitation of students with intellectual disabilities. British Journal of Educational Technology, 32(3), 289–299.
Takes Corporation. (2007). Owner's manual: Palmsonar PS231-7. Retrieved from http://www.palmsonar.com/231-7/prod.htm
Ungar, S., Blades, M., & Spencer, S. (1996). The construction of cognitive maps by children with visual impairments. In J. Portugali (Ed.), The construction of cognitive maps. The Netherlands: Kluwer Academic.
Warren, D. H., & Strelow, E. R. (1985). Electronic spatial sensing for the blind. Boston, MA: Martinus Nijhoff.
Author Notes
Orly Lahav is a researcher and lecturer in the Technology and Learning Program, Department of Education in Mathematics, Science and Technology, School of Education, at Tel-Aviv University. Orly Lahav was a postdoctoral associate, David W. Schloerb is a research scientist, Siddarth Kumar was a doctoral student, and Mandayam A. Srinivasan is a senior research scientist, all at the Laboratory for Human and Machine Haptics (The Touch Lab), Research Laboratory of Electronics, Massachusetts Institute of Technology.
Correspondence concerning this article should be addressed to
Orly Lahav, School of Education, Tel Aviv University, Tel Aviv,
Israel. Email: lahavo@post.tau.ac.il
This research was supported in part by a grant from the National Institutes of Health–National Eye Institute (Grant No. 5R21EY16601-2), and in part by the European Commission, Marie Curie International Reintegration Grants (Grant No. FP7-PEOPLE-2007-4-3-IRG). We acknowledge Jay Desloge for his discussions with us and his development of the audio system, and the Carroll Center for the Blind, Newton, MA, for its collaboration and support during the design and research of the BlindAid system. We thank the four anonymous participants for their time, efforts, and ideas.
This manuscript was accepted under the previous editorship of J. Emmett Gardner.