Journal of Special Education Technology
JSET 2011 Volume 26, Number 4 41
A Virtual Map to Support People Who Are Blind
in Navigation Through Real Spaces
Orly Lahav
Tel Aviv University
David W. Schloerb, Siddarth Kumar, and Mandayam A. Srinivasan
Massachusetts Institute of Technology
Most of the spatial information needed by sighted people to construct cognitive maps of spaces is gathered
through the visual channel. Unfortunately, people who are blind lack the ability to collect the required spatial
information in advance. The use of virtual reality as a learning and rehabilitation tool for people with
disabilities has been on the rise in recent years. This research is based on the hypothesis that the advance
supply of appropriate spatial information (perceptual and conceptual) through compensatory sensory
channels within a virtual environment may assist people who are blind in their anticipatory exploration and
cognitive mapping of the unknown space. In this long-term research we developed and tested the BlindAid
system, which combines 3D audio with a Phantom® haptic interface to allow the user to explore a virtual map
through a handheld stylus. The main goals of this research were to study the cognitive mapping process of
people who are blind when exploring complex virtual maps and how they apply this spatial knowledge later
in real space. The findings supply strong evidence that interaction with the BlindAid system by people who
are blind provides a robust foundation for the participants' development of comprehensive cognitive maps
of unknown real spaces.
The visual sense plays a primary role in guiding sighted
persons through an unknown environment and helping
them reach a destination safely. Unfortunately, people
who are blind face difficulties in performing such tasks.
For most people who are blind, walking in an unknown
environment can be uncomfortable, even after orientation
and mobility (O&M) rehabilitation training. In
this paper we define the term O&M as "the field dealing
with systematic techniques by which blind persons orient
themselves to their environment and move about independently"
(Blasch, Wiener, & Welsh, 1997). Research
on O&M skills of people who are blind in known and
unknown spaces (Ungar, Blades, & Spencer, 1996) indicates
that support for the acquisition of spatial mapping
and orientation skills should be supplied at two
main levels: perceptual and conceptual.
At the perceptual level, information perceived via other
senses should compensate for the deficiency in the visual
channel. Thus, the touch, audio, and olfactory channels
become powerful suppliers of information about
unknown environments. In addition to regular audio
feedback, the audio channel includes echolocation,
which enables the use of echo sounds to collect surrounding
spatial information (Kish, 1997). At the conceptual
level, the focus is on supporting the development
of appropriate strategies for an efficient mapping of the
space and the generation of navigation paths.
Over the years, O&M aids have been developed as secondary
aids to help people who are blind build cognitive
maps and explore real spaces. Research has been done on
the effectiveness of these aids for people who are blind
and their ability to perform spatial tasks that are similar
to the tasks they perform in this research. These secondary
aids are not a replacement for primary aids such
as the long cane and the guide dog. Farmer and Smith
(1997) describe the long cane as a device used "to provide
detection or preview by extending the tactile sense of the
user." There are two types of O&M aids: preplanning
aids, which provide the user with information before arrival
in an environment, and in situ aids, which provide the user
with information about the environment on site. Preplanning
aids include verbal descriptions of the space, tactile
maps, strip maps, and physical models (Espinosa & Ochaita,
1998; Herman, Herman, & Chatman, 1983; Rieser,
1989; Ungar et al., 1996); sound-based virtual environment
(VE) systems (Sánchez, Noriega, & Farías, 2008);
and digital audio and tactile screens, which allow users
to collect graphic information such as diagrams and maps
(for example, the Nomad device or the Talking Tactile
Tablets). No O&M research has yet been done on
digital audio and tactile screen technology.
In situ aids that have been developed in recent years include
mobility aids such as obstacle detectors. Some of the
obstacle detection devices are based on ultrasonic echolocation,
which is used to sense objects. These include, for
example, Sonicguide (Warren & Strelow, 1985), Kaspa
(Easton & Bentzen, 1999), Miniguide (GDP Research,
2005), and Palmsonar (Takes Corp., 2007). Other systems,
such as the Tactile Vision Substitution System
(TVSS), are based on a black-and-white camera for input
and a small tactile tablet that is placed on the user's
tongue for output (Bach-y-Rita, Tyler, & Kaczmarek,
2003). The orientation aids are based on embedded information
and navigation systems. Environmental adaptation
is needed for the embedded information devices,
such as Talking Signs, which place sensors in the environment
(Crandall, Bentzen, Myers, & Mitchell, 1995),
or the activated audio beacon that uses cell phone technology
(Landau, Wiener, Naghshineh, & Giusti, 2005).
The navigation systems that use personal guidance, or
global positioning, systems (GPS) are based on satellite
communication and include VoiceNote GPS, Trekker,
Wayfinder Access, and others (Golledge, Klatzky, &
Loomis, 1996; Loomis, Golledge, Klatzky, & Marston,
2007).
The inventory of O&M electronic aids encompasses
more than 150 systems, products, and devices (Roentgen,
Gelderblom, Soede, & de Witte, 2008). However, there
are a number of limitations in the use of these preplanning
and in situ aids. For example, the limited dimensions
of tactile maps and models may result in poor
resolution of the spatial information provided (e.g., lack
of precise topographical features or of accurate dimensions
and locations for structural objects). There also are
difficulties in producing them and in acquiring updated
spatial information, and they rarely are available. As a
result of these limitations, people who are blind are less
likely to use preplanning aids in everyday life. The major
limitation of the in situ aids is that users must gather the
spatial information in the explored space itself, making it
impossible to build the cognitive map in advance and creating
a feeling of insecurity and dependence upon arrival
at a new space. Furthermore, the embedded information
devices require special installation in the real space.
From the perspective of safety and isolation, the in situ
aids are based mostly on auditory feedback, which in
real space can reduce users' attention and isolate them
from the surrounding space, especially from auditory information
such as cars, auditory landmarks, or personal
interactions.
The use of virtual reality in domains such as simulation-based
training, gaming, and the entertainment industry
has been on the rise in recent years. In particular, this
technology is used in learning and rehabilitation environments
for people with physical, mental, and learning
disabilities (Schultheis & Rizzo, 2001; Standen, Brown,
& Cromby, 2001). The word "haptic" derives from the
Greek haptikos, "able to touch." In this paper we use
haptic to describe touch-mediated manual interactions
with real or virtual environments, such as exploration for
extraction of information about the environment or manipulation
for modifying the environment (Srinivasan
& Basdogan, 1997). Research on the implementation
of haptic technologies within virtual environments (VEs) (Basdogan &
Srinivasan, 2002; Biggs & Srinivasan, 2002; Salisbury
& Srinivasan, 1997) and their potential for supporting
rehabilitation training has been reported for sighted
people (Giess, Evers, & Meinzer, 1998; Gorman, Lieser,
Murray, Haluck, & Krummel, 1998). Technological advances,
particularly in haptic interface technology, enable
blind individuals to expand their spatial knowledge
by using artificially made reality maps through haptic
and audio feedback (Parente & Bishop, 2003) and construction
of cognitive maps (Lahav & Mioduser, 2004;
Semwal & Evans-Kamp, 2000; Simonnet, Guinard, &
Tisseau, 2006).
The study that is described in this paper is part of a larger
research effort that included design and development of
the BlindAid system and a usability study on the system
components. In the present study we report on the contribution
of the BlindAid system to users who are blind
in exploring virtual maps in order to familiarize themselves
with new real spaces. The main research questions
of this study were:
1. Which exploration processes do people who are
blind use for exploring an unknown space in a VE?
2. Which structural components and relationships
are included in the cognitive map constructed by
people who are blind who explored the unknown
space in the VE?
3. How does the cognitive map help people who are blind
who explored the VE perform the orientation tasks
in the real space?
In the next section, we describe the BlindAid system
that was developed especially for this research. We then
present the general research method and the
research results, and conclude with a discussion of the
merits of using the BlindAid system.
The BlindAid System
The BlindAid system, shown in Figure 1, was designed
through active collaboration among engineers and learning
scientists at the Massachusetts Institute of Technology
(MIT) Touch Lab, an expert on three-dimensional (3D)
audio in VEs, and an O&M instructor from the Carroll
Center for the Blind in Newton, Massachusetts. The
system provides virtual maps for people who are blind
and consists of application software running on a personal
computer equipped with a haptic device and stereo
headphones. The haptic device, a Desktop Phantom
(SensAble Technologies), allows users who are blind to
interact manually with the VE. It has two primary functions:
(1) it controls the position of the user avatar within
the VE; and (2) it provides haptic feedback and cues
about the space, similar to those generated by the tip of a long
cane (e.g., stiffness and texture), from the tip of the
Phantom. For example, an indoor real space includes
different ground textures (tile floor, marble floor, rubber
floor, wood floor, or other), and each floor gives a different
degree of stiffness and texture feedback (hard, bouncy,
smooth, or rough). The BlindAid VEs simulate these
different ground textures. When users interact virtually
with a rubber floor, for example, the tip of the Phantom
produces a sense of bounce and bumpiness. Interacting
with a tile or marble floor generates a different sensation.
The headphones (Sennheiser HD580) present sounds to
the users as if they were standing in the VE.
The virtual workspace is a rectangular box that corresponds
to the usable physical workspace of the Phantom,
and the user avatar is always contained within the workspace.
There are two methods for moving the virtual
workspace within the VE in order to explore beyond the
confines of the workspace. The first method involves
pressing one of the arrow keys; each arrow key press
shifts the workspace half of its width in the given direction.
The second method involves only the Phantom:
the user presses a button on the stylus, causing the user
avatar position to be fixed in the VE. Then, similar to the
way in which one repositions a computer mouse upon
reaching the edge of the mouse pad, the user, while
holding the stylus button, advances the virtual workspace
in the opposite direction.
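The two repositioning methods can be summarized in a short sketch. All class, method, and parameter names below are our own illustration of the mechanics described above, not code from the BlindAid software:

```python
# Illustrative sketch of the two workspace-repositioning methods.
# All names are hypothetical; the actual BlindAid implementation
# is not published in this paper.

class VirtualWorkspace:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y                 # workspace origin in the VE
        self.width, self.height = width, height

    def shift_by_arrow_key(self, direction):
        """Method 1: each arrow-key press shifts the workspace
        half of its width (or height) in the given direction."""
        dx = {"left": -0.5, "right": 0.5}.get(direction, 0.0) * self.width
        dy = {"down": -0.5, "up": 0.5}.get(direction, 0.0) * self.height
        self.x += dx
        self.y += dy

    def drag_with_stylus(self, stylus_dx, stylus_dy):
        """Method 2: while the stylus button is held, the avatar
        stays fixed in the VE, so moving the stylus advances the
        workspace in the opposite direction (like repositioning a
        mouse at the edge of its pad)."""
        self.x -= stylus_dx
        self.y -= stylus_dy
```

The second method works as a "clutch": decoupling the stylus from the avatar lets the user reposition the reachable region without moving the avatar through the space.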
Six action commands on the computer's numeric keypad
permit the user to control other aspects of the system
while interacting with the VE. The commands include
restart, pause, start, install and recall landmark, additional
audio information, and zoom in and zoom out.
For example, zoom in and zoom out allow the user to
change the horizontal scale of the virtual workspace
within the VE to display more or less detail. In addition,
every object within the VE is assigned a maximum zoom-out
level. This allows the developer to control the level
of detail that is accessible to the user at the different
zoom levels. For example, one of the zoom-in levels
allows the user to explore the environment's structure
without the objects in it. Note that the haptic and audio
feedback do not change with the zoom level and are
consistent for all the VEs in our tests.
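The per-object zoom rule can be illustrated with a hypothetical sketch that filters objects by their maximum zoom-out level. The object names and numeric levels are assumptions for illustration, not values taken from the BlindAid software:

```python
# Hypothetical sketch of the per-object zoom rule described above:
# each object carries a maximum zoom-out level, and it remains
# accessible only while the current level does not exceed it.

def visible_objects(objects, zoom_level):
    """Return the objects accessible at the current zoom level.

    `objects` maps an object name to its maximum zoom-out level
    (higher = still rendered when zoomed farther out).
    """
    return [name for name, max_out in objects.items()
            if zoom_level <= max_out]

# Example: structure persists at far zoom-out, small objects do not.
objects = {"wall": 3, "conference table": 1, "trash can": 0}
# At level 0 (fully zoomed in) every object is rendered;
# at level 2 only the building structure remains.
```

This gives the developer one knob per object, so a coarse overview level can expose only the environment's structure while finer levels add furniture and small obstacles.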
In addition to the user mode described above, the system
also has edit and evaluation modes. In this stage
of research and development, a semi-automated editor
can read an electronic blueprint file (dxf) to import the
walls of a building into a new VE, and by manual editing
we can add other types of objects and define the
audio and haptic feedback. In the future, this manual
editing process will be replaced by an automatic editor.
Furthermore, all the BlindAid VEs will be available and
accessible via the Internet, much like the visual maps
that are accessible to sighted people. The evaluation
mode allows researchers to record and review the avatar's
position and orientation within the VE during an experiment
session. The computer records the user behavior in
a text file, and these data can be viewed directly or replayed
by the system like a video recording. As shown in
Figure 2, the central display demonstrates the user's path
(the black dots interconnected by lines). The big black
dot represents the user's avatar, and gray lines represent
private and public doors. The gray area represents a rubber
floor texture, and the three white rectangular areas represent
a marble floor with a smooth texture. The nine
rectangular shapes represent objects in the environment
(such as a public phone, water bubbler, sculpture, or soda
machine). The upper right keyboard shows the user's execution
of command actions. The researchers also can
listen to the sounds during playback. Further technical
details about the system are presented in an earlier paper
(Schloerb, Lahav, Desloge, & Srinivasan, 2010).
Method
Participants
The research included four participants who were selected
on the basis of seven criteria: totally blind (no visual
ability), at least 21 years old, not multihandicapped,
trained in O&M, English speaking, having onset of
blindness at least two years prior to the experimental period,
and comfortable with the use of computers. One
participant was congenitally blind and three were adventitiously
blind; one was female and three were male; ages
ranged from 41 to 53 years; one was a guide dog user,
and three were long cane users. To evaluate the participants'
initial O&M skills, each was asked to complete
a questionnaire on O&M issues. The results showed no
differences in initial O&M ability among participants.
Each participant reported previous experience with
computer applications but no previous experience with
VEs or the Phantom device other than their experience
with it during our previous research (the usability study).
All the participants took part in our previous research,
which lasted two to three meetings (three hours total).
During that study, the participants interacted with 13
VEs and evaluated varied audio feedback (mono, stereo,
and stereo with rotation), haptic parameters (stiffness,
dampness, static/dynamic friction, smooth and nonsmooth
texture), navigation tools, and command actions.
This usability study included VEs that were not associated
with real spaces. All the participants arrived at the
experiment room with the help of the researcher, who
met them at a public transportation station or on the
street when they were dropped off by a taxi. All four
participants took part in all the experiments.
Variables
Three groups of dependent variables were defined: the
process of the exploration task, construction of a cognitive
map, and performance of the orientation tasks.
These variables were defined in our previous research
(Lahav, 2003; Lahav & Mioduser, 2008a).
Figure 1
The BlindAid system.

Six variables were related to the exploration process:
1. Total duration was the total time spent accomplishing
the task.
2. Spatial strategies were alternative strategies used
by the participants in their exploration. These included
perimeter strategy (walking along a room's
walls), grid strategy (exploring a room's interior
by scanning the room), object-to-object strategy
(walking from one object to another), exploring
object area strategy (walking around an object and
exploring the space around it), and random strategy
(walking without pattern).
3. Systematic exploration was exploring the new environment
in a planned, methodical pattern in order
to acquire spatial information. This included
systematic (exploring the space in a planned, methodical
pattern), systematic most of the time (using
a systematic pattern most of the time during
the exploration process), restless but systematic
(wandering around a space in a systematic pattern),
systematic some of the time (using a systematic
pattern a few times during the exploration
process), and unsystematic (random wandering
around a space).
4. Stops were the number of pauses made by the participant
during the exploration. Two values were
defined: short pauses (4–10 s), introduced for the user's
technical purposes (e.g., changing the grasp position
on the Phantom), and long pauses (more than
10 s), apparently used for cognitive processing (e.g.,
memorization or planning).
5. Command action was the use of the computer's
numeric keypad to control spatial information aids
while interacting with the VE.
6. Frequency of command action usage.
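The stop classification in variable 4 can be expressed as a small sketch. The input, a list of pause durations in seconds, is our simplification of the recorded motion log:

```python
# Sketch of the stop classification defined above: pauses of 4-10 s
# count as short (technical purposes), and pauses over 10 s count
# as long (apparent cognitive processing). Pauses under 4 s are not
# counted as stops.

def classify_stops(pause_durations):
    """Count short and long stops from a list of pause durations (s)."""
    short_stops = sum(1 for d in pause_durations if 4 <= d <= 10)
    long_stops = sum(1 for d in pause_durations if d > 10)
    return {"short": short_stops, "long": long_stops}
```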
The construction of a cognitive map included
eight variables that were related to cognitive map components
and to the cognitive map construction process. Four
variables were related to the cognitive map components:
(1) structural components, (2) structural component
location, (3) objects within the room, and (4) object location.
Four other variables were related to the cognitive
map construction process:
1. Spatial strategy was used for describing the space:
perimeter, object-to-object, items list, or starting
point perspective descriptions.
2. Spatial model was used for describing the space:
route model, in which the environment is described
in terms of a series of displacements in space; map
model, a holistic overall description of the space;
and integrated representation of route and map
models.
3. Chronology of the descriptive process.
4. Spatial relationships were verbal descriptions that
related an object to a structure or to another object
by distances or directions.
The third group of variables examined the participants'
performance on the orientation tasks in the real space.
There were four of these variables:
1. Successful completion was the participant's ability
to find the task's target in the real space: good
navigation, arrived at the target's zone, arrived at
the target's zone with verbal assistance, or failed.
2. Type of path was the path that the participant
chose to take: direct, direct with limited walking
around, indirect, or wandering around.
3. Spatial strategies were alternative strategies used
by the participants in their navigation: perimeter,
grid, object-to-object, exploring object area,
or other strategies.
4. Total duration was the total time spent accomplishing
the task.
Instrumentation
The research included eight tools: three for the implementation
of the studies and five for the collection of the data.
Simulated environments. Four VEs were designed; they
were based on actual spaces on the MIT campus. They
ranged from a simple space to a complex space. The complexity
of an environment was defined by its size, its shape,
and the number of its components
(Martinsen, Tellevik, Elmerskog, & Storliløkken, 2007).
Figure 2
Evaluation display.
The first VE (VE 1) included a lobby area in a square shape
(202 square meters). The second (VE 2) was a new space
in an L shape with some structural similarity to VE 1 (291
square meters). The third environment (VE 3) was a new
space with a lobby area that was similar to VE 1, a long
corridor with several private doors, a conference room,
and more than 25 objects (352 square meters). The last
VE (VE 4) was a complex area that included the VE 3 components
and a new space with a few long corridors, a
second conference room, a second lobby, and more than
50 new objects (663 square meters). (See Figure 3.) We
chose this simple-to-complex approach to allow
users to learn gradually how to explore the VEs by using
the BlindAid system and to examine their behavior in
spaces of different complexity. Because of safety and O&M
issues, these simulated environments were chosen in
conjunction with an O&M rehabilitation specialist.
Exploration task. Each participant was asked to explore
each VE freely, within a time limit. An O&M
rehabilitation specialist defined the exploration time
limit for each VE. Before the exploration task, the
researchers informed the participants that they would be
asked to describe the space and its components at the
end of their exploration.
Orientation task. Each participant was asked to perform
five orientation tasks in the real target space: a target-object
task, a return to the task's starting point, a perspective-taking
task, another return to the task's starting point, and a
point-of-location task from the same starting point that
was used in the VE. Each simulated environment had
five orientation tasks that were unique to it. These orientation
tasks were designed with an O&M rehabilitation
specialist.
In addition to the preceding three implementation tools,
a set of four tools was developed for the collection of
quantitative and qualitative data:
1. O&M questionnaire. The aim of this questionnaire
was to evaluate the participants' O&M ability and
to find differences and similarities in their O&M
experience and abilities. The questionnaire had 50
questions about the participant's O&M ability indoors
and outdoors as well as in known and unknown
environments. Some of the questions were
adapted from O&M rehabilitation evaluation instruments
for use in this research (Dodson-Burk
& Hill, 1989; Lahav, 2003; Sonn, Tornquist, &
Svensson, 1999).
2. Observations. The participants were video recorded
during their exploration and orientation tasks.
These video recordings were transcribed.
3. Open interview. After completing the exploration
task of each VE, the participants were asked to describe
the space verbally. This open interview was
video recorded and transcribed.
4. Computer log. The computer data enabled the researchers
to track the user's exploration activities
in the VE in two ways: as a text file and as a video
recording file. The integration of both sets of data
supplied information about the user's exploration
strategies, spatial problem-solving abilities, the distances
traversed, and path duration (see Figure 2).
Procedure
This study was part of a larger research effort. The first
study included usability experiments. During that first
study, the participants learned how to operate the BlindAid
system and to gather spatial information with it.
All participants worked and were observed individually.
In the first session, two consent forms were obtained,
authorizing participation in this study and photography
or videotaping of the experimental setup, followed
by an O&M questionnaire. All participants then had
four sessions (Sessions 2 to 5); each session focused on
one of the simulated environments, starting with VE 1
and finishing with VE 4. In the second session, before exploring
VE 1, each participant learned how to move
the virtual workspace by using the arrow keys. After a
short learning period (2–3 min) the participants started
to explore VE 1. In the third session, before exploring VE 2,
the participants learned about the meaning of the zoom-in
and zoom-out layers and how to operate them. In the
fourth session, before exploring VE 3, two randomly chosen
participants learned how to move the virtual workspace
by using the Phantom. In the fifth session, they
explored VE 4, and each participant continued to use his
or her previous method to move the virtual workspace.
Every session included a VE exploration task, followed
by a verbal description, followed by arrival for the first
time at the target real space to perform five orientation
tasks. Each session lasted about 1.5–2 hr and was video
recorded. Processing and analysis of the collected data
followed these sessions. During this stage, the researcher
and the O&M rehabilitation specialist evaluated the exploration
and the orientation task performance of each VE
by each participant separately, using the video recording,
the transcription, and the computer log data. The video
recordings were stored on external HD media locked in
a cabinet in the research lab.
Data Analysis
To evaluate the participants' O&M exploration and performance
we used the coding scheme instruments from our previous research
(Lahav, 2003; Lahav & Mioduser, 2008b).
These coding scheme instruments were defined by three
O&M rehabilitation specialists who have been working
in a rehabilitation center for people who are blind for
more than 15 years. They took part in the design and
construction of each coding scheme based on the observation
of video data and computer logs; the identification
and classification of exploration strategies; the
consolidation of evaluation instruments based on the
previous analyses and on the O&M literature (e.g.,
Jacobson, 1993; Jacobson, Kitchin, Garling, Golledge,
& Blades, 1998; Hill et al., 1993); and the implementation
of the instruments for analyzing the participants'
O&M exploration, performance, and acquaintance with
the new space. Before analyzing the data of this research, an
O&M rehabilitation specialist evaluated a few observations
(video, including the transcription and computer
logs) using the previous research coding scheme to ensure
that all the variables were included and defined. In
addition, the O&M rehabilitation specialist took part in
the evaluation process, specifically in the identification
and classification of exploration tasks and in the participants'
orientation task performance in the real space.
Results
Research Question 1: Which exploration processes do people
who are blind use for exploring an unknown space in a VE?
Six aspects are of interest regarding the exploration
processes used by the four participants: total duration,
spatial strategies, systematic exploration, number and
kinds of pauses made while examining the new space,
Figure 3
Fourth virtual environment.
Note: The environment is a T-shaped corridor, with each side of the long corridor ending in a rectangle that is used as a lobby.
Each lobby contains three elevators, a set of bathrooms (women's and men's), a water bubbler, and a set of fire stairs. Four private
doors are located in the north wall of the long corridor, and there are another five private doors and one public door in the opposite
wall. The public door is the entrance to the conference room, which is a rectangle with a window wall opposite the entrance
door. This conference room includes a long table with 14 chairs around it, a podium, two trash cans, a table, a magazine rack,
and a coat rack. Two private doors are located in the right wall of the corridor, and one private door and two public doors of the
second conference room are located in the left wall. The second conference room is also a rectangle. It includes three tables, two
trash cans, a podium, a magazine rack, a coat rack, and four rows of eight chairs each (for a total of 32 chairs).
command actions, and frequency of command action
usage. The results show the unique exploration of
each participant and his or her diversity of spatial behavior
during the exploration tasks. For example, Participant
1 explored the VE differently than did Participant 4,
and eventually those differences influenced the cognitive
map and behavior in the real tasks. These differences
are discussed later in this article. The results of the
four exploration tasks (VE 1–VE 4) for each participant are
shown in Table 1.
The average total exploration time increased in relation
to the complexity of the explored VE (shape, size,
and number of components). For example, in VE 1 the
average was 28:51 min, and in VE 4 it was 60:33 min.
In addition, differences were found among the participants.
For example, in most of the tasks the time Participant 1
or Participant 3 took to explore the VE was
twice that of Participant 4. Across all exploration
tasks, the participants used the perimeter strategy as a first
spatial strategy to explore the VE in 81.25% of cases,
the object-to-object strategy as a second strategy
(73%), and the grid strategy as a third strategy (75%).
Participants 3 and 4 excelled in their systematic exploration
during all the VE exploration tasks. For example,
during his perimeter strategy exploration, Participant 3
started to name the objects aloud and to relate them to
each other: "Between Table 4 and Locker
1 there is a doorway; on the right of the doorway…;
there is a magazine rack on the left [of] Locker 2…."
During his grid strategy exploration, he tried to discover
new areas: "Let's see what is across from there, second
table, two…where Table 1 is." Participant 2 improved
his systematic exploration during the exploration tasks
and became more systematic during the exploration of
VE 4. At the opposite extreme, Participant 1 had difficulty
keeping to a systematic exploration method throughout the
entire range of exploration tasks. During the analysis stage
of the exploration process, we noticed that Participant
4 mostly explored the right side of his path and covered
more hallways and fewer details (such as obstacles in his
way). One reason might be that he is a guide dog
user and applied his real-space exploration strategies
in the VE (in the real space only his right hand is free
to explore) for use later in the orientation tasks in the
real space.
As part of the exploration process we examined the
number and kinds of pauses participants made while
examining the new space. We divided the pauses into
two groups: short pauses (4–10 s), introduced for technical
purposes, and long pauses (more than 10 s), used for
cognitive processing. The results show that the use of
short and long pauses increased in relation to the type
of VE that was explored. In VE 1, the participants used
an average of 18 short pauses and eight long pauses.
The number of pauses increased in VE 4, where the participants
used an average of 41 short pauses and 12 long
pauses. Long pauses of 1:30 to 3 min each were observed
in VE 3 and VE 4. Participants 1 and 3 used many pauses
(short and long), and they used them in different ways.
Participant 1 used the pauses for resting and for memorizing
the objects. Participant 3 used the pauses to plan his
next path based on where he wanted to go and what he
wanted to discover or to check.
During the exploration of the VEs the participants used
command actions that allowed them to get more spatial
information and to navigate in the VE. The most used
tool was Additional Audio Information. In VE 1, the fre-
quency of accessing this tool averaged 28.5, and in VE 4
the average was 90.5. We also examined the length of
time that this tool was used. We found that 13% to 24%
of the participants' total exploration duration was spent
with the Additional Audio Information tool. In the second
VE, three of the four participants used this tool for 26%
to 34% of the total duration. Both the number of accesses
and the time of use of this tool increased in relation to the type of VE.
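A duration share like the 13% figure above is the tool's use time divided by the total exploration time (here using Participant 1's VE 1 entries from Table 1: 06:24 out of 49:13). A minimal sketch, with a hypothetical helper name:

```python
def to_seconds(mmss):
    """Convert an mm:ss string (e.g., "06:24") to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Participant 1, VE 1 (Table 1): 06:24 spent in the Additional Audio
# Information tool out of a total exploration time of 49:13.
share = to_seconds("06:24") / to_seconds("49:13")
print(round(100 * share))  # 13 (percent)
```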
During the exploration tasks, the participants could ask
the system to send them to the starting point auto-
matically. The use of this tool increased as the VEs became
larger and more complex; for example, it averaged 0.5
times in VE 1 and 4.5 times in VE 4. Similar increases occurred with
the Pause tool. During the first two VEs no participant
used the Pause tool, but in VE 4 three of the participants
used it during their exploration. A fourth tool allowed
the participants to Install and Recall Landmarks (PL)
or to Recall Researcher Landmarks (RL) that were in-
stalled in advance. These tools were used mostly in VE 1,
and their use decreased in relation to the type of VE.
All the participants used the Zoom In and Zoom Out
command actions. The number of uses of these tools was
consistent in all the VEs, although the participants used
the Zoom Out command for a longer period of time.
Each participant used the Zoom In or Zoom Out com-
mands at different points during their exploration. For
Table 1
Participants' Performance in the Four VE Exploration Tasks

n | VE | Time | Spatial Strategy | Systematic Exploration | Short Stops | Long Stops | Move to Start | Additional Audio / Time | Zoom In / Time | Zoom Out / Time | Install Landmark, PL, RL
1 | 1 | 49:13 | 1-Perimeter; 2-EOS; 3-Grid | Systematic restless | 18 | 26 | 2 | 49 / 06:24 | | | 6, 2, 4
1 | 2 | 47:34 | 1-Obj-obj; 2-Perimeter; 3-Random | Some of the time | 32 | 11 | 4 | 62 / 09:02 | 1 / 00:07 | 1 / 14:00 | 10, 12, 1
1 | 3 | 64:09 | 1-Perimeter; 2-Obj-obj; 3-Grid | Systematic restless | 32 | 11 | 1 | 98 / 11:38 | | | 1, 1, 3
1 | 4 | 1:11:22 | 1-Obj-obj; 2-Perimeter; 3-Random | Some of the time | 62 | 16 | 4 | 66 / 09:37 | 3 / 02:50 | 2 / 07:00 | 7
2 | 1 | 20:19 | 1-Perimeter | Most of the time | 2 | 1 | 3 | 07:10 | | | 5, 1, 4, 4
2 | 2 | 30:09 | 1-EOS; 2-Perimeter; 3-Obj-obj | Systematic restless | 14 | 5 | 1 | 14 / 08:49 | 1 / 17:22 | 1 / 04:07 |
2 | 3 | 37:44 | 1-Perimeter; 2-Obj-obj; 3-Grid | Systematic restless | 17 | 5 | 7 | 29 / 01:47 | | |
2 | 4 | 35:39 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 4 | 2 | 6 | 94 / 06:13 | 2 / 05:45 | 3 / 12:00 |
3 | 1 | 32:07 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 44 | 4 | | 38 / 06:12 | | | 3, 2, 6
3 | 2 | 55:28 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 65 | 4 | 1 | 79 / 14:37 | 1 / 06:07 | | 1, 1
3 | 3 | 35:09 | 1-Perimeter; 2-Obj-obj | Excellent | 27 | 5 | 1 | 34 / 03:35 | 2 / 06:35 | 1 / 01:15 | 15
3 | 4 | 1:32:29 | 1-Perimeter; 2-Obj-obj | Most of the time | 95 | 25 | 7 | 86 / 36:02 | | |
4 | 1 | 13:45 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 6 | 3 | | 14 / 02:47 | | | 1, 6, 1
4 | 2 | 23:02 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 11 | 1 | | 45 / 07:56 | 1 / 01:05 | 2 / 07:00 | 1
4 | 3 | 27:07 | 1-Perimeter; 2-Obj-obj; 3-Grid | Excellent | 7 | 4 | 2 | 43 / 04:15 | 1 / 00:05 | 4 / 03:56 | 1
4 | 4 | 42:42 | 1-Perimeter; 2-Obj-obj | Excellent | 5 | 5 | 1 | 116 / 07:28 | | |
example, Participant 1 expressed his preference by say-
ing, "My preferences will be to try it normally first and
then if things start to be too cluttered I will take the
layers to see." Participant 3 decided to start his explora-
tion of VE 3 without objects (Zoom In). He also used the
Additional Audio Information tool. After three minutes
he used the Zoom Out and explored the MIT campus
for another 2:37 min; he then returned to the starting
point and explored the environment without objects
(Zoom In) for another 4:20 min. Afterward, he contin-
ued to explore the environment with all the objects in it
until he finished his task.
Participants 2 and 4 learned how to use the Phantom
and used it to explore VEs 3 and 4. All the participants
used the arrow keys or the Phantom when it was needed
to extend their map.
Research Question 2: Which structural components and re-
lationships are included in the cognitive map constructed
by people who are blind who explored the unknown space
in the VE?
After exploring each VE, the participants described the
environment verbally. These results expressed the par-
ticipants' ability to present verbally the cognitive map
that they built as a result of their exploration. The results
presented in Table 2 show that, in all four VEs, the
structure components were described better than the ob-
jects on all three variables (component name, location,
and location relative to other components). For example, in VE 4 an
average of 60% of structure components were named, compared to
26% of objects named. Structure location averaged
42% compared to 15%, and structure location relative
to other components averaged 38% compared to 19%. The
amount of spatial information about the environment
on all variables (structure and objects) decreased in rela-
tion to the VE type. For example, in VE 1 each participant
mentioned all the structure components (100%) in his or
her verbal description, whereas in VE 4 the average was 60%.
For describing the spaces, 63% of the participants used
the perimeter strategy and only 31% used the object-to-
object strategy. For the spatial model, 69% of the par-
ticipants used the route model. On both variables (spatial
strategy and spatial model), 75% of the participants were
consistent across all four VEs. Like the differences that were
found among the participants' exploration behaviors,
differences also emerged in the ability to describe the
cognitive map. For example, Participant 1 in VE 1 rotated
her environment's components (structure and objects)
by 180 degrees; in VE 3 she placed the environment's ob-
jects in the opposite direction of their real location (e.g.,
objects on the right were placed on the left). Participant 3
showed a high ability to describe the four environments,
Participant 4 was second, and Participant 1 showed a
lower ability to describe them.
Research Question 3: How does the cognitive map help the
blind person who explored the VE to perform the orienta-
tion tasks in the real space?
After the construction of the cognitive map, the par-
ticipants were asked to perform five orientation tasks in
the real space. It should be recalled that the participants
entered the real space for the first time to perform these
tasks, and they were not given the option to explore the
rooms first. Four variables were examined: successful
completion of the tasks, type of path, spatial strategy, and
time spent on task (see Table 3). Most of the participants
performed the target-object tasks and perspective-taking
tasks successfully by choosing a direct path to the target.
Moreover, the reverse tasks were performed success-
fully in a shorter time and along a more direct path than the
original tasks. Most of the participants used the object-to-
object strategy to perform their tasks in the real space.
Three participants performed their orientation tasks by
using their echolocation ability. They transferred the
spatial information that was collected via the VE by hap-
tic and audio feedback and applied it in the real space as
auditory travelers. For example, during the exploration
in VE 3, Participant 3 arrived at a recess area that was
located in a corridor in front of the Research Laboratory
of Electronics (RLE) main door. In the target-object task
he was asked to find the bulletin board (this object was
located to the right of the main door). During his
first time in the real space he walked into the corridor,
got the echolocation information about the recess area
to his right (without checking it with his long cane),
turned left to the main door, and went from there
to the bulletin board. In the point-on-the-location tasks
the participants succeeded in performing an average of
65%–88% in the first three spaces. These tasks included
only indoor objects. The fourth space included outdoor
objects (buildings and streets), and only 31% succeeded
in performing this task.
As in the previous results, differences were found among
the participants' performances. Participant 1 succeeded
in performing only 5 out of 16 tasks. He used the pe-
rimeter strategy and an indirect path to find his way
in the real space. In the point-on-the-location task he
succeeded in only 36% of the cases. This participant had
difficulty adjusting his exploration skills and had diffi-
culty transferring and applying them in the real space.
The other three participants used mainly the object-to-
object strategy and a direct path to find their way to the
target object. Participants 1 and 2 walked mostly with
their long cane in the middle of the corridor, using their
echolocation traveler techniques. They heard the echo
from the walls and the recess area and did not bump
their long cane against the walls. They transferred their
spatial information from the VE, which was based most-
ly on tactile traveler techniques and audio landmarks
(e.g., the recess area).
At the starting point of each perspective-taking task,
Participant 4 chose to discover and search for known
landmarks before he started the task. This participant
used a guide dog in the real space tasks and, as a result, he
walked very quickly in the real space, passed landmarks
he was aware of, and needed to recalculate his distance
and time. After his first task performance in the real
space he said, "When I arrived I had the general under-
standing what I will find there, I had three elevators….
What I don't get is the sense of size. I had in my mind
that this space will be much bigger than what it was."
Discussion
The research reported here was an effort to assess the
contribution of VEs as a secondary orientation aid that
would allow people who are blind to learn about un-
known spaces in advance and to apply this spatial
knowledge in the real space. The results helped us elu-
cidate three issues concerning the contribution of the
Table 2
The Average Performance of the Cognitive Map Construction

VE | Structure: Components | Structure: Location | Structure: Location Related | Objects: Components | Objects: Location | Objects: Location Related | Spatial Strategy | Spatial Model
1 | 100% | 75% | 63% | 53% | 21% | 21% | Perimeter (n=2); Obj-Obj (n=2) | Route (n=2); Map (n=2)
2 | 57% | 50% | 43% | 53% | 40% | 30% | Perimeter (n=3); Item list (n=1) | Route (n=3)
3 | 80% | 57% | 52% | 50% | 22% | 19% | Perimeter (n=3); Obj-Obj (n=1) | Route (n=3); Map (n=1)
4 | 60% | 42% | 38% | 26% | 15% | 19% | Perimeter (n=2); Obj-Obj (n=2) | Route (n=2); Map (n=2)
BlindAid system in the exploration and learning process
of unknown spaces by people who are blind.
Spatial Behavior
As found in our previous research (Lahav & Mioduser,
2004), the exploration in the VEs gave participants a
stimulating, comprehensive, and thorough acquaintance
with the target space. The high degree of compatibility
Table 3
The Average Performance of the Orientation Tasks in the Real Space

Real Space | Task | Successful | Direct Path | Strategy | Time
1 | Target-object | 50% | 50% | Perimeter (n=2); Obj-Obj (n=1); Other (n=1) | 1:07
1 | Target-object reverse | 100% | 100% | Perimeter (n=2); Obj-Obj (n=1); Other (n=1) | 0:17
1 | Perspective-taking | 75% | 75% | Perimeter (n=1); Obj-Obj (n=3) | 0:41
1 | Perspective-taking reverse | 75% | 75% | Perimeter (n=1); Obj-Obj (n=3) | 0:36
1 | Point-on-the-location | 65% | | |
2 | Target-object | 75% | 75% | Perimeter (n=1); Obj-Obj (n=2) | 0:43
2 | Target-object reverse | 75% | 75% | Obj-Obj (n=3) | 0:28
2 | Perspective-taking | 75% | 75% | Obj-Obj (n=3) | 1:27
2 | Perspective-taking reverse | 75% | 75% | Obj-Obj (n=3) | 0:52
2 | Point-on-the-location | 88% | | |
3 | Target-object | 100% | 100% | Perimeter (n=1); Obj-Obj (n=3) | 1:11
3 | Target-object reverse | 100% | 100% | Perimeter (n=1); Obj-Obj (n=3) | 0:31
3 | Perspective-taking | 75% | 50% | Perimeter (n=1); Obj-Obj (n=3) | 1:58
3 | Perspective-taking reverse | 100% | 100% | Perimeter (n=1); Obj-Obj (n=3) | 0:35
3 | Point-on-the-location | 69% | | |
4 | Target-object | 50% | 50% | Perimeter (n=1); Obj-Obj (n=2); Other (n=1) | 2:35
4 | Target-object reverse | 75% | 100% | Obj-Obj (n=4) | 0:44
4 | Perspective-taking | 50% | 75% | Obj-Obj (n=4) | 4:13
4 | Perspective-taking reverse | 75% | 100% | Obj-Obj (n=4) | 1:11
4 | Point-on-the-location | 31% | | |
Mean | Target-object | 69% | 69% | |
Mean | Target-object reverse | 88% | 94% | |
Mean | Perspective-taking | 69% | 69% | |
Mean | Perspective-taking reverse | 81% | 88% | |
Mean | Point-on-the-location | 63% | | |
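The Mean rows in Table 3 are arithmetic means of the four per-space values; for example, recomputing the target-object and point-on-the-location means from the per-space figures reported in the table:

```python
from statistics import mean

# Successful-completion rates (%) in the four real spaces (Table 3).
target_object = [50, 75, 100, 50]      # target-object tasks
point_on_location = [65, 88, 69, 31]   # point-on-the-location tasks

print(round(mean(target_object)))      # 69
print(round(mean(point_on_location)))  # 63
```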
between the VE components and the real space on the one
hand, and the exploring methods supported by the VE
on the other, contributed to the users' exploration ability.
These features also enabled participants to implement
exploration patterns in a systematic method that they
commonly used in real spaces, but in a qualitatively dif-
ferent manner. During the orientation tasks in the real
space the participants were able to recall their cognitive
map through active, problem-solving tasks that might in-
crease their recall ability. They were able to manipulate
their spatial knowledge very well, especially in the re-
verse tasks and the perspective-taking tasks. Most of the
participants were able to transfer the tactile information
that they collected as tactile travelers in the VEs to audi-
tory landmarks and were able to use echolocation land-
marks during their walk in the real space. These abilities
allowed the participants to recall their spatial informa-
tion as needed in a flexible way.
The VEs offer new spatial tools that do not exist in real
space for people who are blind. These tools enable users
to control the level of spatial information. During the
exploration tasks the participants were able to use zoom
in and zoom out levels and to develop new navigational
skills. These capabilities can expand spatial information
about the surrounding area and can increase people's
spatial awareness. Nevertheless, more learning sessions
will be needed in order for them to conceptually adapt the meaning
behind the zoom in and zoom out method.
One of the participants suggested that he would like to
have a vertical Zoom In tool. By using a vertical Zoom
In tool he could explore the environment on multiple
levels, perhaps helping him to ascertain his location in
the building, the location of exits, or in which direction
the street is located.
Methodology
Exploring a complex environment (Martinsen et al.,
2007) raised some methodological issues that need to be
considered in future research. In this research, as in our
previous research, the exploration task consisted of free
exploration. The idea behind this was to support the par-
ticipant's independent exploration in the VE. Exploring
complex environments might cause cognitive overload
for a participant, and we think that in future research the
exploration task needs to include free exploration for a
set period of time, with instructed exploration tasks af-
terward. The instructed exploration task needs to allow
the participant to use both the route and map spatial
models. The inclusion of an instructed exploration task
allows the participant to gather broad information about
the environment and to focus on the landmark locations
that will need to be reached later in the real space.
Exploring a complex environment revealed the need for
developing a new data-collecting tool that allows better
mirroring of the participant's cognitive map. In previ-
ous research we used verbal description and a physical
model (Lahav & Mioduser, 2005). As in other research
involving complex environments, the verbal
description is dull, and using the physical model is a long
process. Using raised-dot drawings or other embossed
drawing techniques involves three time-related parameters: a long
period of time to teach and to practice the technique; a
long period of time to create a tactile image; and, most
important, a long period of time to learn the concept that
stands behind the 3D image. These new data-collecting
tools will need to be explored and studied in future
research.
Future Research and Development of the
BlindAid System
Additional research and development efforts will trans-
form this promising technology into a useful learn-
ing and rehabilitation tool. The BlindAid system as an
O&M simulation-learning tool can support a variety
of target populations, including people who are newly
blind, people who are blind who need to improve their
O&M skills, and students in O&M teacher pro-
grams. This system allows the instructor or the user to
control the size and shape of the environment as well as
the level of density, and it further allows control of the
levels of spatial complexity. The learning simulation
could range from a schematic structure to a detailed
structure with objects within it.
The BlindAid system can assist O&M specialists in reha-
bilitation centers as a simulator with which their clients
can interact and be trained as part of their O&M reha-
bilitation program. The spatial behavior of Participant 1
in this research highlights the need for a mirroring sim-
ulation that will allow people who are blind to under-
stand their positive and negative spatial behaviors. Using
this simulation tool with O&M specialist interventions
in the training process might improve clients' orientation
skills and awareness. Nevertheless, the system can help
compensate for the shortage in rehabilitation funding by
increasing the number of training hours for each client.
This system also can be used as a training simulation for
students who are studying to become O&M teachers.
They can practice exploration methods as blindfolded
users. Furthermore, future development will support people
who are blind who have already received O&M train-
ing in downloading virtual maps via the Internet and
exploring real spaces before arrival. This could function
like services such as Google Maps or MapQuest, which
allow sighted people to explore new spaces in advance.
In addition to serving as a learning and training tool,
the BlindAid system can be used as a diagnostic tool. An
O&M specialist can predict a participant's spatial be-
havior in a real space by observing his or her exploration
behavior in the VE. This diagnostic tool can be applied
during or after an O&M rehabilitation process. It also
can be used as an indicator to confirm suitability for a
guide dog. The BlindAid haptic stylus simulates the way
the user leads the dog in the real space. A guide dog user
needs to apply high spatial ability to find secure paths in
a real space, and this diagnostic tool allows O&M spe-
cialists to track and observe how their clients think and
behave during exploration of a new space.
This study's results also have important implications for
the continuation of the research and for implementation
purposes. In regard to research, further studies should
examine how people who are newly blind construct spa-
tial cognitive maps of spaces using the VE during their
rehabilitation program; how they use these maps for
navigating in real environments; and, consequent-
ly, how the system contributes to their rehabilitation
process. Further studies should compare the BlindAid
system with other orientation tools such as GPS, tactile
maps, or models to understand how this system can
enhance and improve the orientation ability of people
who are blind compared with other technologies. At
another level, the development of more comprehensive
environment-editing tools for the VE will support the
creation of a variety of models of spaces (e.g., public
buildings, shopping areas), enabling pre- and postvisit
exploration and recall of unknown spaces by people who
are blind. These implementations also may serve the re-
search and practitioner communities as models for the
further development of technology-based tools for sup-
porting the learning processes and performance of people
with special needs.
References
Bach-y-Rita, P., Tyler, M. E., & Kaczmarek, K. A. (2003). Seeing
with the brain. International Journal of Human-Computer Inter-
action, 15(2), 285–295.
Basdogan, C., & Srinivasan, M. A. (2002). Haptic rendering in vir-
tual environments. In K. M. Stanney (Ed.), Virtual environment
handbook. Mahwah, NJ: Erlbaum.
Biggs, S. J., & Srinivasan, M. A. (2002). Haptic interfaces. In K.
M. Stanney (Ed.), Virtual environment handbook. Mahwah, NJ:
Erlbaum.
Blasch, B. B., Wiener, W. R., & Welsh, R. L. (1997). Foundations of
orientation and mobility. New York, NY: American Foundation
for the Blind.
Crandall, W., Bentzen, B. L., Myers, L., & Mitchell, P. (1995).
Transit accessibility improvement through talking signs remote in-
frared signage, a demonstration and evaluation. San Francisco,
CA: Smith-Kettlewell Eye Research Institute, Rehabilitation
Engineering Research Center.
Dodson-Burk, B., & Hill, E. W. (1989). Preschool orientation and
mobility screening. A publication of Division IX of the Associa-
tion for Education and Rehabilitation of the Blind and Visually
Impaired. New York, NY: American Foundation for the Blind.
Easton, R. D., & Bentzen, B. L. (1999). The effect of extended
acoustic training on spatial updating in adults who are congeni-
tally blind. Journal of Visual Impairment and Blindness, 93(7),
405–415.
Espinosa, M. A., & Ochaita, E. (1998). Using tactile maps to im-
prove the practical spatial knowledge of adults who are blind.
Journal of Visual Impairment and Blindness, 92(5), 338–345.
Farmer, L. W., & Smith, D. L. (1997). Adaptive technology. In B.
B. Blasch, W. R. Wiener, & R. L. Welsh (Eds.), Foundations of
orientation and mobility. New York, NY: American Foundation
for the Blind.
GDP Research. (2005). The Miniguide ultrasonic mobility aid. Re-
trieved from http://www.gdp-research.com.au/minig_1.htm
Giess, C., Evers, H., & Meinzer, H. P. (1998). Haptic volume render-
ing in different scenarios of surgical planning. Paper presented at
the Third Phantom Users Group Workshop, MIT, Cambridge,
MA.
Golledge, R. G., Klatzky, R. L., & Loomis, J. M. (1996). Cogni-
tive mapping and wayfinding by adults without vision. In J. Portugali (Ed.), The construction of cognitive maps (pp. 215–246).
Netherlands: Kluwer.
Gorman, P. J., Lieser, J. D., Murray, W. B., Haluck, R. S., & Krum-
mel, T. M. (1998). Assessment and validation of force feedback vir-
tual reality based surgical simulator. Paper presented at the Third
Phantom Users Group Workshop, MIT, Cambridge, MA.
Herman, J. F., Herman, T. G., & Chatman, S. P. (1983). Construct-
ing cognitive maps from partial information: A demonstration
study with congenitally blind subjects. Journal of Visual Impair-
ment and Blindness, 77(5), 195–198.
Journal of Special Education Technology
56 JSET 2011 Volume 26, Number 4
Hill, E., Rieser, J., Hill, M. M., Hill, M., Halpin, J., & Halpin,
R. (1993). How persons with visual impairments explore novel
spaces: Strategies of good and poor performers. Journal of Visual
Impairment and Blindness, 87(8), 295–301.
Jacobson, W. H. (1993). The art and science of teaching orientation
and mobility to persons with visual impairments. New York, NY:
American Foundation for the Blind.
Jacobson, R. D., Kitchin, R., Garling, T., Golledge, R., & Blades,
M. (1998). Learning a complex urban route without sight: Com-
paring naturalistic versus laboratory measures. Paper presented at
the International Conference of the Cognitive Science Society of
Ireland, University College, Dublin, Ireland.
Kish, D. (1997). When darkness lights the way: How the blind may
function as specialists in movement and navigation (Master’s the-
sis). California State University, Los Angeles.
Lahav, O. (2003). Blind persons’ cognitive mapping of unknown spaces
and acquisition of orientation skills, by using audio and force-feed-
back virtual environment. (Doctoral dissertation). Tel-Aviv Uni-
versity, Israel (Hebrew).
Lahav, O., & Mioduser, D. (2004). Exploration of unknown spaces
by people who are blind, using a multisensory virtual environ-
ment. Journal of Special Education Technology, 19(3), 15–24.
Lahav, O., & Mioduser, D. (2005). Blind persons’ acquisition of
spatial cognitive mapping and orientation skills supported by
virtual environment. International Journal on Disability and Hu-
man Development, 4(3), 231–237.
Lahav, O., & Mioduser, D. (2008a). Construction of cognitive maps
of unknown spaces using a multi-sensory virtual environment
for people who are blind. Computers in Human Behavior, 24,
1139–1155.
Lahav, O., & Mioduser, D. (2008b). Haptic-feedback support for
the cognitive mapping of unknown spaces by people who are
blind. International Journal of Human-Computer Studies, 66(1),
23–35.
Landau, S., Wiener, W., Naghshineh, K., & Giusti, E. (2005). Cre-
ating accessible science museums with user-activated environ-
mental audio beacons (Ping!). Assistive Technology, 17, 133–143.
Loomis, J. M., Golledge, R. G., Klatzky, R. L., & Marston, J. R.
(2007). Assisting waynding in visually impaired travelers. In
G. L. Allen (Ed.), Applied spatial cognition: From research to cognitive technology (pp. 179–202). Mahwah, NJ: Erlbaum.
Martinsen, H., Tellevik, J. M., Elmerskog, B., & Storliløkken, M.
(2007). Mental effort on mobility route learning. Journal of Vi-
sual Impairment and Blindness, 101(6), 327–350.
Parente, P., & Bishop, G. (2003). BATS: The Blind Audio Tactile
Mapping System. Savannah, GA: Association for Computing
Machinery Southeast Conference.
Rieser, J. J. (1989). Access to knowledge of spatial structure at novel
points of observation. Journal of Experimental Psychology: Learn-
ing, Memory, and Cognition, 15(6), 1157–1165.
Roentgen, U. R., Gelderblom, G. J., Soede, M., & de Witte, L. P.
(2008). Inventory of electronic mobility aids for persons with
visual impairments: A literature review. Journal of Visual Impair-
ment and Blindness, 102(11), 702–724.
Salisbury, J. K., & Srinivasan, M. A. (1997). Phantom-based haptic
interaction with virtual objects. IEEE Computer Graphics and
Applications, 17(5), 6–10.
Sánchez, J., Noriega, G., & Farías, C. (2008). Mental representa-
tion of navigation through sound-based virtual environments. Paper
presented at the 2008 AERA Annual Meeting, New York, NY.
Schloerb, D. W., Lahav, O., Desloge, J. G., & Srinivasan, M. A.
(2010). BlindAid: Virtual environment system for self-reliant trip
planning and orientation and mobility training. Paper presented
at IEEE Haptics Symposium, Waltham, MA.
Schultheis, M. T., & Rizzo, A. A. (2001). e application of virtual
reality technology for rehabilitation. Rehabilitation Psychology,
46(3), 296–311.
Semwal, S. K., & Evans-Kamp, D. L. (2000). Virtual environments
for visually impaired. Paper presented at the 2nd International
Conference on Virtual Worlds, Paris, France.
Simonnet, M., Guinard, J.-Y., & Tisseau, J. (2006). Preliminary
work for vocal and haptic navigation software for blind sailors.
International Journal of Disability and Human Development,
52(2), 61–67.
Sonn, U., Tornquist, K., & Svensson, E. (1999). The ADL taxon-
omy—From individual categorical data to ordinal categorical
data. Scandinavian Journal of Occupational Therapy, 6, 11–20.
Srinivasan, M. A., & Basdogan, C. (1997). Haptics in virtual envi-
ronments: Taxonomy, research status, and challenges. Computers
and Graphics, 21(4), 393–404.
Standen, P. J., Brown, D. J., & Cromby, J. J. (2001). The effective
use of virtual environments in the education and rehabilitation
of students with intellectual disabilities. British Journal of Educational Technology, 32(3), 289–299.
Takes Corporation. (2007). Owner’s manual: Palmsonar PS231-7.
Retrieved from http://www.palmsonar.com/231-7/prod.htm
Ungar, S., Blades, M., & Spencer, S. (1996). e construction of
cognitive maps by children with visual impairments. In J. Por-
tugali (Ed.), The construction of cognitive maps. Netherlands: Klu-
wer Academic.
Warren, D. H., & Strelow, E. R. (1985). Electronic spatial sensing for
the blind. Boston, MA: Martinus Nijhoff.
Author Notes
Orly Lahav is a researcher and lecturer in the Technology and
Learning Program, Department of Education in Mathematics,
Science and Technology, School of Education, at Tel-Aviv
University. Orly Lahav was a postdoctoral associate, David W.
Schloerb is a research scientist, Siddarth Kumar was a doctoral
student, and Mandayam A. Srinivasan is a senior research scien-
tist, all at the Laboratory for Human and Machine Haptics (The
Touch Lab), Research Laboratory of Electronics, Massachusetts
Institute of Technology.
Correspondence concerning this article should be addressed to
Orly Lahav, School of Education, Tel Aviv University, Tel Aviv,
Israel. Email: lahavo@post.tau.ac.il
This research was supported in part by a grant from The
National Institutes of Health–National Eye Institute (Grant
No. 5R21EY16601-2), and supported in part by The European
Commission, Marie Curie International Reintegration Grants
(Grant No. FP7-PEOPLE-2007-4-3-IRG). We acknowledge
the discussions with and the audio system development by Jay
Desloge, and the Carroll Center for the Blind, Newton, MA, for
the collaboration and the support during the BlindAid system
design and research. We thank the four anonymous participants
for their time, efforts, and ideas.
This manuscript was accepted under the previous editorship of
J. Emmett Gardner.