

Abstract

For most people who are blind, exploring an unknown environment can be unpleasant, uncomfortable, and unsafe. In recent years, the use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on the rise. This research is based on the hypothesis that the supply of appropriate perceptual and conceptual information through compensatory sensory channels may assist people who are blind with anticipatory exploration. In this research we developed and tested the BlindAid system, which allows the user to explore a virtual environment. The two main goals of the research were: (a) evaluation of different modalities (haptic and audio) and navigation tools, and (b) evaluation of spatial cognitive mapping employed by people who are blind. Our research included four participants who are totally blind. The preliminary findings confirm that the system enabled participants to develop comprehensive cognitive maps by exploring the virtual environment.

Keywords: Blind, Cognitive map, Virtual environment, Haptic interaction

Introduction

The visual sense plays a primary role in guiding a sighted person through an unknown environment and assisting him or her to reach a destination safely. People who are blind face difficulties in performing such tasks. Research on orientation and mobility (O&M) skills of people who are blind in known and unknown spaces (Passini & Proulx, 1988; Ungar, Blades, & Spencer, 1996) indicates that support for the acquisition of spatial mapping and orientation skills should be supplied at two main levels: perceptual and conceptual.

At the perceptual level, information obtained via other senses should compensate for the deficiency in the visual channel. Amendola (1969) based her pioneering work in sensory training on the systematic collection of information from the immediate environment through the haptic, audio, and olfactory senses. The word “haptic” derives from the classical Greek haptikos, “able to touch.” In this paper we use the term haptic to describe touch-mediated manual interactions with real or virtual environments (VEs) (Srinivasan & Basdogan, 1997). At the conceptual level, the focus of such training lies in supporting the development of appropriate orientation strategies (Jacobson, 1993), spatial models (Fletcher, 1980; Kitchin & Jacobson, 1997), and orientation problem solving, so as to achieve efficient cognitive mapping of a space and to apply that mapping during navigation. Research on spatial models indicates that people who are blind mainly use the route model when navigating in spaces (Fletcher, 1980).

Over the years, secondary O&M aids have been developed to help blind persons explore real spaces. The secondary aids described below are not a replacement for primary aids, such as the long cane and the guide dog. The existing inventory of O&M electronic aids encompasses more than 146 systems, products, and devices (Roentgen, Gelderblom, Soede, & de Witte, 2008). There are two types of secondary O&M aids: preplanning aids that provide the user with information before his or her arrival in the environment (e.g., verbal descriptions, tactile maps, strip maps, physical models, and talking tactile maps) and in-situ planning aids that provide the user with information about the environment on site (e.g., Sonicguide, Talking Signs, sensors embedded in the environment, audio beacons activated using cell phone technology, and GPS).
However, there are a number of limitations in the use of these preplanning and in-situ aids. For example, the limited dimensions of tactile maps and models may result in poor resolution of the spatial information they provide. They are difficult to publish and to keep up to date, and, furthermore, they are rarely available. As a result of these limitations, people who are blind are less likely to use preplanning aids in everyday life. The major limitation of the in-situ aids is that the user must gather the spatial information in the explored space itself, making it impossible to build the cognitive map in advance and creating a feeling of insecurity and dependence upon first arrival at a new space. From the perspective of safety and isolation, the in-situ aids are based mostly on auditory feedback, which in real space can reduce users’ attention and isolate them from the surrounding space, especially from auditory information such as crossing cars, auditory landmarks, or personal interactions.

The use of virtual reality in domains such as simulation-based training for learning and rehabilitation for people with disabilities has been on the rise in recent years (Schultheis & Rizzo, 2001). Research on the implementation of haptic technologies within VEs and their potential for supporting learning and rehabilitation training has been reported for people who are blind. Sound-based VEs have been researched and developed (D’Atri et al., 2007; Gonzalez-Mora, 2003; Kurniawan, Sporka, Nemec, & Slavik, 2004; Sánchez, Noriega, & Farías, 2008; Seki & Sato, 2010). These results show that users required high attention to the auditory feedback. Technological advances in haptic interface technology enable people who are blind to expand their spatial knowledge through artificially made reality with haptic and audio feedback (Evett, Battersby, Ridley, & Brown, 2009; Lahav & Mioduser, 2004; Lécuyer et al., 2003; Semwal & Evans-Kamp, 2000; Tzovaras, Nikolakis, Fergadis, Malasiotis, & Stavrakis, 2004). These results show that users were able to recognize shapes and objects and to distinguish the exact position of an object in the space. Tzovaras et al. (2004) showed that the majority of their research participants preferred VEs based on haptic and audio feedback; they perceived the feel of the virtual objects to be close to that of the physical objects. In the research by Lécuyer et al. (2003), participants criticized the predefined path, which was not compatible with exploration in real space.

The study described in this paper is part of a larger research effort comprising the design, development, and evaluation of a VE system for users who are blind (Lahav, 2003; Lahav & Mioduser, 2004, 2008). The current preliminary study aimed to examine which VE properties could provide perceptual and conceptual spatial information and allow users to gather and expand their spatial information. The two main research questions were: Which haptic, audio, and exploration tool properties in the VE did the users prefer? How do these VE properties support exploration strategies, exploration processes, and construction of a user’s cognitive map?

The BlindAid System

The BlindAid system, shown in Figure 1, was designed through active collaboration among engineers and learning scientists at the MIT Touch Lab, an expert on three-dimensional (3D) audio in VEs, and an O&M instructor from the Carroll Center for the Blind.
The system provides virtual maps for people who are blind and consists of application software running on a personal computer equipped with a haptic device and stereo headphones.

Figure 1. A user using the BlindAid system.

The BlindAid approach allows users to explore the VE freely, based on their prior real-space orientation abilities. The BlindAid therefore uses real-life spatial landmarks (haptic and auditory).

VE Overview and Scale

The current system simulates a single vertical level, such as one floor in a building. The VE is bounded by two horizontal planes, floor and ceiling, which extend infinitely in all horizontal directions. The computer’s visual display presents only simple graphics, intended for the researchers; for the user, these components are represented by haptic and audio feedback. Because the range of motion of the Phantom is only about 10 cm, the objects displayed to the user by the haptic device must normally be scaled down considerably. In contrast, the audio system plays sounds to the user at full scale.

System Properties

Haptic. The haptic device, a Desktop Phantom (SensAble Technologies), has two primary functions: (1) it controls the motion of the avatar in the VE, and (2) it provides haptic feedback, such as stiffness and texture, giving the blind user haptic cues about the space similar to those generated by a long cane. The general haptic properties of an object’s surface are characterized by four normalized parameters: stiffness, damping, static friction, and dynamic friction (SensAble Technologies, 2005, Table B-13). The three texture ridge parameters are smooth, saw tooth, and sinusoid. In addition, the user can interact with virtual ground textures. For example, when users virtually touch a marble floor, the tip of the Phantom produces a sense of smoothness.

Audio. The audio system includes three audio modes: mono, stereo, or stereo with rotation. The stereo mode allows the user to hear the direction and distance of sounds in the VE as if he or she were standing at the location of the avatar. Optionally, the user can control the orientation of the avatar by rotating the Phantom stylus about its long axis. This stereo-with-rotation mode is intended to aid in judging the direction of sounds, similar to turning one’s head. The BlindAid system includes three types of sound: contact, background, and landmark. Contact sounds are typically generated when the user avatar comes in contact with an object, in order to provide the user with information about the object. The contact sounds include earcons and audio labels. The term “earcons” is defined by Blattner, Sumikawa, and Greenberg (1989, p. 13) as “nonverbal audio messages used in the user-computer interface to provide information about some computer object, operation, or interaction.” Each component is also assigned a short additional audio feedback that includes a detailed description of the component. A background sound source is defined to be at a particular point within its specified region, playing continuously while the avatar is within the region. Landmark sounds play in response to a key press and serve as audio beacons. Up to three landmarks may be predefined by the researcher and five by the user.
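To make the surface parameterization above concrete, the following minimal C++ sketch shows one way the four normalized parameters and the texture ridge type might be represented. It is an illustration only, not the BlindAid source: the type and field names are hypothetical, and the 0-to-1 ranges follow the normalized convention cited above (SensAble Technologies, 2005).

#include <algorithm>

enum class TextureRidge { Smooth, SawTooth, Sinusoid };

struct HapticSurface {
    double stiffness;        // 0..1, resistance to penetration
    double damping;          // 0..1, velocity-dependent resistance
    double staticFriction;   // 0..1, resistance to initial sliding
    double dynamicFriction;  // 0..1, resistance while sliding
    TextureRidge ridge;      // smooth, saw tooth, or sinusoid
};

// Clamp every parameter into its normalized range before handing
// the surface to the haptic rendering loop.
HapticSurface clampSurface(HapticSurface s) {
    auto clamp01 = [](double v) { return std::clamp(v, 0.0, 1.0); };
    s.stiffness       = clamp01(s.stiffness);
    s.damping         = clamp01(s.damping);
    s.staticFriction  = clamp01(s.staticFriction);
    s.dynamicFriction = clamp01(s.dynamicFriction);
    return s;
}

// Example: a hard, smooth surface such as the marble floor
// described in the text (values invented for illustration).
const HapticSurface kMarbleFloor{0.9, 0.1, 0.1, 0.05, TextureRidge::Smooth};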
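The stereo modes can be illustrated in the same spirit. The sketch below computes left and right channel gains for a point sound source relative to the avatar; the inverse-distance attenuation and constant-power panning are generic textbook formulas assumed for illustration, not the system’s actual audio algorithm. In stereo-with-rotation mode the heading would come from the stylus roll angle; in plain stereo mode the avatar always faces north (heading = 0).

#include <cmath>

struct StereoGains { double left, right; };

// srcX/srcY and avX/avY are full-scale VE coordinates in meters;
// headingRad is the avatar's heading (0 = north, the +y axis).
StereoGains renderSource(double srcX, double srcY,
                         double avX, double avY,
                         double headingRad) {
    const double dx = srcX - avX;
    const double dy = srcY - avY;
    const double dist = std::sqrt(dx * dx + dy * dy);

    // Bearing of the source relative to the avatar's heading.
    const double bearing = std::atan2(dx, dy) - headingRad;

    // Simple inverse-distance attenuation; audio is rendered at
    // full scale, unlike the scaled-down haptic display.
    const double gain = 1.0 / (1.0 + dist);

    // Constant-power pan: a source to the right of the heading
    // drives the right channel harder.
    const double pan = std::sin(bearing);  // -1 (left) .. +1 (right)
    return { gain * std::sqrt(0.5 * (1.0 - pan)),
             gain * std::sqrt(0.5 * (1.0 + pan)) };
}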
The BlindAid system includes three modes of interface: user, evaluation, and editor modes.

User mode. The Phantom is used to control the position of the user avatar within the VE. Seven command actions on the computer’s numeric keypad allow the user to control other aspects of the system (restart, pause, start, install landmark, recall landmark, additional audio information, and head rotation). Motion of the user’s avatar is limited to the virtual workspace, so that the avatar is always contained within the workspace and always faces north. There are two methods for moving the virtual workspace: (a) The user presses one of the arrow keys; each arrow-key press shifts the workspace by half its width in the given direction. When this happens, the Phantom gently moves the user’s hand an equal distance in the opposite direction, so that the avatar position in the VE remains unchanged. (b) The user presses and holds a button on the Phantom, causing the user avatar position to be fixed in the VE. Then, similar to the way in which one repositions a computer mouse upon reaching the edge of the mouse pad, the user moves back from the physical workspace boundary, causing the virtual workspace to advance in the opposite direction.

Editor mode. A semi-automated editor can read an electronic blueprint file to import the walls of a building into a new VE; through manual editing we can then add other types of objects and define the audio and haptic feedback.

Evaluation mode. This mode allows researchers to record a user’s behavior in an experimental session for later monitoring of the user’s progress and problems. These data can be viewed directly as a text file or replayed by the system like a screen recording. As shown in Figure 2, the central display presents the user’s path (the black dots interconnected by lines). The big black dot represents the user’s avatar. The gray area represents a rubber floor; the white rectangular areas represent a marble floor; gray lines represent doors. Further technical details about the system are presented in our earlier paper (Schloerb, Lahav, Desloge, & Srinivasan, 2010).

Figure 2. Evaluation display.
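The first workspace-movement method described above involves simple bookkeeping: each arrow-key press shifts the workspace by half its width while the Phantom counter-moves the user’s hand, so the avatar’s VE position (workspace origin plus hand offset) is unchanged. A minimal 2D sketch of that logic, with hypothetical names rather than the BlindAid implementation, might read:

struct Vec2 { double x, y; };

struct Workspace {
    Vec2   origin;  // VE coordinates of the workspace's reference corner
    double width;   // workspace extent in VE units
};

// dir is a unit step such as {1, 0} for the right-arrow key.
// Returns the offset (in VE units) by which the device should
// move the user's hand in the opposite direction.
Vec2 shiftWorkspace(Workspace& ws, Vec2 dir) {
    const double step = 0.5 * ws.width;
    ws.origin.x += dir.x * step;  // slide the workspace over the VE
    ws.origin.y += dir.y * step;
    // Counter-move the hand so avatar = origin + hand stays fixed.
    return { -dir.x * step, -dir.y * step };
}

The second, press-and-hold method achieves the same invariant in reverse: the avatar is pinned in the VE while the hand’s motion back from the workspace boundary drags the workspace in the opposite direction.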
Method

Participants

The research included four participants, selected on the basis of seven criteria: totally blind, at least 21 years old, not having multiple disabilities, trained in O&M, English speaking, having onset of blindness at least two years prior to the experimental period, and comfortable with the use of computers. One participant was congenitally blind and three were adventitiously blind; one was female and three were male; ages ranged from 41 to 53 years; one was a guide dog user and three were long cane users. To evaluate the participants’ initial O&M skills, each was asked to complete a questionnaire on O&M issues (see O&M questionnaire under Instrumentation below). The results showed no differences in initial O&M ability among the participants. It is important to note that this sample is small, due to the exploratory nature of this study. We chose participants who had extensive experience as computer users and could provide a broad range of feedback about their virtual interactions, helping us improve the BlindAid system to suit the needs of people who are blind.

Variables

Two groups of dependent variables were defined: (a) the exploration process (total duration, exploration strategies, systematic exploration, number of objects found, and use of command actions); and (b) the construction of a cognitive map (structural components and objects, object location, creation of references or landmarks, spatial description, and spatial strategies).

Instrumentation

The research included two implementation instruments and four data collection instruments. The two implementation instruments were:

Simulated environments. Thirteen VEs were designed, not based on any actual space, ranging from a simple area to a complex area. We chose this simple-to-complex approach to allow the user to gradually learn how to explore the VE by using the BlindAid system. The first VE had only four walls. The second contained four walls and two objects. These first two VEs were intended to train the participants in using the system and gathering information through the VE. The next six VEs (VE3-VE8) examined the haptic feedback preferred by the user. These six VEs were similar in shape and size and included seven different square objects. One such VE (VE5) is shown in Figure 3. There were a total of 18 square objects with varied haptic parameters. Each object was classified in one of three groups: (1) varied general haptic properties (stiffness, damping, and static/dynamic friction), all with smooth texture; (2) various nonsmooth textures with the same general haptic properties; and (3) a mixture of general haptic and texture properties. As a result of an interaction with a square object, the participants received audio feedback that identified each object by a spoken alphabet letter.

Figure 3. The fifth VE.

The next VEs provided the user with different audio modes: VE9 provided mono mode, VE10 stereo mode, and VE11 stereo with rotation. The three environments were similar but not identical: each was a rectangle of the same size and contained four objects, three attached to the walls and one placed in the interior space. VE12 and VE13 were similar to each other and focused on evaluating the navigation tools and command actions. Figure 4 presents VE13, a rectangular space with three rooms with marble floors (the white areas) and a public corridor with a rubber floor (the gray area), containing nine rectangular objects.

Figure 4. The 13th VE.

VE exploration task. Each participant was asked to explore freely, without time limitations. The researchers informed the participants that at the end of their exploration they would be asked to describe the room and its components, and to evaluate the VE features.

In addition to the above two implementation instruments, four data collection instruments were developed:

O&M questionnaire. In the first session, this 50-item questionnaire assessed the participant’s self-evaluated O&M abilities and experiences in indoor and outdoor, as well as known and unknown, environments. Some of the questions were adapted from O&M rehabilitation evaluation instruments (Dodson-Burk & Hill, 1989; Sonn, Tornquist, & Svensson, 1999; Lahav, 2003).

Observations. The participants were video recorded during the studies, and these recordings were transcribed.

Open interview. After completing the exploration task, the participants were asked to describe the space verbally. This open interview was video recorded and transcribed.
Modeling kit. The participants used a modeling kit to construct a physical model of the space. The kit was made up of a tactile structure of the VE drawing embossed on paper, with two alternative options for the room’s structure, and Lego blocks, which represented the objects.

Computer log. The BlindAid system enables the instructor to record the user’s activities in the VEs and to present the information in the evaluation mode (see Figure 2).

Data Analysis

To evaluate the participants’ performance, we applied coding schemes developed mostly in previous research studies (Lahav, 2003; Lahav & Mioduser, 2008). Based on the data collection instruments and the O&M literature (Hill et al., 1993; Jacobson, 1993; Jacobson, Kitchin, Garling, Golledge, & Blades, 1998), four O&M rehabilitation specialists designed and constructed each of the two coding schemes. Each coding scheme covered one group of dependent variables: the exploration process or the construction of the cognitive map (further details are presented under Variables above). Using qualitative methods, the researchers analyzed each participant’s data (video recordings, transcriptions, and computer logs). The computer log data were also parsed and analyzed quantitatively (in Excel).

Procedure

Throughout the BlindAid intervention, all participants worked and were observed individually. After completing the O&M questionnaire, they explored the VEs, starting with VE1 and finishing with VE13. To determine favorable haptic parameters, participants were asked to explore VE3 to VE8; after each VE, they were asked to list the objects they felt most comfortable interacting with. The next three VEs (VE9-VE11) focused on the auditory parameters: the participants explored each VE and afterward answered questions about the audio feedback. Last, the participants were asked to explore VE12 and VE13, after which they answered questions about the navigation tools and command actions. In addition, following the exploration of VE9 to VE13, the participants were asked to give a verbal description of the space and to construct a physical model of it using the modeling kit. After VE13, the participants were asked to comment again on the VE features and their future capabilities. The experiments spanned two or three meetings (about three hours in total).

Results

Question 1: Which haptic, audio, and exploration tool properties in the VE did the users prefer?

Haptic Properties

The haptic results (VE3-VE8) show that 50%-88% of the participants expressed a preference for nine of the 18 test objects; six of these were objects with general haptic properties and smooth texture. All the participants indicated that they preferred interactions with smooth and solid VE components because they were less confusing and required gathering less information. For example, B. said: ‘It can be too confusing, too many textures … each object having their own texture will be almost too much, yes! keep it simple and solid.’ Among the preferred objects were two haptic types: hard and soft. Nevertheless, all the participants mentioned that, for safety reasons, they would prefer that certain components (e.g., stairs, an alarm door) be designated with a unique rigid texture. The participants differentiated between haptic and audio feedback: the Phantom helped them locate objects in the VE and trace the structure and object shapes, while the audio feedback helped them gather additional information about the objects.
As B. explained, ‘I use the Phantom for orientation. Audio gives me more information about the object description. May be too distracting with both’. C. said, ‘As soon I hear I’m touching something, I don’t care what it feels like anymore’.

Audio Properties

Three audio modes (mono, stereo, and stereo with rotation) were tested in different VEs (VE9-VE11). At the end of the three audio tests, three of the participants chose stereo as their preferred audio feedback, and one chose mono. A., the participant who chose mono, explained, ‘I think I found it [stereo] more confusing. It was sort of an additional variable I had trouble tracking, I didn’t find the rotation very helpful, I think it was confusing…the stereo was necessary only when determining which direction to go in the map…’. All the other participants said that the stereo gave them a sense of the ambient sound of the space, helped them determine which direction to go in the map, and gave them more orientation to the overall space. On the other hand, the stereo with rotation was an additional factor that they needed to track, imagining their orientation in the VE while simultaneously hearing the audio feedback. For example, D. said, ‘I didn’t find it terribly useful; [it] added another dimension, which I didn’t find necessary… it adds another layer of complexity that doesn’t help, it lets me confuse myself more’.

Besides the audio modes, the BlindAid included three types of sound: contact, background, and landmark. All the participants mentioned that the short contact feedback needed to be clear and recognizable, and all agreed on the way the VE components were represented by earcons or labeled audio effects. The additional audio feedback was available on demand; usually, after exploring the VE, the participants repeatedly used this tool to gather more information about the VE’s components. As A. said, ‘Actually I like to have both, because my memory does not want to remember any specifics, so I use the additional description, I am not always sure about name or something I need to remember’. The participants did not report being overloaded by the audio feedback. Similar to background sound in real space (e.g., street noise), the VE background audio effect assisted the users in orienting themselves in the space. The continuous VE background sound with the stereo mode was effective and necessary.

Exploration Tool Properties

Each participant was trained to use each workspace-movement method in a specific VE (VE12 and VE13). In the end, all four participants chose the Phantom method for moving the virtual workspace, finding it a more intuitive and natural motion. It was more immediately associated with the long cane and conveyed a sense of participation and of having control over movements. For example, A. said, ‘In my mind [it] is associated with a cane or a sort of a traveling feat, so it sort of was more immediate…it gave me the sense of actually moving. Sense of having some participation and control over the movement’. For others it was simply a more natural motion. D. said, ‘It seemed more of a natural motion to me; with the Phantom every time I push the button I imagine I was making the pen stick to the surface and then move it over… I can mentally see myself doing that, that just seemed more natural to me’. By using this method, participants were able to drag the workspace at an angle, rather than only left and right or forward and backward as with the arrow keys.
In addition, by using the Phantom button, the participants developed a new strategy in which the button served as a location anchor during the workspace-movement process. The participants were also able to install and recall landmarks. They used this tool mostly in complex VEs, and they usually installed only two landmarks (out of the five available).

Question 2: How do these VE properties support exploration strategies, exploration processes, and construction of a user’s cognitive map?

Exploration Strategies and Processes

The computer logs and the videotapes provided an understanding of the participants’ exploration strategies and processes. We observed that as the complexity of the environment increased, so did the mean exploration time. The three environments VE9-VE11 were rectangles of the same size, each containing four objects, and their mean exploration times were similar (VE9: 04:34; VE10: 03:25; VE11: 04:16). VE13 was a more complex space (Figure 4); its mean exploration time was 15:25.

The different strategies employed by the participants included: (a) the perimeter strategy, where the participant walked along the boundary of the VE; (b) the grid strategy, where the participant explored the room interior; (c) the exploration of object areas (EOA) strategy, where the participant walked around objects trying to create or identify landmarks; and (d) the object-to-object (Obj-Obj) strategy, where the participant walked from one object to another. These strategies are shown in Figure 5.

Figure 5. Exploration strategies.

The primary strategy used by the participants to explore the VEs was the perimeter strategy (75%-100%). The grid strategy and the EOA strategy were used as secondary exploration strategies (50%-100%). For example, Figure 6 shows the strategies used by participant C. in VE10: this participant first used the perimeter strategy and later the grid and EOA strategies.

Figure 6. The third participant in VE10.

During the VE exploration process, all the participants employed a systematic strategy, and they successfully found all the structural components and objects placed in the five environments. Among the command actions, the additional audio and the pause/start were used the most. In addition, the participants preferred to place recall landmarks close to objects rather than in open spaces; this tool was used in the more complex VEs, such as VE13.

Cognitive Map Components

It was observed that a combination of verbal description and model construction allowed the participants to successfully recall what they had learned during exploration. Analyses of the participants’ verbal descriptions are shown in Table 1, and evaluations of the participants’ models are shown in Table 2. The percentages in these tables present the average recall and the use of a spatial description or a spatial strategy during verbal description and model construction. The participants were more likely to describe the space verbally by means of the structure or object components than by their locations. In three of the five VEs, a perimeter exploration strategy led to a perimeter spatial description. As a spatial strategy, the participants alternated between the route model and the map model during the five verbal descriptions.

Table 1. Verbal description.
Table 2. Model construction.

Table 2 presents the participants’ spatial recall during the construction of the model.
In VE9, most of the participants did not choose the correct structure components (shape and structure distribution); in VE10-VE13, 75% of the participants chose the correct structure components. In regard to the objects, most of the object components were recalled (88%-100%), and the orientations of components were better remembered (84%-100%) than their locations (80%-85%). This trend was observed in all VEs. As spatial descriptions, the participants mainly used two types: perimeter or categorization (grouping components or areas).

After the experiments, the participants were asked: Do you think that the VE can assist you and others in exploring unknown places? Do you have other suggestions? The participants mentioned that it was easy to learn how to collect spatial information by using the BlindAid; it resembled the way they explored real space with a long cane. As a result of exploring unknown spaces with the BlindAid, A. started to examine his exploration process and how he understood spaces: ‘In my mind [it] is associated with a cane or a sort of a traveling feat. It gave me the sense of actually … [a] sense of having some participation and control over the movement. It got me thinking about how I represent spaces that I go to. It also opens the possibility of getting blind people to think about how they think about the environments they are walking through. I think it is pretty useful, it makes me realize something about the way I understand spaces’. Unlike A., participant B. mentioned that he felt the construction of the cognitive map was very slow: ‘It was definitely more useful than I thought in the beginning… it was easier but it is still tiring. This is a very slow way of building up a map. My brain has to wait and get each new piece at different times and build up things’.

Thinking about new ideas to integrate into the VE, D. wished for the ability to explore slopes and staircases. Others would have liked the Phantom to take them to a landmark automatically, instead of the landmark acting like an audio beacon. C. and D. suggested new methods for exploring the spaces. C. expressed, ‘I would love to see the ability to do multiple levels. I always want to know where I am in the building, and if I were to just immediately drop to the ground floor, which way would I go to get out’. For artistic reasons, D. would have liked to explore a building’s structural shape from the outside: ‘I am not sure how practical it is but it would be interesting to have an example of a well-known structure, like the Eiffel tower or the Taj Mahal. I think it could create some additional interest [among] blind people themselves and how to use this information’.

Reflecting their lack of access to spatial information in real spaces, the participants expressed the potential of the VE. A. said, ‘Sometimes your perception of a space can be very different from reality and seeing that on a virtual map can be very clarifying’. C. described his difficulties in navigating a familiar public building: ‘I wish I [could] give you an order for all the … Court House, South Station, Library. The public library would be excellent, I can never get in that building and downstairs to where I want to be, because I do not understand the way it’s laid out. I know that there are rooms down there, there are several conference rooms and the resource lab but I cannot walk [by] myself and find my way from the front door to the resource lab. I always have to have someone show me where it is.
Because I cannot get a map in my mind to where that room is from the front door. How about Kendall Station, the T stop, could you model the inbound entrance station and the outbound station and have the road in between… That is what is so confusing to so many people is getting down. The Hynes auditorium [Station], I [have used] that for years and years and I know that there is a way to get out of there and takes you up and puts you on Newbury street, and I do not know where that is. I always come out to Mass. Avenue’.

Discussion

The research reported here is part of an effort to understand the unique haptic, audio, and navigation tool properties that can support users who are blind in exploring VEs independently. These results helped us elucidate several issues concerning the forthcoming design of VEs for people who are blind and determine the contribution of the BlindAid system to the exploration and learning of spaces by people who are blind.

Keep it Simple

Throughout the research, the participants asked for simple, clear, and short spatial information that would be easy to remember and to track. These requests appear in their evaluations of the haptic and audio feedback and of the exploration tools.

User-Defined Setting Configuration

Lécuyer et al. (2003) allowed their users to navigate the VE along a predefined path at a constant speed, and the participants’ criticisms noted in that research concerned the ‘passivity’ of the navigation; the uniform path and speed were unlike real-life exploration. In contrast, the BlindAid system gives participants the ability to explore the space in a free-style mode and to control the amount of haptic and auditory information. For example, the participants chose to move the virtual workspace by using the Phantom method, ‘having some participation and control over the movement’, as A. said. In addition, the participants preferred the general haptic properties with smooth-texture feedback. By choosing these, they effectively asked for haptic feedback that depends on their interaction with the object, unlike the texture feedback, which causes vibration to the user’s hand in each interaction without the ability to control it. Furthermore, the participants were able to control the amount of audio feedback they wanted to receive, and the results show that they used the additional audio feedback as needed.

Implementation of Prior Orientation Behavior

The high degree of similarity between the Phantom device and the long cane, together with the exploration methods supported by the VE, contributed to the users’ ability to transfer knowledge from real space to the VE. These features enabled participants to implement exploration patterns they commonly used in real spaces, but in a qualitatively different manner. In addition, the participants combined the real-space strategies with the exploration tools that are unique to the VE. Similar results were reported in previous studies on spatial performance by sighted participants (Witmer, Bailey, Knerr, & Parsons, 1996; Darken & Peterson, 2002) and in our previous research (Lahav & Mioduser, 2004).

BlindAid as an Orientation Aid

Using the BlindAid system to explore unknown spaces elicited metathinking on the part of the participants: How do they represent spaces? How do they think about spaces? As an O&M aid, the BlindAid can assist people who are blind to think about how they represent spaces.
As a result of the lack of accessibility of spatial information in familiar and unfamiliar spaces (places they had visited over the past 7-20 years), they still need the assistance of sighted people to find their way. It seems that they consider the BlindAid system an orientation tool that allows them to access spatial information independently, and to that end each participant gave the researchers a list of suggestions for future VEs.

Conclusions: Limitations and Future Implications

There are limitations to the research conclusions inherent in any preliminary study. In the near future we are planning a follow-up study with 10 participants who are blind. In the current research, the participants constructed verbal descriptions and physical models instead of navigating in a real space; the latter has greater external validity, with some tradeoffs. The decision not to use real spaces was based on the availability and accessibility of spaces of different shapes and sizes, and on the number of components contained in the space during the research period.

This study’s results have important implications for the continuation of the research and also for implementation. Further studies should examine the participants’ exploration of unknown environments and the application of this spatial knowledge in the corresponding real spaces via real-space orientation tasks. Additional variables to be studied should relate to the participants’ development of comprehensive cognitive maps for indoor and outdoor spaces. Other research might examine the participants’ ability to construct cognitive maps from exploration of several vertical levels in the VE. Finally, a comparison between types of exploration (free-style mode versus a predefined path) should examine their effects on exploration ability, construction of cognitive maps, and application of this knowledge in real space, as well as their impact on participants’ motivation and engagement in exploring new spaces. The new VEs to be researched and developed will need to be simple and easy to learn, allowing people to operate independently and to collect spatial information in a short period of time.

At the implementation level, the BlindAid system could play a central role in four potential applications. First, a training simulator for O&M rehabilitation of the newly blind would allow the practice of O&M skills with extra time in a safe environment. Second, an O&M diagnostic tool would allow the O&M specialist to track and observe the participant’s spatial behavior, such as O&M skills, spatial strategies, and O&M problem solving. Third, as the number of people traveling for pleasure and business rises and the need for new spatial information increases, the BlindAid could support people who are adventitiously or congenitally blind in exploring and collecting spatial information in advance; this spatial information could be made available and accessible via the Internet, much like the visual maps that are accessible to sighted people. Finally, a multimodal (visual, audio, and haptic) spatial system could provide spatial information for a broad population (including people who are blind, visually impaired, or elderly) for navigation in shopping areas, public buildings, transportation spaces, or an academic campus.

Acknowledgments

This research was partially supported by a grant from The National Institutes of Health - National Eye Institute (Grant No.
5R21EY16601-2), and partially supported by The European Commission, Marie Curie International Reintegration Grants (Grant No. FP7-PEOPLE-2007-4-3-IRG). We appreciate our discussions with Jay Desloge, who developed the audio system, and thank the Carroll Center for the Blind, Newton, MA, for its collaboration and support during the BlindAid system design and research. We thank the four anonymous participants for their time, efforts, and ideas.

References

Amendola R. Touch kinesthesis and grasp (haptic perception). Unpublished manuscript, held by the Carroll Center for the Blind, Newton, MA; 1969.

Blattner MM, Sumikawa DA, Greenberg RM. Earcons and icons: their structure and common design principles. Human-Computer Interaction. 1989;4(1):11–44.

Darken RP, Peterson B. Spatial orientation, wayfinding and representation. In: Stanney KM, editor. Handbook of Virtual Environments: Design, Implementation, and Applications. Hillsdale, NJ: Erlbaum; 2002.

D’Atri E, Medaglia CM, Serbanati A, Ceipidor UB, Panizzi E, D’Atri A. A system to aid blind people in the mobility: a usability test and its results. Paper presented at the Second International Conference on Systems; Sainte-Luce, Martinique. 2007.

Dodson-Burk B, Hill EW. Preschool orientation and mobility screening. A publication of Division IX of the Association for Education and Rehabilitation of the Blind and Visually Impaired. New York, NY: American Foundation for the Blind; 1989.

Evett L, Battersby S, Ridley A, Brown DJ. An interface to virtual environments for people who are blind using Wii technology -- mental models and navigation. Journal of Assistive Technologies. 2009;3(2):30–39.

Fletcher JF. Spatial representation in blind children 1: development compared to sighted children. Journal of Visual Impairment and Blindness. 1980;74(12):381–385.

Gonzalez-Mora JL. VASIII: development of an interactive device based on virtual acoustic reality oriented to blind rehabilitation. Jornadas de Seguimiento de Proyectos en Tecnologías Informáticas; 2003.

Hill E, Rieser J, Hill M, Hill M, Halpin J, Halpin R. How persons with visual impairments explore novel spaces: strategies of good and poor performers. Journal of Visual Impairment and Blindness. 1993;87(8):295–301.

Jacobson WH. The Art and Science of Teaching Orientation and Mobility to Persons with Visual Impairments. New York, NY: American Foundation for the Blind; 1993.

Jacobson RD, Kitchin R, Garling T, Golledge R, Blades M. Learning a complex urban route without sight: comparing naturalistic versus laboratory measures. Paper presented at the International Conference of the Cognitive Science Society of Ireland, Mind III; University College Dublin, Ireland. 1998.

Kitchin RM, Jacobson RD. Techniques to collect and analyze the cognitive map knowledge of persons with visual impairment or blindness: issues of validity. Journal of Visual Impairment and Blindness. 1997;91(4):360–376.

Kurniawan SH, Sporka A, Nemec V, Slavik P. Design and user evaluation of a spatial audio system for blind users. Paper presented at the 5th International Conference Series on Disability, Virtual Reality and Associated Technologies (ICDVRAT’04); Oxford, UK. 2004.

Lahav O. Blind persons’ cognitive mapping of unknown spaces and acquisition of orientation skills, by using audio and force-feedback virtual environment. Doctoral dissertation, Tel Aviv University, Israel (Hebrew); 2003.

Lahav O, Mioduser D.
Exploration of unknown spaces by people who are blind, using a multisensory virtual environment (MVE). Journal of Special Education Technology. 2004;19(3):15–23.

Lahav O, Mioduser D. Construction of cognitive maps of unknown spaces using a multi-sensory virtual environment for people who are blind. Computers in Human Behavior. 2008;24(3):1139–1155.

Lécuyer A, Mobuchon P, Mégard C, Perret J, Andriot C, Colinot JP. HOMERE: a multimodal system for visually impaired people to explore virtual environments. Paper presented at the IEEE Virtual Reality Conference 2003; Los Angeles, CA. 2003.

Passini R, Proulx G. Wayfinding without vision: an experiment with congenitally blind people. Environment and Behavior. 1988;20(2):227–252.

Roentgen UR, Gelderblom GJ, Soede M, de Witte LP. Inventory of electronic mobility aids for persons with visual impairments: a literature review. Journal of Visual Impairment and Blindness. 2008;102(11):702–724.

Sánchez J, Noriega G, Farías C. Mental representation of navigation through sound-based virtual environments. Paper presented at the 2008 AERA Annual Meeting; New York, NY. 2008.

Schloerb DW, Lahav O, Desloge JG, Srinivasan MA. BlindAid: virtual environment system for self-reliant trip planning and orientation and mobility training. Paper presented at the IEEE Haptics Symposium; Waltham, MA. 2010. pp. 363–370.

Schultheis MT, Rizzo AA. The application of virtual reality technology for rehabilitation. Rehabilitation Psychology. 2001;46(3):296–311.

Seki Y, Sato T. A training system of orientation and mobility for blind people using acoustic virtual reality. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010. [PubMed]

Semwal SK, Evans-Kamp DL. Virtual environments for visually impaired. Paper presented at the 2nd International Conference on Virtual Worlds; Paris, France. 2000.

SensAble Technologies. OpenHaptics™ Toolkit, version 2.0: API Reference. 2005.

Sonn U, Tornquist K, Svensson E. The ADL taxonomy - from individual categorical data to ordinal categorical data. Scandinavian Journal of Occupational Therapy. 1999;6(1):11–20.

Srinivasan MA, Basdogan C. Haptics in virtual environments: taxonomy, research status, and challenges. Computers and Graphics. 1997;21(4):393–404.

Tzovaras D, Nikolakis G, Fergadis G, Malasiotis S, Stavrakis M. Design and implementation of haptic virtual environments for the training of visually impaired. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2004;12(2):266–278. [PubMed]

Ungar S, Blades M, Spencer C. The construction of cognitive maps by children with visual impairments. In: Portugali J, editor. The Construction of Cognitive Maps. Netherlands: Kluwer Academic; 1996.

Witmer BG, Bailey JH, Knerr BW, Parsons KC. Virtual spaces and real world places: transfer of route knowledge. International Journal of Human-Computer Studies. 1996;45(4):413–428.