1. Introduction
Mobile interactive systems have become common tools for assisting pedestrian mobility. For example, more and more pedestrians use smartphones equipped with navigation software, particularly to navigate unfamiliar places. Moreover, continuous research in the area of pedestrian navigation explores novel ways of conveying navigation guidance to users in order to improve usability [1,2,3,4,5,6,7]. Although such tools are considered very useful, several studies have shown that this form of passive assistance does not help pedestrians memorise journeys, nor does it help them become familiar with the environment [8,9,10]. Indeed, on smartphones, common navigation systems support wayfinding using directive instructions [11]. As shown by Gardony and colleagues, such systems do not provide opportunities to develop spatial skills, and they tend to reduce spatial awareness [12,13]. These effects can have a broader negative impact on the cognitive abilities of users, especially with frequent use. Studies involving taxi drivers [10] have demonstrated that enhanced navigation skills are positively correlated with increased activity and gray matter in the hippocampus. Konishi and Bohbot also showed that reduced navigational skills can contribute to cognitive decline during normal aging [9].
Considering the potentially negative impact of traditional smartphone navigation on cognitive abilities [14], our aim is to explore how pedestrian navigation systems, operated either through smartphones or augmented reality (AR) glasses, can be used to help users memorise a route and enable them to navigate without the need for navigation systems. This paper reports on the design, methodology, and results of a user study with 20 participants focusing on mobile pedestrian navigation systems and their effects on journey memorisation. In this study, we investigate and compare the effects of smartphone-based navigation and AR-glasses-based navigation.
As reported by Ruginski and colleagues [15], traditional map-based automated navigation assistance can have a negative effect on spatial transformation abilities. They observed a negative impact on perspective taking and on environmental learning [15]. A study by Meneghetti and colleagues has shown that navigation learning relies on visuospatial abilities, not verbal ones [16]. This important result explains why automated navigation systems that do not exercise the user’s visuospatial skills lead to a degradation in spatial knowledge acquisition [17].
For several years, researchers have been interested in how to integrate landmarks into the design of navigation systems. Landmark-based navigation systems are considered to help improve the user’s knowledge of the environment during wayfinding tasks [18]. Work by Montello [19] showed that landmarks are salient entities in the environment and can help improve the user’s survey knowledge during wayfinding tasks: the representation of knowledge about a specific location progresses from knowing landmarks, to paths, and finally to global survey knowledge. Our work is motivated by these findings. Our aim is to study the effects of landmark-based navigation systems, deployed on smartphones or AR glasses, in helping users learn to navigate the environment, ultimately without the need for wayfinding technology.
Spatial knowledge acquisition in pedestrian navigation has been investigated by a limited number of studies comparing different interaction modalities or devices.
1.2. Spatial Knowledge Acquisition with Pedestrian Navigation Technologies
In [24], the authors compared four navigation system variations implemented on tablets that differed in their level of automation and attention demand. Automation in that context referred to whether pathfinding notifications were triggered automatically by the application or manually at the user’s request. The authors explained that “participants using systems with higher levels of automation seemed not to acquire enough spatial knowledge to reverse the route without navigation errors”. Amirian et al. [25] designed a tablet-based AR navigation system that relied on automated landmark recognition, with superimposed navigational signage (orientation arrows and distance information). This system was compared to a turn-by-turn, map-based navigation system and demonstrated significant improvements in the acquisition of local knowledge. These results further highlight the limited effects of map-based navigation with respect to spatial knowledge acquisition. More recently, a number of studies have explored the use of landmarks for pedestrian navigation, coupled with different interaction modalities, aiming to improve the acquisition of route knowledge and thus promote autonomy in orientation [26,27].
The work by Liu et al. [28] focused on the use of landmarks in an indoor environment. They explored the use of iconic holograms in a mixed-reality environment supported by HoloLens glasses. Augmented landmarks were employed to address the challenge that, in indoor environments, corridors and rooms may look alike, making it difficult for people to find physical landmarks to help them navigate. The results showed that virtual semantic landmarks assisted the acquisition of the corresponding knowledge, as these landmarks were the second most often labelled in the landmark-locating task [28].
A number of studies explored how the user’s interaction with the navigation system affects spatial knowledge acquisition. In [29], the findings suggested that higher user engagement with the navigation system correlated with better spatial knowledge acquisition. Huang and colleagues [30] explored the effects of smartphone map-based navigation through three interaction modes: (1) visual map, (2) voice, and (3) augmented reality (through the smartphone). They found no significant difference in spatial knowledge acquisition across the three modes.
Kamilakis and colleagues [31] proposed a mobile application that addressed the practical requirements of public transport users (visualisation of nearby transit stops along with the timetable information of transit services passing by those stops). They studied the utility and experience perceived by users interacting with mobile augmented reality (MAR) vs. map-based mobile application interfaces. MAR was shown to offer an enjoyable, intuitive interaction model, demonstrating its potential for directly linking digital content with the user’s physical environment and thereby enabling the experiential exploration of surrounding elements. Another strength of sensor-based MAR applications is the improved sense of orientation relative to surrounding physical elements (e.g., unambiguous interpretation of the direction towards a point of interest). The authors concluded that MAR interfaces still need to resolve major usability issues before they can be regarded as an indisputable substitute for traditional map-based interfaces. Most likely, emerging devices such as smart glasses (which involve fundamentally different methods for interacting with digital content) will affect the quality of the experience perceived by users. In the study in [8], GPS (Global Positioning System) users traveled longer distances and made more stops during a walk than map users and direct-experience participants. Moreover, GPS users traveled more slowly, made larger direction errors, drew sketch maps with poorer topological accuracy, and rated wayfinding tasks as more difficult than direct-experience participants. In another study [32], the authors proposed a system for pedestrian navigation assistance that included global landmarks (for example, the Eiffel Tower) in navigation instructions. They found better performance than turn-by-turn navigation systems, both in navigating the environment and in building a more accurate mental map.
Reflecting on the existing research on pedestrian navigation systems, we can observe that there has been extensive work on map-based systems and their negative effects on spatial knowledge acquisition. With respect to AR systems, the existing work focuses primarily on AR delivered through smartphones or tablets. Although these studies demonstrate that landmark-based navigation can improve the acquisition of spatial knowledge, no work has explored the effects of AR glasses as the interaction medium for navigation. To the best of our knowledge, no studies have addressed spatial knowledge acquisition using AR glasses and smartphones with a landmark-based navigation system. Furthermore, when considering approaches for assessing spatial knowledge, there is limited work regarding the effects on spatial transformations (i.e., the ability to mentally change the perspective of space), which are positively correlated with environmental learning during memory tests.
The next section presents the methodology we followed for the study, including the navigation path definition, the navigation system design, and the data capture. The results concerning the memory tests are presented in Section 3. The paper ends with a discussion of the obtained results and a conclusion.
4. Discussion
Traditional navigation aid systems are known to be less effective in supporting spatial knowledge acquisition. This paper explored the use of landmark-based assistance, as advocated by other researchers [18,43], and its benefit for memorising paths and acquiring survey knowledge. Its objective was to investigate AR glasses and smartphones for pedestrian navigation and their effects on spatial knowledge acquisition. The results indicated that people using AR glasses performed better in terms of landmark memorisation than those using smartphones. These results are promising.
Concerning the methodology, Liu and colleagues [28] assessed spatial knowledge acquisition through sketch maps and landmark-locating tasks. Their study tested the ability of participants to navigate to a specific destination and return, without the support of technology, relying mainly on landmarks. Our methodology was influenced by this work and involved tasks of positioning landmarks and guidance information on a map.
With respect to memorisation, Lu et al. compared the memorisation of participants across three conditions [27]. During the experiment, conducted within a VR environment, participants were expected to memorise landmarks. Immediately afterwards, memory tests were performed using photos taken within the VR environment. In effect, their experiments assessed short-term memory acquisition immediately after the activity. In our study, we extended the assessment of memorisation by including memorisation tests one week later, involving a physical repetition of the trip without navigation guidance followed by related memorisation tests. We believe that this approach delivers more robust evidence of the memorisation of landmarks over longer periods of time.
The work by Afrooz et al. compared active and passive navigation to assess their effects on spatial and visual memory during wayfinding [44]. The authors asked participants to navigate a predefined route within a university campus, either by following the experimenter (passive navigation) or by leading them (active navigation). The authors did not employ a virtual environment to investigate wayfinding (as is commonly done in similar works), in order to ensure that any findings translate directly to the real physical environment [45]. A university campus rather than a city was selected to control confounding variables and avoid factors affecting the data collection process, such as crowds and noise. Knowledge acquisition was assessed through a range of tests involving scene recognition, recollection of the left–right orientation of scenes (mirror-image discrimination), and route recollection (sketch maps). However, all the memory tests were performed immediately after the navigation tasks, allowing only the analysis of short-term memory acquisition. In our work, we explicitly attempted to assess both the short-term and longer-term (a week later) effects of landmark memorisation following the use of wayfinding technologies.
Reflecting on the original hypothesis for this study, “After using navigation technologies, the AR Glasses’ experience supports landmark memorisation more effectively than Smartphones”, the memory test analysis showed that AR glasses offered better support for landmark memorisation and also for survey knowledge acquisition when both memorisation scores were combined. We acknowledge that the results of this study are based on a limited-scale experiment performed within a single location. In that respect, we consider that further studies could allow for a deeper understanding and a more thorough comparison of the effects of the two modalities under different conditions.
Based on the outcomes of our study, we can hypothesise about the factors that may have led to these results. One clear advantage of connected glasses may be that the user is able to look at their surroundings (and therefore at landmarks) constantly during the walk, without being severely disrupted by any digital information displayed via augmented reality. With a smartphone, on the other hand, the user’s gaze often shifts from the environment to the smartphone. One hypothesis, therefore, is that being able to perceive the environment along the way makes it easier to memorise the environment and its constituent elements (landmarks). In a complementary study involving users with intellectual disabilities, several participants expressed such a view. For instance, one participant explained: “I rather choose to use these AR Glasses than my smartphone with Google Maps. At least, these glasses are in front of my eyes and wouldn’t need to look down. If I look all the time at my smartphone, I won’t be able to see the street. They will serve when I go to unfamiliar places.” [46].
Comparison with Previous Work and Contribution. The main contribution of this work was to investigate the effects of AR glasses, compared to smartphones, on spatial knowledge acquisition. This research question has not been addressed before using a similar approach. Previous studies have compared navigation devices without evaluating memorisation, and in those that did, the AR experience was limited to smartphones.
Our results show that AR glasses can be a suitable medium for providing active landmark-based assistance for pedestrian navigation, offering an experience that favours spatial knowledge acquisition. It is reasonable to expect that using similar systems will help people gain improved spatial awareness and greater autonomy in their mobility.
Limitations and Future Studies. The generalisability of the results is limited by the two groups having different sizes for the memory test. In addition, our study navigated the two groups of participants along one single path. Despite these limitations, the obtained results are valid for answering the research question and support our hypothesis. The statistical method used is insensitive to the difference in group sizes. Moreover, the navigation path was identified following a rigorous method based on local residents’ knowledge and expertise in the area.
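For illustration of why unequal group sizes need not undermine the comparison, the sketch below applies a one-sided Mann–Whitney U test (a rank-based test that tolerates unequal group sizes) to the global Memory Test 1 scores from Tables 12 and 13, excluding the missing ARG8 data. The choice of test here is an assumption for demonstration purposes only and is not a reproduction of the reported analysis.

```python
# Illustrative only: a rank-based test such as Mann-Whitney U is insensitive
# to unequal group sizes. Scores are the global Memory Test 1 values from
# Tables 12 and 13; ARG8 is excluded because its data are missing (NA).
from scipy.stats import mannwhitneyu

arg_test1 = [6.67, 8.33, 6.94, 6.67, 8.06, 8.33, 3.06, 8.33, 5.83]          # n = 9
sp_test1 = [6.67, 5.0, 4.17, 3.33, 7.22, 6.67, 4.72, 5.0, 4.44, 7.5]        # n = 10

# One-sided alternative: ARG scores tend to be higher than SP scores.
stat, p_value = mannwhitneyu(arg_test1, sp_test1, alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```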
Author Contributions
Conceptualization, A.L., C.K., S.L. and C.E.; methodology, A.L., C.K., S.L. and C.E.; software, A.L.; validation, A.L., P.N. and C.E.; formal analysis, A.L.; data curation, A.L.; writing—original draft preparation, A.L., C.K., S.L. and C.E.; writing—review and editing, A.L., C.E., C.K., S.L. and P.N.; visualization, A.L.; supervision, C.K., C.E. and S.L.; funding acquisition, A.L., C.K., S.L. and C.E. All authors have read and agreed to the published version of the manuscript.
Funding
I-SITE Univ. Lille-Europe (mobility grants: EWAID and Go-Smart).
Data Availability Statement
Not applicable.
Acknowledgments
The authors thank the Département du Nord and the Agence Nationale pour la Rénovation Urbaine (ANRU) for their financial support within the framework of the ValMobile project as well as the PRIMOH research pole. They thank the I-SITE Univ. Lille-Europe (mobility grants: EWAID & Go-Smart). They also thank all the participants of the study. They are also grateful to the reviewers for their many constructive comments.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. von Jan, V.; Bertel, S.; Hornecker, E. Information Push and Pull in Tactile Pedestrian Navigation Support. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI ’18), Barcelona, Spain, 3–6 September 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 55–62.
2. Morais, I.; Condado, M.; Quinn, R.; Patel, S.; Morreale, P.; Johnston, E.; Hyde, E. Design of a Contextual Digital Wayfinding Environment. In Proceedings of the Design, User Experience, and Usability. Application Domains; Marcus, A., Wang, W., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 221–232.
3. Akpinar, E.; Yeşilada, Y.; Temizer, S. The Effect of Context on Small Screen and Wearable Device Users’ Performance—A Systematic Review. ACM Comput. Surv. 2020, 53, 1–44.
4. da Fonseca, F.P.; Conticelli, E.; Papageorgiou, G.; Ribeiro, P.; Jabbari, M.; Tondelli, S.; Ramos, R. Use and Perceptions of Pedestrian Navigation Apps: Findings from Bologna and Porto. ISPRS Int. J. Geo Inf. 2021, 10, 446.
5. Tachiquin, R.; Velázquez, R.; Del-Valle-Soto, C.; Gutiérrez, C.A.; Carrasco, M.; De Fazio, R.; Trujillo-León, A.; Visconti, P.; Vidal-Verdú, F. Wearable Urban Mobility Assistive Device for Visually Impaired Pedestrians Using a Smartphone and a Tactile-Foot Interface. Sensors 2021, 21, 5274.
6. Ruginski, I.; Giudice, N.; Creem-Regehr, S.; Ishikawa, T. Designing mobile spatial navigation systems from the user’s perspective: An interdisciplinary review. Spat. Cogn. Comput. 2022, 22, 1–29.
7. Prandi, C.; Barricelli, B.R.; Mirri, S.; Fogli, D. Accessible wayfinding and navigation: A systematic mapping study. Univers. Access Inf. Soc. 2023, 22, 185–212.
8. Ishikawa, T.; Fujiwara, H.; Imai, O.; Okabe, A. Wayfinding with a GPS-based mobile navigation system: A comparison with maps and direct experience. J. Environ. Psychol. 2008, 28, 74–82.
9. Konishi, K.; Bohbot, V.D. Spatial navigational strategies correlate with gray matter in the hippocampus of healthy older adults tested in a virtual maze. Front. Aging Neurosci. 2013, 5, 1–8.
10. Maguire, E.A.; Gadian, D.G.; Johnsrude, I.S.; Good, C.D.; Ashburner, J.; Frackowiak, R.S.; Frith, C.D. Navigation-related structural change in the hippocampi of taxi drivers. Proc. Natl. Acad. Sci. USA 2000, 97, 4398–4403.
11. Wiener, J.M.; Buchner, S.J.; Holscher, C. Taxonomy of human wayfinding tasks: A knowledge-based approach. Spat. Cogn. Comput. 2009, 9, 152–165.
12. Gardony, A.L.; Brunyé, T.T.; Mahoney, C.R.; Taylor, H.A. How Navigational Aids Impair Spatial Memory: Evidence for Divided Attention. Spat. Cogn. Comput. 2013, 13, 319–350.
13. Dahmani, L.; Bohbot, V.D. Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Sci. Rep. 2020, 10, 6310.
14. Lakehal, A.; Lepreux, S.; Letalle, L.; Kolski, C. From wayfinding model to future context-based adaptation of HCI in Urban Mobility for pedestrians with active navigation needs. Int. J. Hum.-Comput. Interact. 2021, 37, 378–389.
15. Ruginski, I.T.; Creem-Regehr, S.H.; Stefanucci, J.K.; Cashdan, E. GPS use negatively affects environmental learning through spatial transformation abilities. J. Environ. Psychol. 2019, 64, 12–20.
16. Meneghetti, C.; Zancada-Menéndez, C.; Sampedro-Piquero, P.; Lopez, L.; Martinelli, M.; Ronconi, L.; Rossi, B. Mental representations derived from navigation: The role of visuo-spatial abilities and working memory. Learn. Individ. Diff. 2016, 49, 314–322.
17. Parush, A.; Ahuvia, S.; Erev, I. Degradation in Spatial Knowledge Acquisition When Using Automatic Navigation Systems. In Spatial Information Theory. COSIT 2007. Lecture Notes in Computer Science; Winter, S., Duckham, M., Kulik, L., Kuipers, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 736, pp. 238–254.
18. Siegel, A.W.; White, S.H. The development of spatial representations of large-scale environments. Adv. Child Dev. Behav. 1975, 10, 9–55.
19. Montello, D.R. Navigation. In The Cambridge Handbook of Visuospatial Thinking; Shah, A.M.P., Ed.; Cambridge Handbooks in Psychology; Cambridge University Press: Cambridge, MA, USA, 2005; pp. 257–294.
20. Rehrl, K.; Hausler, E.; Leitinger, S.; Bell, D. Pedestrian navigation with augmented reality, voice and digital map: Final results from an in situ field study assessing performance and user experience. J. Locat. Based Serv. 2014, 8, 75–96.
21. Walther-Franks, B.; Malaka, R. Evaluation of an augmented photograph-based pedestrian navigation system. In Proceedings of the International Symposium on Smart Graphics, Rennes, France, 27–29 August 2008; Springer: Cham, Switzerland, 2008; pp. 94–105.
22. Guedira, Y.; Kolski, C.; Lepreux, S. Pedestrian Navigation through Pictograms and Landmark Photos on Smart Glasses: A Pilot Study. In Proceedings of the 19th International Conference on Human-Computer Interaction, RoCHI 2022, Craiova, Romania, 6–7 October 2022; Popescu, P., Kolski, C., Eds.; Matrix Rom: Bucharest, Romania, 2022; pp. 13–20.
23. Wen, J.; Helton, W.S.; Billinghurst, M. A study of user perception, interface performance, and actual usage of mobile pedestrian navigation aides. Proc. Hum. Factors Ergon. Soc. 2013, 57, 1958–1962.
24. Brugger, A.; Richter, K.F.; Fabrikant, S.I. How does navigation system behavior influence human behavior? Cogn. Res. Princ. Implic. 2019, 4, 5.
25. Amirian, P.; Basiri, A. Landmark-based pedestrian navigation using augmented reality and machine learning. In Progress in Cartography; Springer: Cham, Switzerland, 2016; pp. 451–465.
26. Zhang, Y.; Nakajima, T. Exploring 3D Landmark-Based Map Interface in AR Navigation System for City Exploration. In Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia (MUM ’21), Leuven, Belgium, 5–8 December 2021; Association for Computing Machinery: New York, NY, USA, 2022; pp. 220–222.
27. Lu, J.; Han, Y.; Xin, Y.; Yue, K.; Liu, Y. Possibilities for Designing Enhancing Spatial Knowledge Acquirements Navigator: A User Study on the Role of Different Contributors in Impairing Human Spatial Memory During Navigation. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA ’21), Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021.
28. Liu, B.; Ding, L.; Meng, L. Spatial knowledge acquisition with virtual semantic landmarks in mixed reality-based indoor navigation. Cartogr. Geogr. Inf. Sci. 2021, 48, 305–319.
29. Brugger, A.; Richter, K.F.; Fabrikant, S.I. Distributing attention between environment and navigation system to increase spatial knowledge acquisition during assisted wayfinding. In Proceedings of the International Conference on Spatial Information Theory, Regensburg, Germany, 9–13 September 2018; Number 198809. pp. 19–22.
30. Huang, H.; Schmidt, M.; Gartner, G. Spatial Knowledge Acquisition with Mobile Maps, Augmented Reality and Voice in the Context of GPS-based Pedestrian Navigation: Results from a Field Test. Cartogr. Geogr. Inf. Sci. 2012, 39, 107–116.
31. Kamilakis, M.; Gavalas, D.; Zaroliagis, C. Mobile User Experience in Augmented Reality vs. Maps Interfaces: A Case Study in Public Transportation. In Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Lecce, Italy, 15–18 June 2016; Springer: Cham, Switzerland, 2016; pp. 388–396.
32. Wenig, N.; Wenig, D.; Ernst, S.; Malaka, R.; Hecht, B.; Schoning, J. Pharos: Improving navigation instructions on smartwatches by including global landmarks. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI—ACM 2017, Vienna, Austria, 4–7 September 2017; pp. 1–13.
33. Tversky, B. Distortions in cognitive maps. Geoforum 1992, 23, 131–138.
34. Wood, D. How maps work. Cartogr. Int. J. Geogr. Inf. Geovisualization 1992, 29, 66–74.
35. Coors, V.; Elting, C.; Kray, C.; Laakso, K. Presenting Route Instructions on Mobile Devices: From Textual Directions to 3D Visualization. Explor. Geovis. 2005, 1, 529–550.
36. Gabbard, J.L.; Swan, J.E.; Hix, D. The effects of text drawing styles, background textures, and natural lighting on text legibility in outdoor augmented reality. Presence Teleoperators Virtual Environ. 2006, 15, 16–32.
37. Rzayev, R.; Woźniak, P.W.; Dingler, T.; Henze, N. Reading on smart glasses: The effect of text position, presentation type and walking. In Proceedings of the Conference on Human Factors in Computing Systems, CHI 2018, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018; pp. 1–9.
38. Lakehal, A.; Lepreux, S.; Efstratiou, C.; Kolski, C.; Nicolaou, P. Investigating Smartphones and AR Glasses for Pedestrian Navigation and their Effects in Spatial Knowledge Acquisition. In Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’20), Oldenburg, Germany, 5–8 October 2020; ACM: New York, NY, USA, 2021; pp. 1–7.
39. Hegarty, M.; Richardson, A.E.; Montello, D.R.; Lovelace, K.; Subbiah, I. Development of a self-report measure of environmental spatial ability. J. Intell. 2002, 30, 425–447.
40. Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. 2009, 4, 114–123.
41. Brooke, J.; Jordan, P.W.; Thomas, B.; Weerdmeester, B.A.; McClelland, I.L. Usability Evaluation in Industry; Taylor and Francis: New York, NY, USA, 1996; pp. 189–194.
42. Rehrl, K.; Häusler, E.; Steinmann, R.; Leitinger, S.; Bell, D.; Weber, M. Pedestrian navigation with augmented reality, voice and digital map: Results from a field study assessing performance and user experience. In Advances in Location-Based Services. Lecture Notes in Geoinformation and Cartography; Springer: Cham, Switzerland, 2012; pp. 3–20.
43. Millonig, A.; Schechtner, K. Developing landmark-based pedestrian-navigation systems. IEEE Trans. Intell. Transp. Syst. 2007, 8, 43–49.
44. Afrooz, A.; White, D.; Parolin, B. Effects of active and passive exploration of the built environment on memory during wayfinding. Appl. Geogr. 2018, 101, 68–74.
45. Emo, B.; Silva, J.P.; Javadi, A.H.; Howard, L.; Spiers, H.J. How spatial properties of a city street network influence brain activity during navigation. In Proceedings of the Spatial Cognition 2014, Bremen, Germany, 15–19 September 2014.
46. Lakehal, A. User-Centred Design and Evaluation of Interactive System Assisting the Mobility of People with Intellectual Disability. Ph.D. Thesis, Université Polytechnique Hauts-de-France, Valenciennes, France, 2022.
47. Velázquez, R.; Pissaloux, E.; Rodrigo, P.; Carrasco, M.; Giannoccaro, N.I.; Lay-Ekuakille, A. An outdoor navigation system for blind pedestrians using GPS and tactile-foot feedback. Appl. Sci. 2018, 8, 578.
48. Al-Khalifa, S.; Al-Razgan, M.S. Ebsar: Indoor guidance for the visually impaired. Comput. Electr. Eng. 2016, 54, 26–39.
49. Szucs, V.; Guzsvinecz, T.; Magyar, A. Improved algorithms for movement pattern recognition and classification in physical rehabilitation. In Proceedings of the 10th IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2019, Naples, Italy, 23–25 October 2019; pp. 417–424.
50. Elgendy, M.; Sik-Lányi, C.; Kelemen, A. A Novel Marker Detection System for People with Visual Impairment Using the Improved Tiny-YOLOv3 Model. Comput. Methods Programs Biomed. 2021, 205, 106112.
51. Courbois, Y.; Blades, M.; Farran, E.K.; Sockeel, P. Do individuals with intellectual disability select appropriate objects as landmarks when learning a new route? J. Intellect. Disabil. Res. 2013, 57, 80–89.
52. Letalle, L.; Lakehal, A.; Mengue-Topio, H.; Saint-Mars, J.; Kolski, C.; Lepreux, S.; Anceaux, F. Ontology for Mobility of People with Intellectual Disability: Building a Basis of Definitions for the Development of Navigation Aid Systems. In Proceedings of the HCI in Mobility, Transport, and Automotive Systems. Automated Driving and In-Vehicle Experience Design. HCII; Springer: Cham, Switzerland, 2020; Volume 12212 LNCS, pp. 322–334.
Figure 1. Footpath between the residential buildings of the study area.
Figure 2. Residential buildings.
Figure 3. Highlighted landmarks in the targeted residential area.
Figure 4. Selected path with different landmarks and guidance instructions.
Figure 5. Mobile phone application displaying the instruction: “Turn Left after the Phone Booth in 20 meters”.
Figure 6. AR glasses application displaying the instruction: “Turn Right when you see the Bus Stop in 30 meters”.
Figure 7. Decision point illustrating a landmark (Bus Stop).
Figure 8. Sequence of instructions displayed when the user is approaching a decision point.
Figure 9. Activity diagram of the protocol.
Figure 10. Memory test 1 for participant SP8.
Figure 11. Memorisation score analysis (landmarks).
Figure 12. Memorisation score analysis (path segments).
Figure 13. Memorisation score analysis (landmarks + path segments).
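Figures 5–8 show landmark-anchored instructions (e.g., “Turn Left after the Phone Booth in 20 meters”) presented as the user approaches a decision point. The sketch below is a purely hypothetical illustration of how such an instruction could be triggered from a GPS position and a list of decision points; the class names, the trigger radius, the coordinates, and the haversine distance check are assumptions for illustration, not the authors’ implementation.

```python
# Hypothetical sketch of triggering a landmark-based instruction near a
# decision point (illustrative only; not the system described in the paper).
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class DecisionPoint:
    lat: float
    lon: float
    landmark: str      # e.g., "Phone Booth"
    direction: str     # e.g., "Turn Left"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def next_instruction(user_lat, user_lon, points, trigger_radius_m=30.0):
    """Return the instruction text for the first decision point within range."""
    for p in points:
        d = haversine_m(user_lat, user_lon, p.lat, p.lon)
        if d <= trigger_radius_m:
            return f"{p.direction} after the {p.landmark} in {round(d)} meters"
    return None

# Example call with made-up coordinates:
route = [DecisionPoint(51.2970, 1.0690, "Phone Booth", "Turn Left")]
print(next_instruction(51.2971, 1.0691, route))
```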
Table 1. The SBSOD items [39].
No | Item |
---|---|
1 | I am very good at giving directions. |
2 | I have a poor memory for where I left things. |
3 | I am very good at judging distances. |
4 | My “sense of direction” is very good. |
5 | I tend to think of my environment in terms of cardinal directions (N, S, E, W). |
6 | I very easily get lost in a new city. |
7 | I enjoy reading maps. |
8 | I have trouble understanding directions. |
9 | I am very good at reading maps. |
10 | I don’t remember routes very well while riding as a passenger in a car. |
11 | I don’t enjoy giving directions. |
12 | It’s not important to me to know where I am. |
13 | I usually let someone else do the navigational planning for long trips. |
14 | I can usually remember a new route after I have traveled it only once. |
15 | I don’t have a very good “mental map” of my environment. |
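Following Hegarty et al. [39], SBSOD responses are collected on a 7-point agreement scale and the positively phrased items (1, 3, 4, 5, 7, 9, and 14 in Table 1) are reverse-coded so that higher values always indicate a better self-reported sense of direction. The sketch below illustrates this standard scoring; the exact rescaling behind the 0–100-style values in Tables 2 and 3 is not specified in this section, so the normalisation shown is an assumption.

```python
# Standard SBSOD scoring sketch (Hegarty et al. [39]); assumes responses on a
# 1 = "strongly agree" ... 7 = "strongly disagree" scale as in the original
# questionnaire. Positively phrased items (see Table 1) are reverse-coded so
# that higher always means a better self-reported sense of direction.
POSITIVE_ITEMS = {1, 3, 4, 5, 7, 9, 14}  # identifiable from the wording in Table 1

def sbsod_average(responses):
    """responses: dict {item_number: rating 1-7}; returns the 1-7 mean score."""
    scored = [(8 - r) if item in POSITIVE_ITEMS else r
              for item, r in responses.items()]
    return sum(scored) / len(scored)

def sbsod_0_100(responses):
    """Assumed normalisation of the 1-7 mean onto a 0-100 range
    (the exact rescaling used for Tables 2 and 3 is not given here)."""
    return (sbsod_average(responses) - 1) / 6 * 100

# Example: strong agreement with every positive item and strong disagreement
# with every negative item yields the maximum score.
perfect = {i: (1 if i in POSITIVE_ITEMS else 7) for i in range(1, 16)}
print(sbsod_average(perfect), sbsod_0_100(perfect))  # 7.0 100.0
```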
Table 2. The SBSOD score for the ARG group.
Participant | ARG1 | ARG2 | ARG3 | ARG4 | ARG5 | ARG6 | ARG7 | ARG8 | ARG9 | ARG10 |
---|---|---|---|---|---|---|---|---|---|---|
SBSOD | 81 | 63 | 67 | 76 | 83 | 57 | 72 | 62 | 65 | 64 |
Table 3. The SBSOD score for the SP group.
Participant | SP1 | SP2 | SP3 | SP4 | SP5 | SP6 | SP7 | SP8 | SP9 | SP10 |
---|---|---|---|---|---|---|---|---|---|---|
SBSOD | 68 | 64 | 80 | 65 | 58 | 57 | 47 | 44 | 65 | 67 |
Table 4. System Usability Scale (SUS) items [41].
No | Item |
---|---|
1 | I think that I would like to use this system frequently. |
2 | I found the system unnecessarily complex. |
3 | I thought the system was easy to use. |
4 | I think that I would need the support of a technical person to be able to use this system. |
5 | I found the various functions in this system were well integrated. |
6 | I thought there was too much inconsistency in this system. |
7 | I would imagine that most people would learn to use this system very quickly. |
8 | I found the system very cumbersome to use. |
9 | I felt very confident using the system. |
10 | I needed to learn a lot of things before I could get going with this system. |
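The SUS values in Table 5 are consistent with Brooke’s standard scoring [41]: each of the ten items in Table 4 is rated from 1 to 5, odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch of this standard scoring:

```python
# Standard SUS scoring (Brooke [41]): 10 items, each rated 1-5.
# Odd-numbered (positively worded) items contribute (response - 1),
# even-numbered (negatively worded) items contribute (5 - response);
# the sum is scaled by 2.5 to give a score between 0 and 100.
def sus_score(responses):
    """responses: list of 10 ratings (1-5), ordered as items 1-10 in Table 4."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: all-neutral answers (3 for every item) give 50.0.
print(sus_score([3] * 10))  # 50.0
```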
Table 5. The SUS score summary for the ARG and SP groups.
Group | ARG (Before) | ARG (After) | SP (Before) | SP (After) |
---|---|---|---|---|
M | 66.25 | 73.50 | 68.25 | 75.25 |
Med | 62.50 | 76.25 | 67.50 | 77.50 |
Min | 32.50 | 37.50 | 52.50 | 50.00 |
Max | 92.50 | 92.50 | 92.50 | 92.50 |
SD | 21.87 | 16.72 | 9.93 | 11.81 |
Table 6. Evaluation of the memorisation of landmarks (Participant SP8).
Landmark | Footpath | Zebra Crossing | Phone Booth | Bus Stop | Woody’s Bar | Zebra Crossing | Bishopden Court | Main Road | Hemsdell Court |
---|---|---|---|---|---|---|---|---|---|
Notation | L1 | L2 | L3 | L4 | L5 | L6 | L7 | L8 | L9 |
M. Test 1 | 0 | 2 | 0 | 1 | 2 | 1 | 0 | 0 | 0 |
M. Test 2 | 0 | 2 | 0 | 1 | 2 | 0 | 0 | 0 | 0 |
M. Test 3 | 0 | 2 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
Table 7. Memory tests: landmark results for the ARG group (scores range from 0 to 18).
Participant | ARG1 | ARG2 | ARG3 | ARG4 | ARG5 | ARG6 | ARG7 | ARG8 | ARG9 | ARG10 |
---|---|---|---|---|---|---|---|---|---|---|
ARG M. Test 1 | 8 | 12 | 9 | 10 | 11 | 12 | 5 | NA | 12 | 7 |
ARG M. Test 2 | 8 | 13 | 8 | 9 | 11 | 8 | 4 | NA | 10 | 6 |
ARG M. Test 3 | 9 | 13 | 8 | 8 | 13 | 2 | 7 | NA | 12 | 10 |
Table 8. Memory tests: landmark results for the SP group (scores range from 0 to 18).
Participant | SP1 | SP2 | SP3 | SP4 | SP5 | SP6 | SP7 | SP8 | SP9 | SP10 |
---|---|---|---|---|---|---|---|---|---|---|
SP M. Test 1 | 10 | 6 | 7 | 4 | 12 | 8 | 7 | 6 | 2 | 11 |
SP M. Test 2 | 6 | 7 | 7 | 7 | 6 | 8 | 9 | 5 | 1 | 5 |
SP M. Test 3 | 6 | 7 | 2 | 7 | 6 | 10 | 11 | 4 | 3 | 8 |
Table 9. Evaluation of segment memorisation (Participant SP8).
Segment | Departure - L1 | L1 - L2 | L2 - L3 | L3 - L4 | L4 - L5 | L5 - L6 | L6 - L7 | L7 - L8 | L8 - L9 | Score MS_S |
---|---|---|---|---|---|---|---|---|---|---|
M. Test 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 6 |
M. Test 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 7 |
M. Test 3 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
Table 10. Memory tests: segment results for the ARG group (scores range from 0 to 9).
Participant | ARG1 | ARG2 | ARG3 | ARG4 | ARG5 | ARG6 | ARG7 | ARG8 | ARG9 | ARG10 |
---|---|---|---|---|---|---|---|---|---|---|
ARG M. Test 1 | 8 | 9 | 8 | 7 | 9 | 9 | 3 | NA | 9 | 7 |
ARG M. Test 2 | 9 | 9 | 9 | 7 | 9 | 9 | 3 | NA | 9 | 8 |
ARG M. Test 3 | 9 | 9 | 9 | 7 | 9 | 9 | 5 | NA | 9 | 9 |
Table 11. Memory tests: segment results for the SP group (scores range from 0 to 9).
Participant | SP1 | SP2 | SP3 | SP4 | SP5 | SP6 | SP7 | SP8 | SP9 | SP10 |
---|---|---|---|---|---|---|---|---|---|---|
SP M. Test 1 | 7 | 6 | 4 | 4 | 7 | 8 | 5 | 6 | 7 | 8 |
SP M. Test 2 | 7 | 6 | 4 | 4 | 7 | 8 | 7 | 7 | 4 | 8 |
SP M. Test 3 | 7 | 6 | 6 | 5 | 7 | 8 | 7 | 6 | 8 | 9 |
Table 12. Memory tests: global score results for the ARG group (scores are on a scale of 0 to 10).
Participant | ARG1 | ARG2 | ARG3 | ARG4 | ARG5 | ARG6 | ARG7 | ARG8 | ARG9 | ARG10 |
---|---|---|---|---|---|---|---|---|---|---|
ARG M. Test 1 | 6.67 | 8.33 | 6.94 | 6.67 | 8.06 | 8.33 | 3.06 | NA | 8.33 | 5.83 |
ARG M. Test 2 | 7.22 | 8.61 | 7.22 | 6.39 | 8.06 | 7.22 | 2.78 | NA | 7.78 | 6.11 |
ARG M. Test 3 | 7.5 | 8.61 | 7.22 | 6.11 | 8.61 | 5.56 | 4.72 | NA | 8.33 | 7.78 |
Table 13. Memory tests: global score results for the SP group (scores are on a scale of 0 to 10).
Participant | SP1 | SP2 | SP3 | SP4 | SP5 | SP6 | SP7 | SP8 | SP9 | SP10 |
---|---|---|---|---|---|---|---|---|---|---|
SP M. Test 1 | 6.67 | 5.0 | 4.17 | 3.33 | 7.22 | 6.67 | 4.72 | 5.0 | 4.44 | 7.5 |
SP M. Test 2 | 5.56 | 5.28 | 4.17 | 4.17 | 5.56 | 6.67 | 6.39 | 5.28 | 2.5 | 5.83 |
SP M. Test 3 | 5.56 | 5.28 | 3.89 | 4.72 | 5.56 | 7.22 | 6.94 | 4.44 | 5.28 | 7.22 |
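The scores in Tables 7–13 can be reconstructed from the per-landmark and per-segment ratings: the landmark score is the sum of nine ratings of 0–2 (maximum 18), the segment score is the sum of nine ratings of 0–1 (maximum 9), and the global score in Tables 12 and 13 is consistent with averaging the two normalised scores and scaling to 10. The sketch below checks this for participant SP8 using Tables 6 and 9 against Tables 8, 11, and 13; the global-score formula is inferred from the tabulated values rather than quoted from the text.

```python
# Reconstructing the memorisation scores for participant SP8 (Memory Test 1).
landmark_ratings = [0, 2, 0, 1, 2, 1, 0, 0, 0]   # Table 6, each landmark rated 0-2
segment_ratings  = [1, 1, 1, 1, 1, 1, 0, 0, 0]   # Table 9, each segment rated 0-1

landmark_score = sum(landmark_ratings)            # 6 out of 18 (matches Table 8)
segment_score = sum(segment_ratings)              # 6 out of 9  (matches Table 11)

# Inferred global score: mean of the two normalised scores, scaled to 10.
global_score = (landmark_score / 18 + segment_score / 9) / 2 * 10
print(landmark_score, segment_score, round(global_score, 2))  # 6 6 5.0 (Table 13)
```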
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).