
Don’t Confuse! Redrawing GUI Navigation Flow in Mobile Apps for Visually Impaired Users

Published: 23 October 2024

Abstract

Mobile applications (apps) are integral to our daily lives, offering diverse services and functionalities. They allow sighted users to access information coherently and conveniently. However, it remains unclear whether visually impaired users, who rely solely on screen readers (e.g., TalkBack) to navigate and access app information, can do so in a correct and reasonable order; if not, they may face significant information bias and operational errors. In a preliminary exploration, we found that the navigation-sequence issues encountered by visually impaired users fall into two types: unintuitive navigation sequences and unapparent focus switching. To address these issues, we propose RGNF (Re-draw GUI Navigation Flow), a method that enhances the understandability and coherence of accessing the content of each component within the Graphical User Interface (GUI) and assists developers in creating well-designed GUI navigation flows (GNFs). The method is inspired by a characteristic identified in our preliminary study: visually impaired users expect GUI components that are read consecutively to be close in position and similar in shape. Our method therefore builds on principles from the Gestalt psychological model, grouping GUI components into regions according to the laws of proximity and similarity and then redrawing the GNFs. To evaluate the effectiveness of our method, we computed sequence-similarity values before and after redrawing the GNF, and further employed the tools proposed by Alotaibi et al. to measure the reachability of GUI components. Our results show a substantial improvement in similarity (0.921 vs. the baseline's 0.624) and in reachability (90.31% vs. the baseline GNF's 74.35%). Furthermore, a qualitative user study indicated that our method has a positive effect on the user experience of visually impaired users.
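The grouping idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): components are merged into a region when they are both spatially close (law of proximity) and of similar size (law of similarity), and the navigation flow then reads each region consecutively. The component bounds, thresholds, and names below are hypothetical.

```python
# Minimal Gestalt-inspired grouping sketch: components that are both close
# (proximity) and near-identical in size (similarity) share a region, and the
# navigation flow reads regions top-to-bottom, members in reading order.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    x: float  # left edge (px)
    y: float  # top edge (px)
    w: float  # width (px)
    h: float  # height (px)

def similar(a: Component, b: Component, tol: float = 0.25) -> bool:
    """Law of similarity: width and height agree within a relative tolerance."""
    return (abs(a.w - b.w) <= tol * max(a.w, b.w)
            and abs(a.h - b.h) <= tol * max(a.h, b.h))

def close(a: Component, b: Component, gap: float = 40.0) -> bool:
    """Law of proximity: bounding boxes within `gap` pixels on each axis."""
    dx = max(0.0, max(a.x, b.x) - min(a.x + a.w, b.x + b.w))
    dy = max(0.0, max(a.y, b.y) - min(a.y + a.h, b.y + b.h))
    return dx <= gap and dy <= gap

def group(components):
    """Greedily merge each component into the first region it fits."""
    groups = []
    for c in components:
        for g in groups:
            if any(close(c, m) and similar(c, m) for m in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups

def navigation_flow(components):
    """Regions top-to-bottom; within a region, top-to-bottom then left-to-right."""
    groups = group(sorted(components, key=lambda c: (c.y, c.x)))
    order = []
    for g in sorted(groups, key=lambda g: min(c.y for c in g)):
        order.extend(sorted(g, key=lambda c: (c.y, c.x)))
    return [c.name for c in order]
```

For example, a full-width banner above a grid of equally sized icons yields two regions, so all icons are announced consecutively after the banner rather than interleaved with unrelated components. The actual RGNF method operates on detected GUI regions and also evaluates the result with sequence similarity and reachability, which this sketch does not attempt.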

References

[1]
K. M. Mack, E. J. McDonnell, D. Jain, L. L. Wang, J. E. Froehlich, and L. Findlater, “What do we mean by ‘accessibility research’? A literature survey of accessibility papers in CHI and ASSETS from 1994 to 2019,” in Proc. CHI Conf. Human Factors Comput. Syst., 2021, pp. 1–18.
[2]
C. Vendome, D. Solano, S. Linan, and M. Linares-Vásquez, “Can everyone use my app? An empirical study on accessibility in android apps,” in Proc. IEEE Int. Conf. Softw. Maintenance Evol. (ICSME), 2019, pp. 41–52.
[3]
M. Zhang, H. Liu, C. Chen, G. Gao, H. Li, and J. Zhao, “AccessFixer: Enhancing GUI accessibility for low vision users with R-GCN model,” IEEE Trans. Softw. Eng., vol. 50, no. 2, pp. 173–189, Feb. 2024.
[4]
M. Zhang, Y. Zhang, G. Gao, and H. Liu, “Enhancing accessibility of web-based SVG buttons: An optimization method and best practices,” Expert Syst. Appl., vol. 238, 2023, Art. no.
[5]
P. T. Chiou, A. S. Alotaibi, and W. G. J. Halfond, “BAGEL: An approach to automatically detect navigation-based web accessibility barriers for keyboard users,” in Proc. CHI Conf. Human Factors Comput. Syst., 2023, pp. 1–17.
[6]
A. S. Alotaibi, P. T. Chiou, and W. G. J. Halfond, “Automated detection of talkback interactive accessibility failures in android applications,” in Proc. IEEE Conf. Softw. Testing, Verification Validation (ICST), 2022, pp. 232–243.
[7]
M. H. Ashcraft, “Fundamentals of cognition,” Routledge, vol. 4, pp. 1–632, Sep. 1997.
[8]
M. Xie, Z. Xing, S. Feng, C. Chen, L. Zhu, and X. Xu, “Psychologically-inspired, unsupervised inference of perceptual groups of gui widgets from GUI images,” in Proc. 30th ACM Joint Eur. Softw. Eng. Conf. Symp. Found. Softw. Eng., 2022, pp. 332–343.
[9]
M. Zhang, “The codes and datasets for the paper titled ‘Don’t confuse! Redrawing GUI navigation flow in mobile apps for visually impaired users’,” Zenodo, Mar. 2024.
[10]
P. T. Chiou, A. S. Alotaibi, and W. G. J. Halfond, “Detecting dialog-related keyboard navigation failures in web applications,” in Proc. IEEE/ACM 45th Int. Conf. Softw. Eng. (ICSE), 2023, pp. 1368–1380.
[11]
J. Kitzinger, “Qualitative research: Introducing focus groups,” BMJ, vol. 311, pp. 299–302, Jul. 1995.
[12]
“Android developers accessibility service,” 2022. Accessed: Apr. 2016. [Online]. Available: https://developer.android.com/reference/android/accessibilityservice/AccessibilityService
[13]
Y. Zhang, S. Chen, L. Fan, C. Chen, and X. Li, “Automated and context-aware repair of color-related accessibility issues for android apps,” in Proc. 31st ACM Joint Eur. Softw. Eng. Conf. Symp. Found. Softw. Eng., 2023, pp. 1225–1267.
[14]
J. Sun, T. Su, J. Jiang, J. Wang, G. Pu, and Z. Su, “Property-based fuzzing for finding data manipulation errors in android apps,” in Proc. 31st ACM Joint Eur. Softw. Eng. Conf. Symp. Found. Softw. Eng., 2023, pp. 1088–1100.
[15]
A. Alshayban, I. Ahmed, and S. Malek, “Accessibility issues in android apps: State of affairs, sentiments, and ways forward,” in Proc. IEEE/ACM 42nd Int. Conf. Softw. Eng. (ICSE), 2020, pp. 1323–1334.
[16]
A. S. Alotaibi, P. T. Chiou, F. M. Tawsif, and W. G. J. Halfond, “Scalefix: An automated repair of UI scaling accessibility issues in android applications,” in Proc. IEEE Int. Conf. Softw. Maintenance Evol. (ICSME), 2023, pp. 147–159.
[17]
J. F. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986.
[18]
M. Xie, S. Feng, Z. Xing, J. Chen, and C. Chen, “UIED: A hybrid tool for GUI element detection,” in Proc. 28th ACM Joint Meeting Eur. Softw. Eng. Conf. Symp. Found. Softw. Eng., 2020, pp. 1655–1659.
[19]
J. Flusser, S. Farokhi, C. Höschl IV, T. Suk, B. Zitová, and M. Pedone, “Recognition of images degraded by Gaussian blur,” IEEE Trans. Image Process., vol. 25, no. 2, pp. 790–806, Feb. 2016.
[20]
L.-W. Han, Y.-C. Tian, and Q. Qi, “Research on edge detection algorithm based on improved sobel operator,” MATEC Web of Conferences, vol. 309, 2020, Art. no.
[21]
Y. Wu, Y. Kang, J. Luo, Y. He, and Q. Yang, “FedCG: Leverage conditional gan for protecting privacy and maintaining competitive performance in federated learning,” in Proc. Int. Joint Conf. Artif. Intell., 2021, pp. 1–8.
[22]
J. H. Hosang, R. Benenson, and B. Schiele, “Learning non-maximum suppression,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 6469–6477.
[23]
C.-H. Yeh, C.-Y. Lin, K. Muchtar, H.-E. Lai, and M.-T. Sun, “Three-pronged compensation and hysteresis thresholding for moving object detection in real-time video surveillance,” IEEE Trans. Ind. Electron., vol. 64, no. 6, pp. 4945–4955, Jun. 2017.
[24]
M. Sornam, M. S. Kavitha, and M. Nivetha, “Hysteresis thresholding based edge detectors for inscriptional image enhancement,” in Proc. IEEE Int. Conf. Comput. Intell. Comput. Res. (ICCIC), 2016, pp. 1–4.
[25]
M. Eigenmann, P. Nandy, and M. H. Maathuis, “Structure learning of linear Gaussian structural equation models with weak edges,” in Proc. Conf. Uncertainty Artif. Intell., 2017, pp. 1–18.
[26]
D. P. Huttenlocher, G. A. Klanderman, and W. Rucklidge, “Comparing images using the hausdorff distance,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 9, pp. 850–863, Sep. 1993.
[27]
J. Nilsson and T. Akenine-Möller, “Understanding SSIM,” 2020.
[28]
M. J. van Kreveld, T. Miltzow, T. Ophelders, W. Sonke, and J. L. Vermeulen, “Between shapes, using the hausdorff distance,” Comput. Geometry, vol. 100, 2020, Art. no.
[29]
D. Karimi and S. E. Salcudean, “Reducing the hausdorff distance in medical image segmentation with convolutional neural networks,” IEEE Trans. Med. Imag., vol. 39, no. 2, pp. 499–513, Feb. 2020.
[30]
J. Chen, R. Wang, L. Liu, and J. Song, “Clustering of trajectories based on hausdorff distance,” in Proc. Int. Conf. Electron., Commun. Control (ICECC), 2011, pp. 1940–1944.
[31]
Z. Liu, C. Chen, J. Wang, Y. Huang, J. Hu, and Q. Wang, “Owl eyes: Spotting UI display issues via visual understanding,” in Proc. 35th IEEE/ACM Int. Conf. Automated Softw. Eng. (ASE), 2020, pp. 398–409.
[32]
M. Bajammal and A. Mesbah, “Semantic web accessibility testing via hierarchical visual analysis,” in Proc. IEEE/ACM 43rd Int. Conf. Softw. Eng. (ICSE), 2021, pp. 1610–1621.
[33]
C. Li, Y. Jiang, and C. Xu, “Push-button synthesis of watch companions for android apps,” in Proc. IEEE/ACM 44th Int. Conf. Softw. Eng. (ICSE), 2022, pp. 1793–1804.
[34]
M. Deza and E. Deza, “Encyclopedia of distances,” Mathematics, vol. 1, pp. 1–756, Oct. 2016.
[35]
A. Rodrigues, K. Montague, H. Nicolau, and T. Guerreiro, “Getting smartphones to talkback: Understanding the smartphone adoption process of blind users,” in Proc. 17th Int. ACM SIGACCESS Conf. Comput. & Accessibility, 2015, pp. 23–32.
[36]
“Accessibility scanner,” 2016. Accessed: Mar. 2016. [Online]. Available: https://play.google.com/store/apps
[37]
“IBM mobile accessibility checker,” 2017. Accessed: Mar. 2017. [Online]. Available: https://www.ibm.com/able/toolkit/tools/
[38]
“Automated accessibility test tool,” 2020. Accessed: Jul. 2020. [Online]. Available: https://github.com/paypal/AATT
[39]
A. S. Alotaibi, P. T. Chiou, and W. G. J. Halfond, “Automated repair of size-based inaccessibility issues in mobile applications,” in Proc. 36th IEEE/ACM Int. Conf. Automated Softw. Eng. (ASE), 2021, pp. 730–742.
[40]
A. Alshayban and S. Malek, “Accessitext: Automated detection of text accessibility issues in android apps,” in Proc. 30th ACM Joint Eur. Softw. Eng. Conf. Symp. Found. Softw. Eng., 2022, pp. 984–995.
[41]
“Accessibility test framework,” 2016. Accessed: Jun. 2016. [Online]. Available: https://github.com/google/Accessibility-Test-Framework-for-Android
[42]
S. Yan, “IBM strengthens mobile app accessibility and usability,” 2016. Accessed: Mar. 2016. [Online]. Available: https://www.ibm.com/blogs/age-and-ability/2016/10/12/ibm-strengthens-mobile-app-accessibility-and-usability/
[43]
S. Hao, B. Liu, S. Nath, W. G. J. Halfond, and R. Govindan, “PUMA: Programmable UI-automation for large-scale dynamic analysis of mobile apps,” in Proc. 12th Annu. Int. Conf. Mobile Syst., Appl., Services, 2014, pp. 204–217.
[44]
M. Eler, J. Rojas, Y. Ge, and G. Fraser, “Automated accessibility testing of mobile apps,” in Proc. IEEE 11th Int. Conf. Softw. Testing, Verification Validation (ICST), 2018, pp. 116–126.
[45]
S. Chen, C. Chen, L. Fan, M. Fan, X. Zhan, and Y. Liu, “Accessible or not? An empirical investigation of android app accessibility,” IEEE Trans. Softw. Eng., vol. 48, no. 10, pp. 3954–3968, Oct. 2022.
[46]
M. Schlichtkrull, T. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling, “Modeling relational data with graph convolutional networks,” 2018.
[47]
L. R. Milne and R. E. Ladner, “Blocks4All: Overcoming accessibility barriers to blocks programming for children with visual impairments,” in Proc. CHI Conf. Human Factors Comput. Syst., 2018, pp. 1–10.
[48]
K. Hara, V. Le, and J. E. Froehlich, “Combining crowdsourcing and google street view to identify street-level accessibility problems,” in Proc. SIGCHI Conf. Human Factors Comput. Syst., 2013, pp. 631–640.
[49]
J. P. Bigham, J. T. Brudvik, and B. Zhang, “Accessibility by demonstration: Enabling end users to guide developers to web accessibility solutions,” in Proc. 12th Int. ACM SIGACCESS Conf. Comput. Accessibility, 2010, pp. 35–42.
[50]
G. Koutrika, L. Liu, and S. J. Simske, “Generating reading orders over document collections,” in Proc. IEEE 31st Int. Conf. Data Eng., 2015, pp. 507–518.
[51]
X. Zhang, F. Cheng, and S. Wang, “Spatio-temporal fusion based convolutional sequence learning for lip reading,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2019, pp. 713–722.
[52]
N. Salehnamadi, A. Alshayban, J.-W. Lin, I. Ahmed, S. M. Branham, and S. Malek, “Latte: Use-case and assistive-service driven automated accessibility testing framework for android,” in Proc. CHI Conf. Human Factors Comput. Syst., 2021, pp. 1–11.
[53]
N. Salehnamadi, F. Mehralian, and S. Malek, “Groundhog: An automated accessibility crawler for mobile apps,” in Proc. 37th IEEE/ACM Int. Conf. Automated Softw. Eng., 2022, pp. 1–12.
[54]
N. Salehnamadi, Z. He, and S. Malek, “Assistive-technology aided manual accessibility testing in mobile apps, powered by record-and-replay,” in Proc. CHI Conf. Human Factors Comput. Syst., 2023, pp. 1–20.

Published In
IEEE Transactions on Software Engineering, Volume 50, Issue 12
Dec. 2024
384 pages

Publisher

IEEE Press


Qualifiers

  • Research-article
