DOI: 10.1145/3597638.3608423
Research Article

Understanding Strategies and Challenges of Conducting Daily Data Analysis (DDA) Among Blind and Low-vision People

Published: 22 October 2023
Abstract

    Being able to analyze and derive insights from data, which we call Daily Data Analysis (DDA), is an increasingly important skill in everyday life. While the accessibility community has explored ways to make data more accessible to blind and low-vision (BLV) people, little is known about how BLV people perform DDA. Knowing BLV people’s strategies and challenges in DDA would allow the community to make DDA more accessible to them. Toward this goal, we conducted a mixed-methods study of interviews and think-aloud sessions with BLV people (N=16). Our study revealed five key approaches for DDA (i.e., overview obtaining, column comparison, key statistics identification, note-taking, and data validation) and the associated challenges. We discussed the implications of our findings and highlighted potential directions to make DDA more accessible for BLV people.


    Cited By

• (2024) Designing Unobtrusive Modulated Electrotactile Feedback on Fingertip Edge to Assist Blind and Low Vision (BLV) People in Comprehending Charts. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642546. Online publication date: 11-May-2024.


      Information

      Published In

      ASSETS '23: Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility
      October 2023
      1163 pages
      ISBN:9798400702204
      DOI:10.1145/3597638
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 22 October 2023


      Author Tags

      1. BLV
      2. DDA
      3. blind and low vision
      4. daily data analysis
      5. data accessibility
      6. data exploration
      7. interview
      8. qualitative study
      9. think-aloud

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • National Science Foundation
      • Guangzhou Science and Technology Program City-University Joint Funding Project
      • Guangdong Provincial Key Lab of Integrated Communication, Sensing and Computation for Ubiquitous Internet of Things

      Conference

      ASSETS '23

      Acceptance Rates

ASSETS '23 Paper Acceptance Rate: 55 of 182 submissions, 30%
Overall Acceptance Rate: 436 of 1,556 submissions, 28%


Bibliometrics

      Article Metrics

      • Downloads (Last 12 months)184
      • Downloads (Last 6 weeks)9
      Reflects downloads up to 11 Aug 2024

