Investigating the Use of AR Glasses for Content Annotation on Mobile Devices

Published: 14 November 2022
    Abstract

    Mobile devices such as smartphones and tablets have limited display size and input capabilities that make a variety of tasks challenging. Coupling the mobile device with Augmented Reality eyewear such as smartglasses can help address some of these challenges. In the specific context of digital content annotation tasks, this combination has the potential to enhance the user experience on two fronts. First, annotations can be offloaded into the air around the mobile device, freeing precious screen real estate. Second, as smartglasses often come equipped with a variety of sensors including a camera, users can annotate documents with pictures or videos of their environment, captured on the spot, hands-free, and from the wearer's perspective. We present AnnotAR, a prototype that we use as a research probe to assess the viability of this approach to digital content annotation. We use AnnotAR to gather users' preliminary feedback in a laboratory setting, and to showcase how it could support real-world use cases.
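
    Illustrative sketch (not from the paper): the abstract describes annotations that are anchored to a spot in a document, rendered in the air around the mobile device, and optionally backed by a photo or video captured hands-free from the glasses' camera. The hypothetical Python data model below shows one way such an annotation might be represented; every class and field name here is an assumption made for illustration, not AnnotAR's actual implementation.

        from dataclasses import dataclass, field
        from typing import Optional
        import time

        @dataclass
        class DocumentAnchor:
            """Logical position of the annotation inside the document shown on the mobile device."""
            page: int          # page (or scroll position) the annotation refers to
            char_start: int    # start offset of the annotated text span
            char_end: int      # end offset of the annotated text span

        @dataclass
        class AirPlacement:
            """Where the annotation is rendered in AR, relative to the device screen."""
            offset_x_m: float        # metres to the right of the screen (negative = left)
            offset_y_m: float        # metres above the top edge of the screen
            offset_z_m: float = 0.0  # metres in front of the screen plane

        @dataclass
        class MediaPayload:
            """Picture or video captured hands-free from the glasses' camera."""
            kind: str                # "photo" or "video"
            uri: str                 # where the captured file is stored
            captured_at: float = field(default_factory=time.time)

        @dataclass
        class ARAnnotation:
            anchor: DocumentAnchor
            placement: AirPlacement
            note: str = ""                        # typed or handwritten text, if any
            media: Optional[MediaPayload] = None  # optional capture from the glasses

        # Example: a photo taken through the glasses, pinned in the air to the right of the screen.
        example = ARAnnotation(
            anchor=DocumentAnchor(page=3, char_start=120, char_end=180),
            placement=AirPlacement(offset_x_m=0.15, offset_y_m=0.0),
            note="Compare with the setup photographed in the lab",
            media=MediaPayload(kind="photo", uri="file:///captures/lab-setup.jpg"),
        )

    Keeping the document anchor separate from the spatial placement mirrors the two benefits the abstract highlights: the note stays attached to its text span even though it is displayed off-screen, in the air around the device.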

    Supplementary Material

    Teaser (iss22main-id3155-p-teaser.mp4)
    Teaser video, 1 min 59 sec. H.264/AAC (MP4 container)

      Published In

      Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue ISS
      December 2022
      746 pages
      EISSN: 2573-0142
      DOI: 10.1145/3554337
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 14 November 2022
      Published in PACMHCI Volume 6, Issue ISS

      Author Tags

      1. annotation
      2. augmented reality
      3. mobile device

      Qualifiers

      • Research-article

      Funding Sources

      • ANR

      Article Metrics

      • Total Citations: 0
      • Total Downloads: 324
      • Downloads (Last 12 months): 134
      • Downloads (Last 6 weeks): 7
      Reflects downloads up to 09 Aug 2024
