
NaLa-Search: A multimodal, interaction-based architecture for faceted search on linked open data

Published: 01 December 2021

Abstract

Mobile devices are the technological basis of computational intelligent systems, yet traditional mobile application interfaces tend to rely on the touch modality alone. Such interfaces could improve human–computer interaction by combining diverse interaction modalities, such as visual, auditory and touch. At the same time, much of the information on the Web is published under the Linked Data principles so that people and computers can share, use and reuse high-quality information; however, current tools for searching, browsing and visualising this kind of data remain underdeveloped. The goal of this research is to propose a novel architecture called NaLa-Search for effectively exploring the Linked Open Data cloud. We present a mobile application that combines voice commands and touch to browse and search this semantic information through faceted search, an interaction scheme widely used for exploratory search because it is faithful to the richness of the data yet practical for real-world use. NaLa-Search was evaluated by real users from the clinical pharmacology domain, who searched and navigated the DrugBank dataset through voice commands. The evaluation results show that faceted search combined with multiple interaction modalities (e.g. speech and touch) can enhance users' interaction with semantic knowledge bases.
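To make the faceted-search idea concrete, the sketch below shows how a single voice-derived facet constraint could be translated into a SPARQL query over a Linked Data version of DrugBank. This is a minimal illustration, not the paper's actual implementation: the endpoint URL, the drugbank_vocabulary prefix, the predicate names and the faceted_drug_query helper are all assumptions made for the example.

```python
# Minimal sketch: one facet constraint -> one SPARQL query.
# Assumptions (not from the paper): the Bio2RDF endpoint URL, the
# drugbank_vocabulary prefix and the facet predicate names.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://bio2rdf.org/sparql"  # assumed public DrugBank mirror

def faceted_drug_query(facet_predicate: str, facet_value: str, limit: int = 10):
    """Return (URI, label) pairs for drugs matching one facet constraint."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dv: <http://bio2rdf.org/drugbank_vocabulary:>
        SELECT ?drug ?label WHERE {{
            ?drug a dv:Drug ;
                  rdfs:label ?label ;
                  {facet_predicate} "{facet_value}" .
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    return [(b["drug"]["value"], b["label"]["value"]) for b in bindings]

# e.g. a recognised voice command such as "show drugs in category
# antihypertensive" could map to:
#   faceted_drug_query("dv:category", "antihypertensive")
```

In a faceted-search scheme of this kind, each successive voice or touch selection would add one more such predicate/value constraint to the query, progressively narrowing the result set.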



      Information & Contributors

      Information

      Published In

      cover image Journal of Information Science
      Journal of Information Science  Volume 47, Issue 6
      Dec 2021
      139 pages

      Publisher

      Sage Publications, Inc.

      United States

      Publication History

      Published: 01 December 2021

      Author Tags

1. Faceted search
2. Linked open data
3. Voice command recognition
