DOI: 10.1145/3542954.3543014

Explainable NLQ-based Visual Interactive System: Challenges and Objectives

Published: 11 August 2022

Abstract

Visual interactive systems (Vis) are attracting growing attention in research and industry because of their effectiveness in conveying information. They are also critical for identifying and comprehending trends, outliers, and patterns in data, which supports rational, data-driven decisions. Existing research has employed a broad range of methodologies to derive visualization insights for particular decision-making systems, allowing participants to view a specific problem from many perspectives. However, there remains ample scope to design a new Vis in which systematic techniques visualize data together with proper explanations. In this regard, we analyze several existing works and observe a surge of research interest in the emerging area of explainable, NLQ-based Vis. In this paper, our main goal is to present a novel design for an explainable NLQ-based Vis, named ExNLQVis. Specifically, (i) we discuss a proposed NLQ-based Vis that follows a deep learning-based NLP approach to extract the necessary information from user inputs, make visualization-related decisions, and generate appropriate visualizations based on those decisions; and (ii) we extend this model to an explainable visualization model that not only visualizes data accurately but also explains why a particular visualization appears for a given natural language query (NLQ). To realize this system, we consider several challenges and objectives and briefly discuss our proposed method accordingly. We also provide implementation and evaluation guidelines for establishing the system.
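
As a purely illustrative aid (and not the authors' ExNLQVis implementation, which the abstract only outlines), the minimal Python sketch below shows one way an NLQ-to-visualization pipeline with an attached explanation could be wired together: a toy keyword-based task classifier stands in for the deep learning-based NLP component, the output is a Vega-Lite specification, and the explanation is a templated sentence. All function names, keyword rules, and the example fields ("month", "sales") are hypothetical.

# Hypothetical sketch: NLQ -> chart decision -> Vega-Lite spec + explanation.
# A keyword classifier stands in for the deep learning NLP stage described in
# the abstract; nothing here comes from the paper itself.

def analyze_query(nlq: str) -> dict:
    """Crude NLQ parsing: infer an analytic task from keywords."""
    q = nlq.lower()
    if any(k in q for k in ("trend", "over time", "change")):
        return {"task": "trend", "mark": "line"}
    if any(k in q for k in ("compare", "versus", "by category")):
        return {"task": "comparison", "mark": "bar"}
    if any(k in q for k in ("relationship", "correlate", "against")):
        return {"task": "correlation", "mark": "point"}
    return {"task": "overview", "mark": "bar"}

def build_spec(decision: dict, x_field: str, y_field: str) -> dict:
    """Turn the task decision into a minimal Vega-Lite specification."""
    return {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "mark": decision["mark"],
        "encoding": {
            "x": {"field": x_field, "type": "ordinal"},
            "y": {"field": y_field, "type": "quantitative"},
        },
    }

def explain(decision: dict, nlq: str) -> str:
    """Produce a short natural-language justification for the chosen chart."""
    return (f'The query "{nlq}" was interpreted as a {decision["task"]} task, '
            f'so a {decision["mark"]} chart was selected.')

if __name__ == "__main__":
    nlq = "show the trend of sales over time"  # hypothetical example query
    decision = analyze_query(nlq)
    print(build_spec(decision, x_field="month", y_field="sales"))
    print(explain(decision, nlq))

Running the sketch on the hypothetical query prints a line-chart specification together with a sentence explaining that the query was read as a trend task, mirroring, at a much simpler level, the "visualize and explain" behaviour the abstract describes.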

Cited By

  • SleepExplain: Explainable Non-Rapid Eye Movement and Rapid Eye Movement Sleep Stage Classification from EEG Signal. In 2022 25th International Conference on Computer and Information Technology (ICCIT), 248–253. https://doi.org/10.1109/ICCIT57492.2022.10055956. Online publication date: 17 December 2022.

Published In

ICCA '22: Proceedings of the 2nd International Conference on Computing Advancements
March 2022
543 pages
ISBN: 9781450397346
DOI: 10.1145/3542954

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 11 August 2022

Author Tags

  1. Explainability
  2. Natural Language Query (NLQ)
  3. Visual Analytics
  4. Visualization

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICCA 2022

Article Metrics

  • Downloads (last 12 months): 21
  • Downloads (last 6 weeks): 0
Reflects downloads up to 28 December 2024.
