Research article · Open access

Predicting Visual Search Task Success from Eye Gaze Data as a Basis for User-Adaptive Information Visualization Systems

Published: 20 May 2021

Abstract

Information visualizations are an efficient means of supporting users in understanding large amounts of complex, interconnected data; user comprehension, however, depends on individual factors such as cognitive abilities. The research literature provides evidence that user-adaptive information visualizations positively impact users’ performance in visualization tasks. This study contributes toward a computational model that predicts users’ success in visual search tasks from eye gaze data and can thereby drive such user-adaptive systems. State-of-the-art deep learning models for time series classification were trained on sequential eye gaze data obtained from 40 study participants’ interaction with a circular and an organizational graph. The results suggest that such models achieve higher accuracy than a baseline classifier and the models previously used for this purpose. In particular, a Multivariate Long Short-Term Memory Fully Convolutional Network (MLSTM-FCN) shows encouraging performance for use in online user-adaptive systems. Given this finding, such a computational model can infer a user’s need for support during interaction with a graph and trigger appropriate interventions in user-adaptive information visualization systems. This simplifies the design of such systems, since additional interaction data such as mouse clicks are not required.
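The abstract names a Multivariate LSTM-FCN as the best-performing classifier. As a rough, hypothetical illustration of the fully convolutional half of such a model — not the authors' implementation — the NumPy sketch below runs a single 1-D convolution over a multivariate gaze sequence (the channel names, filter sizes, and read-out weights are all assumptions for the example), applies ReLU and global average pooling, and produces a task-success probability via a logistic read-out:

```python
# Toy sketch of an FCN-style branch for gaze time series classification.
# All shapes and channel semantics are illustrative assumptions, not the
# architecture reported in the paper.
import numpy as np

def conv1d(x, W, b):
    """Valid 1-D convolution over the time axis.
    x: (T, C) multivariate series; W: (F, K, C) filters; b: (F,) biases.
    Returns feature maps of shape (T - K + 1, F)."""
    T, C = x.shape
    F, K, _ = W.shape
    out = np.empty((T - K + 1, F))
    for t in range(T - K + 1):
        window = x[t:t + K]  # (K, C) slice of the gaze sequence
        # Dot each filter with the window across time and channel axes.
        out[t] = np.tensordot(W, window, axes=([1, 2], [0, 1])) + b
    return out

rng = np.random.default_rng(0)
T, C = 60, 3  # hypothetical: 60 gaze samples, channels = (x, y, pupil size)
x = rng.standard_normal((T, C))

F, K = 8, 5  # 8 convolutional filters of width 5
W = 0.1 * rng.standard_normal((F, K, C))
b = np.zeros(F)

h = np.maximum(conv1d(x, W, b), 0.0)  # ReLU feature maps, shape (56, 8)
z = h.mean(axis=0)                    # global average pooling -> (8,)
v = rng.standard_normal(F)            # logistic read-out weights
p = 1.0 / (1.0 + np.exp(-(z @ v)))    # predicted success probability
```

A real MLSTM-FCN would add a parallel LSTM branch over the same sequence and concatenate both representations before the read-out; the sketch only shows how raw gaze channels can be classified without any mouse-click features.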



Published In

ACM Transactions on Interactive Intelligent Systems, Volume 11, Issue 2 (June 2021), 267 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/3465444
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 20 May 2021
Accepted: 01 January 2021
Revised: 01 December 2020
Received: 01 June 2020
Published in TIIS Volume 11, Issue 2


Author Tags

  1. Eye tracking
  2. user-adaptation
  3. time series classification
  4. individual differences

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • Australian Government through the Australian Research Council’s Linkage Projects


Cited By

  • (2024) Predictive Gaze Analytics: A Comparative Case Study of the Foretelling Signs of User Performance during Interaction with Visualizations of Ontology Class Hierarchies. Multimodal Technologies and Interaction 8(10), 90. DOI: 10.3390/mti8100090. Online publication date: 12-Oct-2024.
  • (2024) Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics. Information 15(6), 351. DOI: 10.3390/info15060351. Online publication date: 13-Jun-2024.
  • (2024) The State of the Art in User-Adaptive Visualizations. Computer Graphics Forum. DOI: 10.1111/cgf.15271. Online publication date: 4-Dec-2024.
  • (2024) The application of eye-tracking technology in chemistry education research: a systematic review. Research in Science & Technological Education, 1–20. DOI: 10.1080/02635143.2024.2435343. Online publication date: 4-Dec-2024.
  • (2024) Gaze analysis. Image and Vision Computing 144. DOI: 10.1016/j.imavis.2024.104961. Online publication date: 1-Apr-2024.
  • (2024) Visualizing blockchain in construction projects: Status quo, challenges, and a guideline for implementation. Frontiers of Engineering Management. DOI: 10.1007/s42524-024-4034-6. Online publication date: 5-Jul-2024.
  • (2024) AdaptLIL: A Real-Time Adaptive Linked Indented List Visualization for Ontology Mapping. The Semantic Web – ISWC 2024, 3–22. DOI: 10.1007/978-3-031-77850-6_1. Online publication date: 11-Nov-2024.
  • (2024) Computational Methods to Infer Human Factors for Adaptation and Personalization Using Eye Tracking. A Human-Centered Perspective of Intelligent Personalized Environments and Systems, 183–204. DOI: 10.1007/978-3-031-55109-3_7. Online publication date: 1-May-2024.
  • (2023) User Evaluation of Conversational Agents for Aerospace Domain. International Journal of Human–Computer Interaction 40(19), 5549–5568. DOI: 10.1080/10447318.2023.2239544. Online publication date: 2-Aug-2023.
  • (2022) Chronometry of distractor views to discover the thinking process of students during a computer knowledge test. Behavior Research Methods 54(5), 2463–2478. DOI: 10.3758/s13428-021-01743-x. Online publication date: 7-Feb-2022.
