Sponsor: SIGCHI
Visualization has shown its ability to produce powerful tools for analyzing, understanding, and communicating data, making it accessible for many different tasks and purposes. The impact of visualization on everyday work and personal lives is demonstrated by many success stories---such as the increasing prevalence of Tableau, the interactive visualizations produced by the New York Times, or toolkits like VTK/ParaView, to name just a few. A large community of casual and professional users is increasingly consuming and producing both interactive and static visualizations.
While interactive visualizations move from research into practice at an increasing rate, it remains an important challenge to find appropriate methods to evaluate their utility and usability. There is a growing need in the community to develop specialized approaches and metrics for evaluation at all stages of the development life cycle that address the specific needs of visualization. This need is reflected, for example, in the increasing number of papers on visualization evaluation---not just at BELIV but also in other venues such as the IEEE VIS conferences and EuroVis. The goal of the BELIV workshop is to continue to provide a dedicated event for discussing visualization evaluation and to spread the word about alternative and novel evaluation methods and methodologies in our community.
Proceeding Downloads
Visualizing dimensionally-reduced data: interviews with analysts and a characterization of task sequences
We characterize five task sequences related to visualizing dimensionally-reduced data, drawing from data collected from interviews with ten data analysts spanning six application domains, and from our understanding of the technique literature. Our ...
User tasks for evaluation: untangling the terminology throughout visualization design and development
User tasks play a pivotal role in evaluation throughout visualization design and development. However, the term 'task' is used ambiguously within the visualization community. In this position paper, we critically analyze the relevant literature and ...
Considerations for characterizing domain problems
The nested blocks and guidelines model is a useful template for creating design and evaluation criteria, because it aligns design to need [17]. Characterizing the outermost block of the nested model---the domain problem---is challenging, mainly due to ...
Navigating reductionism and holism in evaluation
In this position paper, we enumerate two approaches to the evaluation of visualizations which are associated with two approaches to knowledge formation in science: reductionism, which holds that the understanding of complex phenomena is based on the ...
Evaluation methodology for comparing memory and communication of analytic processes in visual analytics
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective ...
Just the other side of the coin?: from error- to insight-analysis
To shed more light on data explorers dealing with complex information visualizations in real world scenarios, new methodologies and models are needed which overcome existing explanatory gaps. Therefore, a novel model to analyze users' errors and ...
Evaluating user behavior and strategy during visual exploration
Visualization practitioners have traditionally focused on evaluating the outcome of the visual analytic process, as opposed to studying how that process unfolds. Since user strategy would likely influence the outcome of visual analysis and the nature of ...
Value-driven evaluation of visualizations
Existing evaluations of data visualizations often employ a series of low-level, detailed questions to be answered or benchmark tasks to be performed. While that methodology can be helpful to determine a visualization's usability, such evaluations ...
Benchmark data for evaluating visualization and analysis techniques for eye tracking for video stimuli
For the analysis of eye movement data, an increasing number of analysis methods have emerged to examine and analyze different aspects of the data. In particular, due to the complex spatio-temporal nature of gaze data for dynamic stimuli, there has been ...
Evaluating visual analytics with eye tracking
The application of eye tracking for the evaluation of humans' viewing behavior is a common approach in psychological research. So far, the use of this technique for the evaluation of visual analytics and visualization is less prominent. We investigate ...
Towards analyzing eye tracking data for evaluating interactive visualization systems
Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel ...
Gamification as a paradigm for the evaluation of visual analytics systems
The widespread web-based connectivity of people all over the world has yielded new opportunities to recruit humans for visual analytics evaluation and for an abundance of other tasks. Known as crowdsourcing, humans typically receive monetary incentives ...
Crowdster: enabling social navigation in web-based visualization using crowdsourced evaluation
Evaluation is typically seen as a validation tool for visualization, but the proliferation of web-based visualization is enabling a radical new approach that uses crowdsourced evaluation for emergent collaboration where one user's efforts facilitate a ...
Repeated measures design in crowdsourcing-based experiments for visualization
Crowdsourcing platforms, such as Amazon's Mechanical Turk (MTurk), are providing visualization researchers with a new avenue for conducting empirical studies. While such platforms offer several advantages over lab-based studies, they also feature some "...
Evaluation of information visualization techniques: analysing user experience with reaction cards
The paper originates from the idea that in the field of information visualization, positive user experience is extremely important if we wish to see users adopt and engage with the novel information visualization tools. Suggesting the use of product ...
Toward visualization-specific heuristic evaluation
This position paper describes heuristic evaluation as it relates to visualization and visual analytics. We review heuristic evaluation in general, then comment on previous process-based, performance-based, and framework-based efforts to adapt the method ...
Experiences and challenges with evaluation methods in practice: a case study
The development of information visualizations for companies poses specific challenges, especially for evaluation processes. It is advisable to test these visualizations under realistic circumstances. Because of various constraints, this can be quite ...
More bang for your research buck: toward recommender systems for visual analytics
We propose a set of common sense steps required to develop a recommender system for visual analytics. Such a system is an essential way to get additional mileage out of costly user studies, which are typically archived post publication. Crucially, we ...
Sanity check for class-coloring-based evaluation of dimension reduction techniques
Dimension Reduction techniques used to visualize multidimensional data provide a scatterplot spatialization of data similarities. A widespread way to evaluate the quality of such DR techniques is to use labeled data as a ground truth and to call the ...
Oopsy-daisy: failure stories in quantitative evaluation studies for visualizations
Designing, conducting, and interpreting evaluation studies with human participants is challenging. While researchers in cognitive psychology, social science, and human-computer interaction view competence in evaluation study methodology as a key job skill, ...
Pre-design empiricism for information visualization: scenarios, methods, and challenges
Empirical study can inform visualization design, both directly and indirectly. Pre-design empirical methods can be used to characterize work practices and their associated problems in a specific domain, directly motivating design choices during the ...
Field experiment methodology for pair analytics
This paper describes a qualitative research methodology developed for experimental studies of collaborative visual analysis. In much of this work we build upon Herbert H. Clark's Joint Activity Theory to infer cognitive processes from field experiments ...
Utility evaluation of models
In this paper, we present three case studies of utility evaluations of underlying models in software systems: a user-model, technical and social models both singly and in combination, and a research-based model for user identification. Each of the three ...
Cited By
- Smuc M (2016). Just the other side of the coin? From error to insight analysis, Information Visualization, 10.1177/1473871615598641, 15:4, (312-324), Online publication date: 1-Oct-2016.
- Lingyun Yu , Efstathiou K, Isenberg P and Isenberg T (2016). CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds, IEEE Transactions on Visualization and Computer Graphics, 22:1, (886-895), Online publication date: 31-Jan-2016.
Index Terms
- Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization