Sponsor: SIGCHI
Information visualization systems allow users to produce insights, innovations, and discoveries. However, to understand these complex behaviors, evaluation efforts must be targeted at the component level, the system level, and the work environment level.
Some components can be evaluated with metrics that can be observed or computed, but many others require empirical user evaluations. Usability studies still tend to be designed in an ad hoc manner, focusing on particular systems, addressing only time and error issues, or failing to produce reusable and robust results. Intrinsic quality metrics are rare despite their necessity for true comparative evaluations. Controlled experiments are the most common evaluation technique, but there is a growing sense in the community that information visualization systems need new approaches to evaluation, such as longitudinal field studies, insight-based evaluation, and metrics adapted to the exploratory nature of discovery. We can conclude that while the overall use of information visualizations is accelerating, the growth of techniques for the evaluation of these systems has been slow.
Our community is confronted with questions such as:
• "For this set of task, which visualization is best?"
• "How can I measure the utility of a visualization?"
• "Does the visualization I developed meet the target users' needs?"
An initial workshop, BELIV'06, was conducted at the Advanced Visual Interfaces (AVI) conference to address these questions. The workshop was well attended, featuring lively discussions about the limits of current practices, and several novel exploratory evaluation techniques were presented.
Attendees repeatedly expressed to us the wish to repeat the workshop. For BELIV'08 our aim was twofold: (1) to continue the exploration of novel evaluation methods, and (2) to structure the knowledge on evaluation in information visualization around a schema in which researchers can easily identify unsolved problems and research gaps.
The scientific program of BELIV'08 is based on two types of contributions: research papers and position papers. Research papers were presented together with position papers in order to produce a more animated discussion. Continuing the practice of BELIV'06, the research papers are published in the ACM Digital Library.
Proceeding Downloads
Productivity as a metric for visual analytics: reflections on e-discovery
Because visual analytics is not used in a vacuum, there are no cut-and-dried metrics that can accurately evaluate visual analytic tools. These tools are used within existing business processes, so metrics to evaluate these tools must measure the ...
Increasing the utility of quantitative empirical studies for meta-analysis
Despite the long history and consistent use of quantitative empirical methods to evaluate information visualization techniques and systems, our understanding of interface use remains incomplete. While there are inherent limitations to the method, such as ...
Beyond time and error: a cognitive approach to the evaluation of graph drawings
Time and error are commonly used to measure the effectiveness of graph drawings. However, such measures are limited in providing more fundamental knowledge that is useful for general visualization design. We therefore apply a cognitive approach in ...
Understanding and characterizing insights: how do people gain insights using information visualization?
Even though "providing insight" has been considered one of the main purposes of information visualization (InfoVis), we feel that insight is still not a well-understood concept in this context. Inspired by research in sensemaking, we realized the ...
Internalization, qualitative methods, and evaluation
Information Visualization (InfoVis) is at least in part defined by a process that occurs within the subjective internal experience of the users of visualization tools. Hence, users' interaction with these tools is seen as an 'experience'. Relying on ...
Grounded evaluation of information visualizations
We introduce grounded evaluation as a process that attempts to ensure that the evaluation of an information visualization tool is situated within the context of its intended use. We discuss the process and scope of grounded evaluation in general, and ...
Qualitative analysis of visualization: a building design field study
We conducted an ethnographic field study examining the ways in which building design teams used visual representations of data to coordinate their work. Here we describe our experience with this field study approach, including both quantitative and ...
Creating realistic, scenario-based synthetic data for test and evaluation of information analytics software
We describe the Threat Stream Generator, a method and a toolset for creating realistic, synthetic test data for information analytics applications. Finding or creating useful test data sets is difficult for a team focused on creating solutions to ...
Using multi-dimensional in-depth long-term case studies for information visualization evaluation
Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show or enhance features, patterns, clusters, and trends that are not always visible even when using a graphical representation. ...
The long-term evaluation of Fisherman in a partial-attention environment
Ambient display is a specific subfield of information visualization that uses only partial visual and cognitive attention of its users. Conducting an evaluation while drawing only partial user attention is a challenging problem. Many normal information ...