DOI: 10.1145/1377966
BELIV '08: Proceedings of the 2008 Workshop on BEyond time and errors: novel evaLuation methods for Information Visualization
ACM 2008 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
CHI '08: CHI Conference on Human Factors in Computing Systems, Florence, Italy, 5 April 2008
ISBN:
978-1-60558-016-6
Published:
05 April 2008
Abstract

Information visualization systems allow users to produce insights, innovations, and discoveries. However, to understand these complex behaviors, evaluation efforts must be targeted at the component level, the system level, and the work environment level.

Some components can be evaluated with metrics that can be observed or computed, but many others require empirical user evaluations. Usability studies still tend to be designed in an ad hoc manner, focusing on particular systems, addressing only time and error issues, or failing to produce reusable and robust results. Intrinsic quality metrics are rare despite their necessity for true comparative evaluations. Controlled experiments are the most common evaluation technique, but there is a growing sense in the community that information visualization systems need new approaches to evaluation, such as longitudinal field studies, insight-based evaluation, and metrics adapted to the exploratory nature of discovery. We can conclude that while the overall use of information visualization is accelerating, the growth of techniques for the evaluation of these systems has been slow.

Our community is confronted with questions such as:

• "For this set of task, which visualization is best?"

• "How can I measure the utility of a visualization?"

• "Does the visualization I developed meet the target users' needs?"

An initial workshop, BELIV'06, was held at the Advanced Visual Interfaces (AVI) conference to address these questions. The workshop was well attended, featuring lively discussions about the limits of current practices, and several novel exploratory evaluation techniques were presented.

Attendees repeatedly expressed to us the wish to repeat the workshop. For BELIV'08 our aim was twofold: (1) to continue the exploration of novel evaluation methods and (2) to structure the knowledge on evaluation in information visualization around a schema in which researchers can easily identify unsolved problems and research gaps.

The scientific program of BELIV'08 is based on two types of contributions: research papers and position papers. Research papers were presented together with position papers in order to produce a more animated discussion. Continuing the line of BELIV'06, the research papers are published in the ACM Digital Library.

SESSION: What to measure and how
research-article
Productivity as a metric for visual analytics: reflections on e-discovery
Article No.: 1, Pages 1–6, https://doi.org/10.1145/1377966.1377968

Because visual analytics is not used in a vacuum, there are no cut-and-dried metrics that can accurately evaluate visual analytic tools. These tools are used inside existing business processes; thus, metrics to evaluate these tools must measure the ...

research-article
Increasing the utility of quantitative empirical studies for meta-analysis
Article No.: 2, Pages 1–7, https://doi.org/10.1145/1377966.1377969

Despite the long history and consistent use of quantitative empirical methods to evaluate information visualization techniques and systems, our understanding of interface use remains incomplete. While there are inherent limitations to the method, such as ...

research-article
Beyond time and error: a cognitive approach to the evaluation of graph drawings
Article No.: 3, Pages 1–8, https://doi.org/10.1145/1377966.1377970

Time and error are commonly used to measure the effectiveness of graph drawings. However, such measures are limited in providing more fundamental knowledge that is useful for general visualization design. We therefore apply a cognitive approach in ...

research-article
Understanding and characterizing insights: how do people gain insights using information visualization?
Article No.: 4, Pages 1–6, https://doi.org/10.1145/1377966.1377971

Even though "providing insight" has been considered one of the main purposes of information visualization (InfoVis), we feel that insight is still a not-well-understood concept in this context. Inspired by research in sensemaking, we realized the ...

SESSION: Qualitative methods and logging
research-article
Internalization, qualitative methods, and evaluation
Article No.: 5, Pages 1–8, https://doi.org/10.1145/1377966.1377973

Information Visualization (InfoVis) is at least in part defined by a process that occurs within the subjective internal experience of the users of visualization tools. Hence, users' interaction with these tools is seen as an 'experience'. Relying on ...

research-article
Grounded evaluation of information visualizations
Article No.: 6, Pages 1–8, https://doi.org/10.1145/1377966.1377974

We introduce grounded evaluation as a process that attempts to ensure that the evaluation of an information visualization tool is situated within the context of its intended use. We discuss the process and scope of grounded evaluation in general, and ...

research-article
Qualitative analysis of visualization: a building design field study
Article No.: 7, Pages 1–8, https://doi.org/10.1145/1377966.1377975

We conducted an ethnographic field study examining the ways in which building design teams used visual representations of data to coordinate their work. Here we describe our experience with this field study approach, including both quantitative and ...

SESSION: Methodology and case studies
research-article
Creating realistic, scenario-based synthetic data for test and evaluation of information analytics software
Article No.: 8, Pages 1–9, https://doi.org/10.1145/1377966.1377977

We describe the Threat Stream Generator, a method and a toolset for creating realistic, synthetic test data for information analytics applications. Finding or creating useful test data sets is difficult for a team focused on creating solutions to ...

research-article
Using multi-dimensional in-depth long-term case studies for information visualization evaluation
Article No.: 9, Pages 1–7, https://doi.org/10.1145/1377966.1377978

Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show/enhance features, patterns, clusters and trends, not always visible even when using a graphical representation. ...

research-article
The long-term evaluation of Fisherman in a partial-attention environment
Article No.: 10, Pages 1–6, https://doi.org/10.1145/1377966.1377979

Ambient display is a specific subfield of information visualization that only uses partial visual and cognitive attention of its users. Conducting an evaluation while drawing partial user attention is a challenging problem. Many normal information ...

Contributors
  • Northeastern University
  • Carnegie Mellon University
  • University of Maryland, College Park
  • Sapienza University of Rome

Acceptance Rates

BELIV '08 Paper Acceptance Rate: 10 of 16 submissions, 63%
Overall Acceptance Rate: 45 of 64 submissions, 70%

Year        Submitted   Accepted   Rate
BELIV '14   30          23         77%
BELIV '10   18          12         67%
BELIV '08   16          10         63%
Overall     64          45         70%