
A Tale of Two Studies

Bowen, J., Reeves, S. and Schweer, A.

    Running user evaluation studies is a useful way of getting feedback on partially or fully implemented software systems. Unlike hypothesis-based testing (where specific design decisions can be tested or comparisons made between design choices), the aim is to find as many problems (both usability and functional) as possible prior to implementation or release. It is particularly useful in small-scale development projects that may lack the resources and expertise for other types of usability testing. Developing a user study that performs this task successfully and efficiently is not always straightforward, however. It may not be obvious how to decide what participants should be asked to do in order to explore as many parts of the system's interface as possible. In addition, ad hoc approaches to such study development may mean the testing is not easily repeatable on subsequent implementations or updates, and also that particular areas of the software may not be evaluated at all. In this paper we describe two (very different) approaches to designing an evaluation study for the same piece of software, and we discuss both the approaches taken and the differing results found, along with our comments on both.
Cite as: Bowen, J., Reeves, S. and Schweer, A. (2013). A Tale of Two Studies. In Proc. User Interfaces 2013 (AUIC 2013), Melbourne, Australia. CRPIT, 139. Smith, R.T. and Wunsche, B.C., Eds. ACS. 81-90.