This Interaction Metrics OER consists of two group projects that teach students how to create validated metrics for measuring human-computer interactions. If we want to measure how good a team is at teamwork, we might count communication utterances by members and check whether they are equally distributed. But is that measure predictive of team success? Probably not. If we want to measure how much a person likes an app, we might count the number of uses per day or the number of taps per usage session. While these metrics are countable, they are not accurate predictors of fondness for an app. These two projects ask students to create objective, useful metrics for real-world human-technology interactions and to validate them with predictive models and collected data. I tell students these projects are about "developing metrics for things that are hard to measure" and ask them to consider whether the proliferation of inexpensive sensors, AI, and IoT might make fuzzy constructs like "team trust" or being a "good leader" more measurable.
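The utterance-distribution example above can be made concrete with a simple evenness score. A minimal sketch (the function name and the sample counts are hypothetical) uses normalized Shannon entropy, which is 1.0 when every member speaks equally often and approaches 0.0 when one member dominates:

```python
import math

def utterance_evenness(counts):
    """Normalized Shannon entropy of per-member utterance counts.

    Returns 1.0 for perfectly equal participation and values near
    0.0 when one team member dominates the conversation.
    """
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy / math.log2(len(counts))

# Hypothetical utterance counts for two four-person teams.
print(utterance_evenness([10, 11, 9, 10]))  # balanced team: close to 1.0
print(utterance_evenness([40, 2, 1, 1]))    # dominated team: much lower
```

Whether such a score actually predicts team success is exactly the validation question the projects pose.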
The first project, Game Analysis, is a "warm-up" project that familiarizes students with these concepts and the methodology. Students choose a single-player video game to analyze. They are usually excited about this but don't realize how much work is involved. By the end of the project, having documented the timing of all video game entities, established players' attentional zones on the screen, and recorded and compared data from novice and expert players, students are often exhausted but proud of their work.
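The novice-versus-expert comparison can be sketched with a standard two-sample test. A minimal illustration (the reaction-time numbers are hypothetical, not from any student project) computes Welch's t statistic, which tolerates the unequal variances typical of novice and expert performance:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    va, vb = variance(sample_a), variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

# Hypothetical reaction times (ms) to a newly spawned on-screen entity.
novices = [410, 455, 390, 480, 430, 445]
experts = [280, 300, 265, 310, 290, 275]

# A large positive t indicates novices are reliably slower than experts.
print(round(welch_t(novices, experts), 2))
```

In practice students would pair the statistic with a p-value (e.g., via scipy), but the core comparison is this simple.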
In the second, larger project, the Interaction Metrics Project, students are asked to apply the same techniques to a real-world workplace environment. They define a work task, the employee roles involved, and the interactions that occur, and they propose at least three new metrics of interest that don't already exist. For example, one student team analyzed hospital shift changes, when one set of nurses hands off to the next, and established a metric for the quality of the shift change. Another team analyzed English-as-a-second-language professionals' workplace interactions with Google Translate and established a metric for fluency. Students gather the workplace data their metrics require, build a simple statistical predictive model, and then check with their workplace contacts to see whether interactions rated highly by the model are also perceived as high quality by workplace experts. Students who cannot gain access to a workplace have the option of programming a realistic task simulation of a workplace activity and using that instead; in this approach, they must ground their design decisions in real-world details of the workplace.
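The validation step described above, comparing model scores against expert judgments, can be sketched as a simple correlation. A minimal illustration (all scores and ratings below are hypothetical, not data from the hospital or translation projects):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between a candidate metric's scores
    and expert ratings of the same interactions."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical shift-change quality scores (0-1) vs. 1-5 expert ratings.
metric_scores = [0.82, 0.40, 0.65, 0.91, 0.30, 0.55]
expert_ratings = [4, 2, 3, 5, 1, 3]

# A strong positive correlation suggests the metric tracks expert judgment.
print(round(pearson_r(metric_scores, expert_ratings), 2))
```

A high correlation is evidence that the metric captures what experts perceive; a weak one sends students back to refine the metric.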
Downloads
The accompanying zip file contains the supplemental materials for this article.
Index Terms: Interaction Metrics Projects for Human Computer Interaction