AI & Soc (2003) 17: 322–339
DOI 10.1007/s00146-003-0286-6
ORIGINAL ARTICLE
Satinder P. Gill · Jan Borchers
Knowledge in co-action: social intelligence
in collaborative design activity
Received: 29 January 2003 / Accepted: 21 July 2003 / Published online: 11 October 2003
© Springer-Verlag London Limited 2003
Abstract Skilled cooperative action means being able to understand the communicative situation and to know how and when to respond appropriately for the purpose at hand. This skill is the performance of knowledge in co-action and is a form of social intelligence for sustainable interaction. Social intelligence, here, denotes the ability of actors and agents to manage their relationships with each other. Within an environment we have people, tools, artefacts and technologies that we engage with. Let us consider all of these as dynamic representations of knowledge. When this knowledge becomes enacted, i.e., when we understand how to use it to communicate effectively, such that it becomes invisible to us, it becomes knowledge in co-action. A challenge of social intelligence design is to create mediating interfaces that can become invisible to us, i.e., act as an extension of ourselves. In this paper, we present a study of the way people use surfaces that afford graphical interaction in collaborative design tasks, in order to inform the design of intelligent user interfaces. This is a descriptive study rather than a usability study, exploring how size, orientation, and horizontal and vertical positioning influence the functionality of the surface in a collaborative setting.
Keywords Coordinated autonomy · Graphical interaction · Knowledge in co-action · Parallel coordinated moves · Social intelligence
S.P. Gill (✉)
Center for the Study of Language and Information (CSLI), Stanford University, USA
E-mail: sgill@csli.stanford.edu
J. Borchers
Department of Computer Science, ETH Zentrum, Switzerland
This paper is a revised version of the paper "Knowledge in co-action: social intelligence in using surfaces for collaborative design tasks", presented at the International Workshop on Social Intelligence Design, Royal Holloway College, London, 5–7 July 2003
1 Introduction: knowledge in co-action
In this paper we focus our analysis on how we engage with the representations of another's actions and move with these representations when interacting in a joint design activity using various types of surfaces. Particular
emphasis is on the rhythmic patterns in this coordination, and observations of
how the use of artefacts can influence these. The motivation behind this focus
is to understand the formation and transformation of knowledge in communication. It is located within a framework that sees cognition as a dynamic
system that co-evolves and emerges through the interaction of mind, body
and environment. This co-evolution includes body moves (kinesics and
kinaesthetics) that give the cognitive dynamic system meaning. Previous work
on the metacommunicative1 body dynamics of the engagement space (Gill, Kawamori, Katagiri and Shimojima 2000) showed that composite dialogue acts,
of gesture, speech and silence (body moves), play an important role in the
flow of information in interaction. Body moves are forms of behavioural
alignments.
Skilled cooperative action seems to depend on specific types of behavioural
alignments between actors in an environment. These alignments allow for a
degree of coordination or resonance between actors that constitutes the
knowledge they have available to them, both explicitly and implicitly.
Knowledge is seen here as a process that is dynamically represented in actors'
behaviours and the tools, technologies and other artefacts within an environment. These behaviours involve the sense of touch, sound, smell and vision. We will focus on touch, sound and vision in this study. Coordinated
structures of these behaviours and artefacts—structures which can be extended and transformed through technology—form the interactional space in
which the process of knowledge creation through co-action takes place. This
analysis of coordinated structures in skilled cooperative action involves
understanding what constitutes the socially intelligent behaviour that we take so much for granted in our everyday behaviour. As a framework, the idea of
social intelligence has been defined2 as the ability of a collection of actors/
agents to learn and to solve problems as a function of social structure, and to
manage their relationships with each other through intelligent actions. Our
study of such intelligent action is presented in this paper, through a study of
body moves of designers collaborating and negotiating. It lies within the
scope of social intelligence design, which involves the detailed understanding of interpersonal actions exhibited in computer-mediated or conventional communication environments in the search for better design of communication media.
1 Scheflen (1975) on the relation between kinesics (movements) and language: the former can "qualify or give instructions" about the latter in a relation that Bateson (1955) called metacommunicative, whereby the "movement of the body helps in clarifying meaning by supplementing features of the structure of language". Body moves do just this, and contribute further to the idea that the structure of language lies in its performance.
2 Toyoaki Nishida's summary of the idea of social intelligence in his paper at the close of the Social Intelligence Design Conference, Royal Holloway, London, 2003.
Fig. 1 Physical contact enabled at surfaces
In the analysis that follows, we have found that certain behavioural alignments (e.g., the body move showing acknowledgement3 and suggestions; Fig. 1) operate upon the surface of the interfaces, e.g., by touching and pointing actions, and suggest that touch influences the implicit formation of ideas. This haptic
dimension of knowledge is part of the body move and other salient behavioural
alignments that we will present here. The relation between touch and implicit
knowledge has been investigated by Reiner (2003, 1999). She studies, for example, the touch language that surgeons learn through experience, linking patterns of touch with interpretations in a non-symbolic manner. Related to this, she also studies how gestures are part of physics learning.
Our research also shows that design activities take place in zones of interaction (to reflect, negotiate and act) within the configurations of the space of engagement around the surface(s). We note how these zones are blurred when physical
contact is enabled at surfaces (as in Fig. 1), but demarcated when they are not
enabled. This is not always evident in speech, but is visible in the spatial and
movement orientation around the surface(s) and between participants.
2 The studies
The analysis in this paper is taken from an experiment and an observation study,
to explore how people coordinate their interaction in a shared task. Students in
pairs (dyads) and groups were asked to undertake collaborative conceptual
drawing tasks, and student design projects, using the following affordances:
3 "Acknowledge (Ack). The acknowledge move gives an idea of the attitude of the response, i.e., how the person hears, understands and perceives what is being discussed. It shows continued attention. In the discourse act (DA) 'acknowledge', this aspect of acknowledge, which was raised by Clark and Schaefer (1989), has not been included because it leaves no trace in a dialogue transcript to be coded. However, it is a part of the body move. The hearer or listener demonstrates, with his gesture, how he is acknowledging the other's proposal or request for agreement. The body move occurs in response to the other's verbal information reference or suggestion, and their body release-turn or bodily placeholder. Its associated DA is the speech act 'ack' or 'accept'. The movement creates a change in the degree of contact which indicates the nature of the acknowledgement or acceptance." (Gill, Kawamori, Katagiri and Shimojima 2000)
a) paper and pens; b) a shared whiteboard; c) SmartBoards (electronic whiteboards for Web browsing, typing up session notes, drawing mock-ups, and viewing their work). In the student projects using the multiple SmartBoards, a range of activities was recorded and some salient categories identified, as follows: device interface, interaction dance, attention control, team communication
and personal work.4 We have sought to capture the management of both the
body and speech spaces within a task where you need to produce something
together and agree upon it, i.e., in the performance of knowledge in co-action.
The experiment involved two drawing surfaces, used by different sets of subjects: a whiteboard and a SmartBoard. The SmartBoard is a large-scale, touch-sensitive computer-based graphical user interface (an electronic whiteboard). "Smart" technology does not permit two people to touch the screen at the same time. The contrast between drawing at the SmartBoard and at the whiteboard was
expected to reveal whether or not there are particular differences in body moves
and gesture and speech coordination at these interfaces. Preliminary analyses5
reveal that the participants' commitment, politeness and attention to each other become reduced at a single SmartBoard, showing behaviours that are in
marked contrast to those of users at a whiteboard. Furthermore, the quality of
the resulting design is lower when using the SmartBoard.6
Acting in parallel, e.g., drawing on the surface at the same time, involves a
degree of autonomy that is coordinated, i.e., where the designers are aware of
where and how they are in relation to the other. The management of coordinated
autonomy is culturally determined. In our study we observe patterns of movement from sequential (e.g., turn taking) to parallel actions, as part of this design
activity, and suggest that coordinated autonomous action is part of sustainable
collaborative activity.7 The role of autonomy within collaboration is an idea that
lies at the heart of this research and has implications for any kind of collaborative activity, including cross-cultural collaboration. For instance, in the study,
when a group of four users has three SmartBoards available to them, there
appears to be a transposition of the patterns of autonomy and cooperation that
one finds between a pair of users working on a whiteboard.
The focus of our analysis is on the gestures participants use to manage their
interactions with each other and the interface during collaborative activity. This
activity takes place in the ‘‘iRoom’’, which is the laboratory of the Stanford
Interactive Workspaces project8 (Guimbretiere, Stone and Winograd 2001). The
study will raise issues for designing mediating interfaces that could support collaborative human activities involving sustainable and committed engagement of the self and the interpersonal self.
4 Some examples of behaviours are: device interface: right clicking, marker fixing, dragging,
reaching, tapping, comparing, referring to paper, erasing; interaction dance: side stepping,
pacing, stepping back, hand waving, cutting in, waiting, bumping, peeking, touching (others);
attention control: pointing, asking permission, directing, teaching, concurrent control, posturing, calling out; team communication: reading aloud, catching up, filling in, talking, swaggering, sharing, taking turns, role-playing; personal work: reading, note-taking, scanning,
browsing, watching, concentrating, viewing, drawing and pointing.
5 For a preliminary discussion of this research, see Borchers, Gill and To (2002).
6 This could in part be due to the awkwardness of the interface for producing smooth drawings.
7 The relationship between autonomy and sustainable collaboration is being developed in a forthcoming paper by the author.
8 http://graphics.stanford.edu/projects/iwork/
3 Parallel coordinated moves and collaborative activity
In 2001, whilst exploring the affordances of SmartBoard technology, we found that a SmartBoard's inability to allow for parallel action at the surface of the task being undertaken by participants made for a useful body of data with which to analyse how such action affords collaborative activity. In the process of our observations, we also discovered that the use of multiple SmartBoards produced similar patterns of behavioural alignments of body moves and parallel actions (see Borchers, Gill and To 2002), but in a very different form from those undertaken at a whiteboard or over a sheet of paper on a horizontal surface (table).
When one has to wait one's turn to act at the surface, (a) it may take longer to build up experiential knowledge of that surface than if one could move onto it when one needs to, and (b) there is a time lag before the other person working with you can experience with you, in a manner of speaking, your experience of the situation, i.e., there is an awareness lag. These two difficulties are, we suggest, linked because of this experiential dimension of tacit or implicit knowledge. With multiple boards in parallel use, awareness of the experience seemed more fluid than that of a dyad at one board, evident in the movements around the boards to gather and disperse where rhythms in behavioural alignment were halted.
The interest in parallel coordinated action first arose during a study of
landscape architects working on a conceptual design plan. The study focused on
analysing the metacommunicative dynamics of body and speech coordination,
and identified various body moves (Gill, Kawamori, Katagiri and Shimojima
2000). These are a kind of interactional synchrony (Birdwhistell 1970). One of these body moves was coded as the parallel coordinated move (PCM). This contrasts in rhythm with the other body moves, whose action-reaction rhythms maintain the flow of information in the communication situation by participants. It was further analysed (Gill 2002) to understand its quality. During a
five-minute video excerpt of architects working on a conceptual design plan, this
PCM occurred only once and lasted 1.5 seconds. It was salient because it was the first time in that session that the disagreement between the two architects found a resolution, and it involved both their bodies acting on the surface at the same time, even whilst presenting alternative design plans. One of them was silent and the other speaking. It enabled the grounding in the communication to come into being (see Gill 2002 for a deeper analysis of the PCM) by enabling an open space for the negotiation of differences and possibilities for creative co-construction. The opening and closure of the PCM are marked by action-response body moves. For example, the "focus" move involves a movement of the body towards the area the "speaker" or "actor" is attending to, i.e., the space of bodily attention, and in response causes the listener or other party to move his or her body towards the same focus.
In order to understand the PCM further, and to gather more examples for
analysis, a number of configurations for a similar task were set up to collect
further video data. These are the studies reported here. One task set was for
dyads of students to design shared dorm living spaces. We also collected data
and made a preliminary analysis of group activity where students are using
multiple large-scale surfaces, i.e., SmartBoards (Borchers, Gill and To 2002).
The PCM is explored as a category of body move that has its own set of
variable configurations, where the basic common defining feature is that participants act at the same time. In this paper, we concentrate on such actions
taking place upon the surface of the interface, and the contexts within which
these occur. Such actions, for example, can be to indicate ideas or proposals with
a pen or finger or hand. We also consider cases where only one participant has physical contact with the surface, in order to glean some understanding of what the function of touching is, and to reflect that back on the case of parallel action.
4 Examples from the case studies
We will consider examples from two settings: a) using felt pens and standing at a
whiteboard, b) using electronic pens, fingers and hands (tools that can produce
representations on the surface) and standing at a SmartBoard, and marginally
draw upon two other settings for reflection, namely: c) using paper and pen, and
seated at a table; d) using paper and pen, and standing at a table. Using these
examples, we will compare alignments and indicate salient phenomena that need
deeper exploration for an understanding of how the configuration and affordances of surfaces influence collaborative activity.
4.1 The haptic connection
There is something about the contact that is made when standing at a horizontal surface that may alter the possible modes of configuration for communicating information, compared with the situation of standing at a vertical surface. When standing
at a horizontal surface, the eye contact between participants is available without
departing from the particulars in the locus of the surface. In the following
examples (Figs. 2, 3, 4), we consider a case where one person is looking at the
other. In Fig. 2 the person at the whiteboard (A) has to turn his head from the
surface to see the person (B) who is standing back from it. There is neither a
haptic connection via the artefact nor is the representation visually available to
him, at that moment. (A) is discussing the location of a fireplace in the design of
a room, which he has touched with a pointing hand gesture before he turns his
head away. The use of deictics and his contact with the surface enables (A) to locate the idea both for himself and for (B), who maintains a gaze on it, such that
he (A) does not need it in front of him at that moment. This pattern is repeated
by other dyads at whiteboards.
In the case of the architects (Fig. 3) standing at the drawing-table, the person to the left (P) has his hand on the drawing surface and is holding a pen (in this case, he is holding a position or idea), whilst his speech acknowledges ("yeh") what the person to his right (W) is saying. (W) is pointing to another particular place on the drawing and making a design suggestion. P is holding an idea and W is proposing one. The contact on the surface allows for both acts simultaneously.
At the SmartBoard, pointing or touching acts at the surface operate upon it, causing marks. Hence, when such acts do occur, it is to explicitly suggest an idea
by drawing it, or, if it is not intended as an operation, the result is an unintended mark or two.9 Acting on the surface seems to be a way of making things clear, e.g., in seeking confirmation or to emphasise a position. In this case, we find that instead of pointing directly on the surface, pointing combined with looking at the other person to whom one is communicating an idea is done by keeping the hand hovering over the place one is referring to (as in Fig. 4).

Fig. 2 A touches the surface, then turns his head to see B

Fig. 3 Surface contacts allowing for simultaneous acts

Fig. 4 Pointing over the surface combined with looking at another to communicate an idea
9 Work at Berkeley on Transient Ink is a potential solution, leaving marks that disappear after a while (Everitt, Klemmer, Lee and Landay 2003).
The haptic dimension in these examples seems to be important in its combination with vision, for touch surfaces that permit non-operational contact. Compensation for not being able to use the surface of the SmartBoard is to hold one's hand over the specific point or idea being proposed for a specific location.
We do know from work by Gregory (1963) on touch and creativity that there is a direct relation between being able to create conceptual designs and the ability to feel our world tactilely. Although that work primarily concerns the individual's experience, it bears upon the collaborative experience in these design tasks, given the physical role of the body, and where physical autonomy within coordinated parallel action constitutes a form of joint action. This cohesion between the
virtual or imaginary, the tactile and the physical connection of coordinated
movement involves social intelligence, i.e., it requires us to be able to use our
bodies in rhythmic coordination with speech to share and co-create knowledge.
So far, in Figs. 2, 3, 4 we have considered three surfaces, all involving
standing positions, and the making or holding of a specific design idea by a
designer(s) at a specific location on a conceptual design plan. We made observations about how looking at another person and away from the surface, whilst
touching it (the drawing-table), and after touching it (the whiteboard), involves
certain kinds of behavioural alignments that differ from the use of pointing
gestures held above the surface during the period of looking away from it (the
SmartBoard).
4.2 Narrative and virtual space
Let us consider the case of zones, which were introduced earlier in the paper. In
Fig. 3 (the drawing-table) one architect is acting on the surface and the other
reflecting on the surface. The narrative space lies in between the bodies and the
surface, where the surface provides a mediating point of contact within it.10 In
Fig. 2, neither are acting upon the surface of the whiteboard, but are negotiating
and reflecting. The narrative space lies in a virtual space between the bodies, and the point on the surface that is being discussed has been identified with a prior touch. The person to whom it was indicated (B) can look at it and at the
proposer (A). In the case of the SmartBoard (Fig. 4), neither are acting on its
surface, but are negotiating and reflecting with a hand held over the part on the
surface being discussed. The hand position is needed whilst the idea is clarified in
the mind of the person proposing it,11 and they do not move into a virtual
narrative space away from the surface until the idea becomes clear enough for
such discussion.12
4.3 Engagement space
At this juncture, it may be helpful to situate this discussion about zones, and the
use of hands and eyes to make contact, in the context of the engagement space
(Gill et al. 2000; Gill 2002). The engagement space has been defined as the arena
within which coordinated body movements take place in interactive settings.
10 This is work in progress in a forthcoming paper by Satinder Gill.
11 This could be another category of body move.
12 This is a hypothesis and not a statement of fact, but is under analysis.
These movements take place within shifting spaces of engagement. An engagement space may be defined as the aggregate of the participants' body fields of
engagement. An engagement field is based on some commitment in being bodily
together. Hence we can call the engagement space the body field of engagement.
In defining this space, it was most useful to draw upon Allwood et al.'s theory of
communicative acts (1991) where they speak of participants signalling and
indicating their orientation to each other.13 In so doing they can increase
commitment by increasing contact.
The body field of engagement is set as the communication opens and the
bodies indicate and signal a willingness to co-operate. The body field of
engagement is a variable field and changes depending on the participants being
comfortable or uncomfortable with each other. For instance, in the case where
one person moves their hand over into the other's space, and that person withdraws their hand, this indicates that the 'contact' between these persons is
disturbed. The degree of contact and the nature of distance are expressed in
terms of commitment and attitude. Hence an immediate space of engagement involves a high degree of contact and commitment to the communication situation, whereas a passive distance is less involved and committed; in disagreement, the space is very distanced and commitment is withheld.
Disagreement or discrepancy can necessitate a reconfiguration of the body
field of engagement due to a disturbance in the relationship between the
speakers, so that a feeling of sharing an engagement space is re-established. This
reconfiguration is a rhythmic bodily reshaping of the field of engagement. This
category of action occurs because there is a problem in the overlap of one body's field of engagement with the other body's field. Note that if there is no problem
in the overlap of their respective fields, the participants can undertake parallel
co-ordinated moves.
4.4 Engagement space and zones of interaction
The spatial fluidity of the zones (reflection, negotiation and action) of interaction makes it challenging to provide a definitive demarcation of their boundaries in terms of physical space alone, as such a demarcation does not help to explain how the zones work together when the designers are in more than one zone at
the same time. Further, we need a framework to understand the movements
from one zone to another. The zones were initially observed as being in distinct
physical areas as depicted in Fig. 5: from left to right—reflection, negotiation,
and action:14
13 Allwood, Nivre and Ahlsen (1991) pay special attention to the context sensitivity of feedback. Aspects of their theory were adapted for the body situation, such as 'contact'. The four basic communicative functions are: 1) contact; 2) perception; 3) understanding; 4) attitudinal reactions. Winograd and Flores (1986) emphasise 'the need for continued mutual recognition of commitment' (p. 63), which we find expressed in the maintenance of the 'engagement space', and they speak of communication as 'dance', a metaphor suited for body moves. Further, their argument for 'sufficient coupling' to ensure frequent breakdowns and a standing commitment by [participants] to enter into dialogue in the face of breakdown is helpful for understanding the role of parallel coordinated motion in sustaining collaborative activity.
14 This emerged when reflecting out loud with Renate Fruchter, Stanford University, in the Spring of 2002, about Donald Schön's work, in relation to this study.
Fig. 5 Reflection, negotiation and action
1. Reflection zone: if one person is acting at the surface, and the other person is standing further back and silently observing this action, then he or she is reflecting. If both are standing back and looking at the surface, then both are reflecting. This state does not involve any immediate intent to act.
2. Negotiation zone: if both are engaging about an idea and there is some movement or indication to access the surface, then this occurs in the negotiation zone.
3. Action zone: this takes place upon the surface and involves direct physical contact with it.
The categorisation of zones does, however, provide a helpful description of a base-level pragmatic activity above that of the metacommunicative (e.g., body moves), and we can see how the latter work in relation to them. By drawing upon
the framework of the engagement space, we can see how the metacommunicative
level of communication carries the fluidity of the zones, as it operates within
movement, time, and space.
In Figs. 2, 3, 4, when looking away from the surface, the engagement space
alters and bodily contact increases with the act of (A) opening the frontal part of
his body and eye contact to the other person (B). In the case of the whiteboard,
engaging (B) involves two steps on (A)'s part: first touching the surface to indicate the idea and to hold its location for himself, and then turning to look at
(B) and engage him in a discussion about it. In the case of the drawing-table, (P) increases contact in two steps. He enters the floor space of the drawing by
placing one hand down upon it, holding a pen, with his gaze on the place he has
an idea about. His fingers and pen are moving in his hand in rhythm with the
speech and motion of the other architect (W), but his hand is in a fixed position.
After touching the surface, (P) looks at (W) to acknowledge his action and gauge
it. The increase in contact using the SmartBoard involves two steps. First, the
designer (C) locates the idea with his hand gesture, and then opens his frontal
body and eye contact to the other designer (D) whilst holding his hand gesture.
All three examples indicate or invite increased commitment in the engagement
space, and make for increased contact. The body fields of the designers are not
overlapping.
When a designer is making contact with the surface to act upon it, whilst the
other person is doing so too, there is an attempt to engage with the body field of
the other person, as in the case of the architects. It also happens in the example
that Fig. 1 is taken from. In Fig. 6 the designer on the right side, close to us (E),
Fig. 6 E enters the body field of F
enters the body field of the other one (F) who is currently drawing (the action
zone), and uses his index finger to trace out a shape that denotes a bed. He is
proposing this idea to (F), who is drawing, and getting his opinion (the negotiation zone). Both zones are operating at the surface.
The body field of the person drawing (F) is not disturbed, and as we know
from the discussion of the engagement space, this indicates a high degree of
contact and is identifiable as a PCM.
(F) acknowledges (E)'s proposal after tracing the proposed idea above the surface of the board with his pen, whilst (E) taps the position of one bed with the back of his hand on the surface. After tracing, (F) continues to draw, and his pen touches the surface at the same time as (E) begins to lift his hand away. There is no break in the fluidity of the rhythm of the coordination between them (of body and speech).
4.5 Parallel coordinated actions
The following examples are of more parallel coordinated actions. During the
studies, we note many instances of parallel actions taking place at the surfaces of
the table and whiteboard, and attempts to do so at the SmartBoard when only
one such board is available.
Fig. 7 A silent parallel shift
Fig. 8 Waiting for D to end his turn
In Fig. 7, (E) is standing back, watching and talking, and (F) is drawing on
the whiteboard. (F) has his body positioned to accommodate (E) by slightly
opening it, slanted to the right, to share the engagement space with (E). At some
point, (E) looks to the left of (F) to an area on the whiteboard and moves
towards it. He picks up another felt pen and begins to draw as well. As (E)
touches the surface, (F) shifts his body and alters his posture so that it is now
open slanted to the left, and increases contact with (E). Both are now acting in
parallel. This shift occurs in silence.
At the SmartBoard (Fig. 8), C is standing back whilst D is drawing. He looks
and moves to a position to the right of D, on the SmartBoard. He leans into the
surface but cannot draw because he has to first wait for D to end his turn.
D, without looking up, speaks, and his utterance causes C to turn his body back to look at him. As he cannot yet act, C moves back from the surface and waits, and as he is doing so, he breathes in deeply in frustration. D notices him, pauses his drawing, turns to look at him and moves back from the zone of action, allowing C to move into it (Fig. 9).
Once C is acting, i.e., drawing, D continues with his drawing on the SmartBoard (Fig. 10). The result is a disturbance on the board, and a jagged line cuts across from D's touch point to C's, causing them both surprise and laughter (Fig. 10). D momentarily forgot that you cannot touch the surface at the same
time. The need to act whilst another is acting is not a conscious one. This
autonomy in co-action seems to be part of the coordinated collaborative process
but at a metacommunicative level.
In Figs. 8 and 9 we see an attempt to act, causing frustration until the need to
act is noticed, at which point the turn to act is offered to C by D. It is significant
that they recognise each other's need to act, signal this need (moving the body
away, distancing), respond to it (speech and body) and, further, that they
forget the limitations of the surface in affording this need. In contrast, the
whiteboard (Fig. 5) permitted a more fluid movement around the surface, as
the surface enforced no pause, and no turn-taking was required on one
designer's part to permit the other person to act.
Fig. 9 Allowing C to move into the zone of action
Fig. 10 A disturbance on the board
These examples are of parallel coordinated actions that involve autonomy,
where autonomy involves awareness of and attendance to the state of engagement
in the space between participants and the surface(s). When a designer at the
SmartBoard does not easily give the turn to the other, we see various
strategies to force it. These include moving close to the board and inside the
visual locus of the drawing space in a quick motion; moving back and forth;
reaching for a pen; looking at the pen; simply reaching out and asking for the
pen the other person is currently using; or moving right in front of the body of
the person currently drawing, thereby forcing them back, and taking a pen from
the pen holder. As either person can act at the whiteboard, there is no need for
such strategies.
In contrast to the SmartBoard, at the whiteboard an autonomous performance
by one person that is not occurring in co-action can bring a reaction to
regain co-action. In the example below (Fig. 11), (E) looks up and stands to draw
something higher up on the board, just after (F) has knelt down to draw beside
him. (E) has altered his position such that the contact within the engagement
space becomes too low for (F) to be aligned with him in order to act.
(F) attempts to regain contact so that he can work with (E), first by speech
and, when that fails, by using body moves to attempt contact and focus (Gill et al.
2000).
Fig. 11 An autonomous performance by one person that is not occurring in co-action bringing a reaction to regain co-action
5 Discussion
The ability to engage at the surface shapes the strategies used for managing
autonomous behaviour through various body moves. These strategies differ
between the SmartBoard and the whiteboard. At the former surface, for
example, body moves such as take-turn are used, where the body field of the
person acting is disturbed by the other one entering it, and a reconfiguration of
the engagement space is required. At the whiteboard, body moves such as
attempt contact and focus are used to increase contact.
In all the dyads working at the SmartBoard we observed moments where one
person stands back in silence, waiting (in contrast to the whiteboard, where the
person standing back sometimes speaks); turns away to look at other parts of
the surface and looks back at the person drawing because they cannot act;
looks at the pens, indicating an interest in acting; or moves around the person
drawing, using the body-field disturbance strategies listed above to intercept
and take the turn or force it. If the SmartBoard were a horizontal surface, there
would be further inhibitors to natural actions, such as bringing one's
hands into the drawing space to increase contact, as in the example of the
landscape architects (Fig. 3).
The SmartBoard makes visible those actions that are invisible, or are extensions
of ourselves, when acting at a whiteboard. Winograd and Flores (1986)
speak of how 'structure' in communication 'becomes visible only when there is
some kind of breakdown' (p. 68). Such actions include rubbing something out
whilst another is drawing, checking something by pointing at it, or touching the
surface with a finger or hand to think about an idea. When all these aspects
are inhibited or have to be negotiated, the fluidity of sharing an engagement
space in an interactive drawing task becomes altered by the kinds of communication strategies available to participants to achieve collaborative activity.15
The simultaneous synchrony of PCMs in drawing, or of being able to touch the
surface together, provides a certain kind of awareness of states of contact
within an engagement space. This synchrony allows for the multi-dimensional
expression of ideas across combinations of zones of activity, using a pen, hand
or finger to sketch ideas.
These ideas can be rubbed out, located in one's self and made clear for the
other person. Contact with each other's ideas can be made with gestures as well
as speech. This is tacit bodily knowledge of self and intra-self.
We have noticed that in group activity using multiple SmartBoards, the
patterns of rhythmic coordination of sequential movements and parallel
coordinated actions bear similarity to those in using a single whiteboard, with
additional characteristics due to the increased complexity of the task, the larger
number of participants and the greater number of surfaces. The locus of action
in our study is around the three SmartBoards seen in the figures and around the
table in the centre of the room. A common pattern of motion activity between
the boards is to have all three in use, or two boards in use, with one or two
persons at each. When problems are noticed, either through pauses in body
action at a surface or through someone saying something, the other group
members migrate towards the problem space and try to help resolve it. As they
cluster around a board, they frequently take turns by moving in and out of the
problem focal spot to the outer rim, and once the problem is solved, they
disperse to the separate boards.
This interactional dance happens when all the participants are at the boards,
as this enables an awareness that allows them fluidity of movement within
each other's problem spaces. The fact that there are four students to three
boards may help this fluidity, as one person will, at any time, not be acting on
the surface, and hence have a sense of actions occurring elsewhere. However,
other patterns occur in this configured room. Take Fig. 12, where two students
at the centre SmartBoard are having some difficulty in erasing, using
Photoshop, something they have drawn over the photographic image that they
are working on.
The student to the left of the centre SmartBoard has asked his partner to try
to "take it (the pen stroke) away" using the rubber. He is overheard by another
student who is standing at the table (facing us in the picture). She breaks off her
communication, stands straight, turns to look at the dyad, observes, and asks
them if they are having problems "deleting from the image". The dyad attempt
to open the image from another board, but that proves temporarily unfruitful.
The "research observers", who are standing in the background behind the
camera, hint to the student who is watching the pair at the board to try using
the third SmartBoard on the right, gesturing with a pointing arm directed at it;
she acts upon the hint. All three SmartBoards are then used.
This is a different dynamic from the sequential-parallel movement transitions,
as the participants undertake different activities in spaces that separate them.
Hence there is a lag in awareness. In the case above, the observer intervened
to help because of difficulties in using the surface functionalities. We have
discussed the limitations of the desktop metaphor for such design surfaces and
design activity in Borchers, Gill and To (2002). The proximity of participants to
each other within the space facilitates overhearing,16 which is significant in
helping the awareness and maintenance of the engagement space of the group.
15 The whiteboard is slightly unsteady if one person moves heavily upon its
surface, but that is well handled and managed by the participants. In the
example where E taps the surface with the back of his hand, F has to
momentarily lift his pen, yet the rhythmic coordination between them is
maintained.
Fig. 12 A different dynamic to the sequential-parallel movement transitions: the overhearer and the mediator
The analysis presented in this paper is work in progress. As part of a
design effort to better understand how providing more contact affordances at
the surface can improve collaboration in joint activity, we have been designing
software that permits the simultaneous operation of multiple functions at the
surface.
6 Conclusions
Collaboration and cooperation in joint activities are analysed as having three
basic elements: the skill to grasp and respond to the representations of the tacit
dimension of our actions (e.g., in body moves, gestures, sounds); the ability to
coordinate this grasping and responding in a rhythmic synchrony of sequential
and parallel actions; and the coordinated autonomy that occurs within parallel
coordinated movements and involves awareness of and attendance to the state
of engagement in the space between us and interfaces.
16 The SANE Project at Royal Holloway, London, shows that people who
overhear others talking in work environments are participants of a kind within
that space, and constitute part of the organisational knowledge.
The analysis of PCMs shows the importance of coordinated autonomous
behaviour for sustainable collaborative activity, as it facilitates cooperative
behaviour. Without it, the designers use disturbance strategies or behaviours.
These ensure that the design task is completed, but with less engagement at the
conceptual level.
The disruption of parallel coordinated action makes it problematic for participants to achieve a tacit awareness of their state of contact within the
engagement space at the surface of the board. The fluid coordination of their
rhythmic synchrony of body and speech, that would normally be invisible, i.e.,
not something they are consciously aware of doing, is made visible to them and
they have difficulty in getting it back.
Parallel coordinated actions occurring at the surface have a physical touch
dimension, and there is an additional value to being able to touch the surface.
We have found that touch enables the designers to create narrative spaces
between their bodies and the surface of the interface, where the surface becomes
a mediating point of contact within that space, which is at once virtual and
imaginary. Where we cannot touch the surface to indicate an idea that we are
talking about, for example, deciding on the location of a fireplace in the room
we are drawing, we use our hands and arms to hold that place, above the
surface, until we are ready to shift away from the surface to the virtual
narrative space.
This study of the complexity of body moves of pairs and groups of designers
collaborating and cooperating on design tasks shows how they learn and solve
problems as a function of communicative and social structure, and manage their
relationships with each other through intelligent actions. By intelligence, we
mean the skill of grasping and responding to the representations of the tacit
dimension of our actions and knowledge, appropriately for the purpose at hand.
We call this the performance of knowledge in co-action. A challenge for
designing mediating interfaces is for them to afford us our human skills of
engaging with each other, communicating information and forming knowledge.
Acknowledgements Thanks and acknowledgements to Ramit Sethi and Tiffany To for their help
in this research, and to Terry Winograd for his support of this work in the iSpaces Project at
Stanford University. Thanks also to Syed Shariq for his encouragement to develop the
framework of ‘‘knowledge in co-action’’, originally as a theoretical frame for the Real Time
Venture Design Lab (ReVeL) at Stanford University. Thanks also to Renate Fruchter, Duska
Rosenberg and Toyoaki Nishida for their comments on the paper.
References
Allwood J, Nivre J, Ahlsen E (1991) On the semantics and pragmatics of linguistic feedback.
Gothenburg Papers in Theoretical Linguistics 64
Bateson G (1955) The message 'This is the play'. In: Schaffner B (ed) Group processes, vol II.
Macy, New York
Bavelas JB (1994) Gestures as part of speech: methodological implications. Res Lang Soc Inter
27(3):201–221
Birdwhistell RL (1970) Kinesics and context. University of Pennsylvania Press, Philadelphia, PA
Borchers J, Gill S and To T (2002) Multiple large-scale displays for collocated team work: study
and recommendations. Technical Report. Stanford University
Clark HH, Schaefer EF (1989) Contributing to discourse. Cog Sci 13:259–294
Everitt KM, Klemmer SR, Lee R, Landay JA (2003) Two Worlds Apart: Bridging the Gap
Between Physical and Virtual Media for Distributed Design Collaboration. Proceedings of
CHI 2003, ACM Conference on Human Factors in Computing Systems
Gill SP (2002) The parallel coordinated move: case of a conceptual drawing task. Published
Working Paper: CKIR, Helsinki
Gill SP, Kawamori M, Katagiri Y, Shimojima A (2000) The role of body moves in dialogue.
RASK 12:89–114
Gregory R (1963) Recovery from blindness. A case study. Experimental Psychology Society
Monograph No 2
Guimbretiere F, Stone M, Winograd T (2001) Stick it on the wall: a metaphor for interaction
with large displays. Submitted to Computer Graphics (SIGGRAPH 2001 Proceedings)
Reiner M (1999) Conceptual Construction of Fields with a Tactile Interface. Interactive
Learning Environments 6 (X), 1–25
Reiner M and Gilbert J (in press) The Symbiotic Roles of Empirical Experimentation and
Thought Experimentation in the Learning of Physics. International Journal of Science
Education
Scheflen AE (1975) How behaviour means. Anchor Books, New York
Winograd T, Flores F (1986) Understanding computers and cognition: a new foundation
for design. Ablex, Norwood, NJ