
xCoAx 2013

Proceedings of the
first conference
on Computation
Communication
Aesthetics and X
xCoAx2013 Bergamo, Italy
Proceedings of the first conference on
Computation, Communication, Aesthetics and X
xCoAx: Proceedings of the conference on Computation, Communication, Aesthetics and X.
xCoAx 2013, Bergamo

Organizing committee:
Mario Verdicchio, Jason Reizner, André Rangel, Pedro Tudela & Miguel Carvalhais.
Local organization:
Maria Grazia Castaldo, Giuseppe Cattaneo, Alessandro Pavoni & Cesare Resta.

Proceedings editors:
Mario Verdicchio & Miguel Carvalhais.
With the collaboration of:
Jason Reizner, André Rangel & Pedro Tudela.

Proceedings design and layout:


Mariana Owen & Miguel Carvalhais.
Images:
Lia's Sum05 (http://www.liaworks.com).
Photography:
Pedro Tudela.

Published by:
Universidade do Porto
Praça Gomes Teixeira
4099-002 Porto, Portugal
ISBN: 978-989-746-017-3
ISSN: 2183-9069
September 2013

Volunteers:
Giorgia Bianchi, Carlotta Maironi Da Ponte, Gaia Meris, Emina Mijatovic, Andrea Moretti,
Gloria Mosconi & Maria Cristina Nuti.

Special thanks to:


Mariana Owen, Sam Baron, Isabel Pacheco & Andrea Azzini.

Scientific Committee and Reviewers
Alan Dix, Talis and University of Birmingham
Alessandro Ludovico, Academy of Art Carrara
Alice Eldridge, CRiSAP, University of the Arts, London
Alison Clifford, University of the West of Scotland
Álvaro Barbosa, University of Saint Joseph, Macao, China
Andr Rangel, CITAR / Portuguese Catholic University
Andreas Muxel
Antonio Camurri, University of Genova
Carlos Guedes, ESMAE-IPP
Carlos Sena Caires, Escola das Artes da UCP
Chandler McWilliams
Cretien Van Campen, Netherlands Institute for Social Research
Cristina Sá, Escola das Artes da UCP
Daniel Schorno, STEIM
David Rokeby, Toronto
Diemo Schwarz, IRCAM
Domenico Quaranta, Accademia di Belle Arti di Brera, Milano
Fabio Cleto, Università degli Studi di Bergamo
Francesca Pasquali, Università degli Studi di Bergamo
Golan Levin, Carnegie Mellon University School of Art
Heitor Alvelos, ID+ / University of Porto
Jan Edler, Realities United
Jason Reizner, Faculty of Computer Science and Languages, Anhalt University of Applied
Sciences
João Cordeiro, CITAR / Portuguese Catholic University
Johannes Deich, Fakultät Medien, Bauhaus-Universität Weimar
Jorge Cardoso, CITAR / Portuguese Catholic University
Kasia Glowicka, Royal Conservatory Brussels
Luísa Ribas, ID+ / University of Lisbon
Luís Gustavo Martins, CITAR / Portuguese Catholic University
Luís Sarmento
Mario Verdicchio, Università degli Studi di Bergamo

Martin Kaltenbrunner, Kunstuniversität Linz
Miguel Carvalhais, ID+ / School of Fine Arts, University of Porto
Miguel Leal, i2ADS, School of Fine Arts, University of Porto
Mitchell Whitelaw, Faculty of Arts and Design, University of Canberra
Nathan Wolek, Stetson University
Nina Waisman, Lucas Artists Program Visual Arts Fellow
Nina Wenhart
Paulo Ferreira Lopes, CITAR / UCP / ZKM
Pedro Cardoso, ID+ / Faculdade de Belas Artes, Universidade do Porto
Pedro Patrício, CITAR / Portuguese Catholic University
Pedro Tudela, i2ADS, School of Fine Arts, University of Porto
Penousal Machado, University of Coimbra
Philip Galanter, Texas A&M University
Ricardo Lafuente, Faculdade de Belas Artes, Universidade do Porto
Roxanne Leitão, The Cultural Communication and Computing Research Institute,
Sheffield Hallam University
Rui Torres, Faculty of Human and Social Sciences, University Fernando Pessoa, Porto
Saskia Bakker, Eindhoven University of Technology
Teresa Dillon, Ireland
Thor Magnusson, University of Brighton / ixi audio
Tim Boykett, Times Up
Tim Edler, Realities United

Contents

11 Foreword

15 Papers

17 Audiovisual Dynamics: An approach to Sound and Image Relations in


Digital Interactive Systems
Luísa Ribas

29 Found Data: Generating Natural Looking Shapes by Appropriating


Scientific Data
Andres Wanner & Ruth Beer

39 Geometries of Flight: Remix as Nodal Practice


Monty Adkins & Julio d'Escriván

51 Traversal Hermeneutics: The Emergence of Narrative in Ergodic Media


Miguel Carvalhais

61 Space and Time in Ergodic Works


Sofia Figueiredo

71 Representation and Mimesis in Generative Art: Creating Fifty Sisters


Jon McCormack

81 The Textural X
Alex McLean

89 Are Luminous Devices Helping Musicians to Produce Better Aural


Results, or Just Helping Audiences Not To Get Bored?
Vitor Joaquim & Álvaro Barbosa

107 The Human Fingerprint in Machine Generated Music


Arne Eigenfeldt

117 Formalization Using Organic Systemization in Musical Applications


Jingyin He & Ajay Kapur

129 What Are You Telling Me? How Objects Communicate Through
Dynamic Features
Sara Colombo, Lucia Rampino & Sara Bergamaschi

139 Recursive Digital Fabrication of TransPhenomenal Artifacts


Stephen Barrass

153 Rhythm Apparatus For the Overhead Projector: a Metaphorical Device
Christian Faubel

163 Between Thinking and Actuation in Video Games


Pedro Cardoso & Miguel Carvalhais

173 Photography in Video Games: the Artistic Potential of Virtual Worlds


André Carita

183 The Design of Horacle: Inducing Serendipity on the Web


Ricardo Melo & Miguel Carvalhais

193 Südthüringer-Wald-Institut: Knowledge Sharing for the End of the


World
Jason M. Reizner

203 Making Online Face-to-Face Interaction Easier for Older People with
Constructive Design Research
Marianne Markowski

215 Innovation, Collaboration, Education: Histories and Perspectives on


Living Labs
Gabriella Arrigoni

225 On the Notion of Code Convergence in Vilém Flusser's Work


Rainer Guldin

233 Short Papers

235 Transients: a Transit Visualization


David Bouchard

241 Exploring Open Hardware in the Image Field


Luís Eustáquio, Miguel Carvalhais & Ricardo Lafuente

249 Nevermore: Pretext Machine


Bruno Figueiredo & Susana Lourenço Marques

255 Profilography
Pablo Garcia

263 Heimlichkeit des Berührens: Exploring the Correlation of Perception


and Intimacy
Alexander Müller-Rakow, Oscar Palou Ribó & Michael Pogorzhelskiy

267 Null By Morse: Performing Optical Communication with Smart Phones
Tom Schofield

271 The Lonely Tail


Giselle Stanborough

275 Funkschatten: a Creative Collaboration Experience


Michael Tränkner & Theresa Schnell

283 The Robot Quartet: a Drawing Installation


Andres Wanner

287 Geometries of Flight


Monty Adkins & Julio d'Escriván

289 A Bridge From Nowhere (844)


Alba Francesca Battista

293 Impetus Cascading Chaos


Vilbjørg Broch

297 Improvising With Self-Observing Systems: a Duet For Cellist and


Adaptive Delay Network
Alice Eldridge

301 Drive Mind


Hideyuki Endo & Hideki Yoshioka

305 Decomposing Electric BrainPotentials for Audification on a Matrix of


Speakers
Titus von der Malsburg & Christoph Illing

309 Câmara Neuronal: a Neuro/Visual/Audio Performance


João Martinho Moura, Adolfo Luxúria Canibal,
Miguel Pedro Guimarães & Pedro Branco

313 Keynote

315 Post Digital Publishing, Hybrid and Processual Objects in Print


Alessandro Ludovico

323 Biographies

Foreword
Mario Verdicchio

Welcome to the proceedings of the first edition of Computation, Communication, Aesthetics, and X.
Beginnings are always accompanied by excitement and enthusiasm, which in our
case were further fed by the overwhelming response that we got from researchers and
artists from all over the world shortly after we issued the call for papers and works.
On the other hand, we cannot ignore the little chill that runs down our spines every
time it comes back to us that, when all is so fresh and new, much of it is still unknown
to us: who knows for sure what is going to happen?
The Unknown is indeed a concept traditionally represented with an X, and that is one
of the reasons why our conference features an X in its name.
Still, this is not the only meaning associated with this letter: oftentimes X has been
used to represent a prohibition. When we see a big black X over the symbol of a photo-
camera in a museum, we know what we are not allowed to do. However, we also know
that taking a picture in such circumstances is still a task that we may successfully ac-
complish, provided that we are stealthy enough.
When it comes to Computation, researchers have historically had to deal with much
more than simple prohibition: they were told that some goals were impossible to achieve,
and that such impossibility was intrinsically connected with the computational nature
of the devices they were envisioning. According to the naysayers, it was not a simple
matter of being quick with the shutter of a camera: some things were simply out of reach.
Here is the second meaning of X: Impossibility. Interestingly, many thought and still
think that Communication and Aesthetics are among the impossible endeavors.
Obviously, Communication is not to be meant as the transmission of encoded bits
from one point to another, a computational task par excellence, but as the generation and
conveying of meaning, whether it is with words, images, shapes, or sounds.
The criticisms against computational approaches boil down to two main posi-
tions: computational devices are not conscious, and they are strictly governed by rules.
Assuming that consciousness is necessary for human beings to acquire the meaning of
words through their experiences, any consciousness-less device is then unable to learn

and elaborate meaning, and hence must be considered as a simple symbol-processing
mechanism, like in the famous Chinese Room thought experiment.
Moreover, given the determinism of computational rules, any result obtained by one
of these devices is apparently devoid of any novelty or originality, since every characteris-
tic of the outcome is already established in the program governing the relevant operations.
If computational devices are precluded from meaningful Communication with words,
what about the use of shapes, colors, and sounds? In other words, what about Aesthetics?
The response seems to be the same: without the moto-sensory and mental endowments
that bless human beings with the possibility to enjoy and manipulate such means of
expression, computational devices can do only so much. But how much is exactly that
much?
We must not forget that the aforementioned criticism emerged in the wake of the
birth of Artificial Intelligence in the 1950s with Alan Turing's visionary ideas on the the-
oretical possibility of computers fluently conversing with people or being creative. The
X of impossibility seems to rise when an artificial substitute is envisioned for activities
that have been traditionally considered as typically human, but if Computation is ad-
opted in a different role, not as a substitute, but as an aide, everything seems to proceed
much more smoothly.
Computation has indeed played an increasingly important role in both Communication
and Aesthetics and nobody, not even the harshest critics, can deny the immense contribu-
tion of computers in these fields: other than the already mentioned worldwide telecom-
munication infrastructure through which these very proceedings are distributed, several
products, including the exciting artworks and performances presented in this book, are
created also by means of computational devices.
Here we are at the final and maybe most important meaning of X: it is a Crossroads
where two different worlds meet and complement each other, where the rules govern-
ing the computational devices are not seen as constraints, but as a way to channel the
creative force that inspires human beings, and to organize it into templates that can be

replicated, altered, and evaluated in unprecedented ways, thanks to the speed and pre-
cision of the technology used in these devices.
With computers in the toolbox, next to our pens, paintbrushes, chisels, strings and
so on, we are enabled to explore a much, much wider landscape than ever before, but
without fear of the Unknown, only excitement.
Let us begin.

Papers
Audiovisual Dynamics: An approach to Sound and
Image Relations in Digital Interactive Systems

Luísa Ribas
lribas@fba.ul.pt
ID+/ Faculty of Fine-Arts, University of Lisbon, Portugal

Keywords: Sound-Image Relations, Audiovisuality, Digital, Art, Design, Interaction.

Abstract: This paper outlines an approach to the study of sound and image relations in
digital interactive systems. It starts by addressing these relations and their different con-
ceptions, and then centers its attention on aesthetic artifacts that use software as their
medium and propose interactive experiences articulated through image and sound. It dis-
cusses the principles behind their creative shaping as possibilities inherent to the digital
computational medium, and conceptually frames the nature of sound-image relations
as procedurally enacted dynamic articulations of visual and auditory modes subjected
to interaction. Finally, it focuses on these systems' surface, analyzing distinctive features of their audiovisual dynamics.

1.
Introduction

While much has been written on the multiple histories of sound and image relations, this
study responds to our belief that there is still room and need to resume the topic regarding
its contemporary reinterpretations, in particular concerning practices that explore the
possibilities of software, inviting the audience to interact with dynamic audiovisual con-
figurations. These practices do not necessarily claim the dominant or historical themes
of audiovisuality. Rather, they creatively reshape it within the digital computational me-
dium, demanding renewed concepts and forms of consideration. They place this study
in the intersection of audiovisuality and interactivity, as themes of creative exploration,
and as viewpoints from which to approach its subject matter.
This direction of inquiry was pursued in an exploratory manner, by examining and
articulating complementary perspectives on audiovisuality, its digital computational
nature, and its interactive forms. We traced the evolution of the topic of sound-image
relations towards the contemporary context of digital interactive systems. We then ap-
proach these systems' audiovisual surface as a site for interaction. The specificity of soft-
ware-based audiovisuality is addressed in light of its underlying principles, as creative
possibilities of its medium. As the procedural nature of these systems is highlighted, we
focus on characterizing their dynamics, or the variable, and often indeterminable, nature
of their audiovisual behavior and responses to interaction.
These viewpoints structured the research on which this paper is based (Ribas 2012),
from which we now underline the ideas that emerge as contributions to the understand-
ing and description of sound-image relations in interactive systems.
2. Sound-Image Relations and Interactive Systems

We begin by establishing an open conception of sound and image relations, and what they may encompass, in light of a convergence between artistic forms of expression and media technologies, while also considering the perceptual and receptive implications of this evolution. Their foundations and models range from sensory, structural or conceptual analogies, to the coupling, transformation, or direct manipulation of sound and image through technological means, which points towards the process-based and interactive nature of contemporary forms of audiovisuality (Ribas 2012, 31-79).¹
In the contemporary context, rather than confining our view to a specific typology or genre of interactive systems, we chose to encompass a diversity of aesthetic artifacts. They are defined as software-driven or computational systems, whose surface (outputs and interfaces) is audiovisual, and whose interactions specifically include the audience (as user). Surfaces are "the faces that works turn to their audiences [...] as a result of their implemented processes working with their data", whose structures, as algorithms carried out by computers, are often unavailable to the user (Wardrip-Fruin 2006, 216).
We consider the works' processes, or the procedures that structure their behavior, from the point of view of the user's phenomenology, while taking into account this conceptual reality of the work and the principles that drive its creation. We then focus on the audiovisual surface they make available for interpretation and interaction.

1. We trace this history back to Edison's machines and Wagner's aesthetic ideal of synthesis that inspired both an operatic simultaneity and a parallelism between the musical and the visual arts. While these analogies moved towards a transfer of structural methods of creative production, the simultaneous inscription of sound and image in the film medium yields their coupling (synchronization and montage) as well as new possibilities for synthesis and transformation. Two tendencies then emerge on a conceptual and technical basis: exploring film as a perception device, and the analog electronic unicity of sound and image, paving the way for interaction. We then focus on two intersecting topics: software-driven audiovisuality and interactivity (Ribas 2012, 31-79). In its contemporary manifestations, audiovisuality becomes ubiquitous and multifarious as the ideal of synthesis finds a counterpart in media technologies as a digital fusion of sound and image (Daniels and Naumann 2010, 8; Znouda 2006, 174).

3.
Audiovisual Surface and Interaction

In order to study the ways in which interaction reshapes audio-vision (Chion 1994), we
address this perceptual mode of reception and the cross-modal mechanisms that consti-
tute its foundations. We can then distinguish perceptual phenomena from audio-visual
objects of perception that eventually promote the binding and synchresis (perceptual
synthesis) of associated stimuli.2 2. Audio-visual forms often follow
design strategies that try to em-
Devised with the aid of technological means, these artificially constructed relations
ulate, or play with, our basic
correspond to different methods and concepts, for linking the visual and auditory, or for mechanisms of cross-modal
processing and integration of
correlating them to other (often intangible) realms. Sound and image become abstract
different sensory modalities
manifestations of their synchronic and diachronic relation or correlation. (Whitelaw 2008b). These relate
to cross-modal interactions as
well as to analogies we form
3.1.
Interaction: New Roles of Sound and Image upon amodal dimensions or
qualities, which, in contrast
Interaction reshapes audio-vision, through an active (sensorimotor) implication of the
to the interpersonal variance
user, involving the haptic capture of the visual and auditory modalities, as a form of per- of synesthesia, are common
phenomena of human percep-
ception that arises from action (Mangen 2006, 410). Interaction implies that both entities
tion (Shimojo and Shams 2001;
are able to act and influence each other. The system may incorporate human activity Daurer 2010).

into the way images and sounds are presented, and thus perform differently (Candy and
Edmonds 2002, 2002). The user is no longer dealing with a self-contained audiovisual ob-
ject, but rather with processes and events that are brought into existence, as dynamic
outputs of real-time computations (Hayles 2006, 181).
Consequently, and beyond the intrinsic value of audio and visual elements or the added value effects of their combination as cinematic manifestations, the audiovisual analysis turns towards the new roles that sound and image assume as means and as products of interaction.

3.2.
Strategies of Articulation
In this context, their relations can also be considered at different levels, as they are speci-
fied within the system (as mappings between data), or as surface configurations of visual
and auditory modes that the user actually accesses and interacts with. We can therefore
approach sound-image relations by distinguishing interfaces, the user actions they pro-
mote, and their possible outcomes, as suggested by Levin (2010) or Kwastek (2010). By doing
so, rather than defining relations, we are describing different strategies of sound-image
articulation, according to the operative and productive possibilities of each system.

3.3.
Interactivity and Performativity
In order to circumscribe the scope of interactive systems we can use the notion of perfor-
mativity to address works that explore how a feedback loop can be established between
the system and its user(s) allowing them to explore the possibility-space of an open
work, and thereby to discover their own potential as actors (Levin 2010, 271).³ We can also view these artifacts as apparatuses (comparable but different from instruments) whose functionality as production devices is potentially unique and novel to the user, thus inciting creative exploration (Kwastek 2011, 157).

3. This notion highlights the performative dimension of the experience of a work, as jouable (playable), as performed by its spectators (Boissier 2004).

However, this view emphasizes an instrumental nature, to which the interactive
systems considered do not necessarily correspond. This entails examining alternative
strategies of sound-image articulation, as well as other possibilities or principles that
govern their creation.

4.Principles and Medium

In order to further scrutinize the audiovisual surface, we provide an alternative perspec-


tive by resorting to the principles that, according to Levin (2010), motivate the develop-
ment of software artworks that are concerned with (or articulated through) relationships
between sound and image. They comprise sound and music visualization, the transmut-
ability of digital data, generative autonomy and interactive performativity.

4.1.
Visualization, Sonification and Transmutability
While the common traits of sound or music visualization or notation practices are the development of expressive visual languages in relation to sound,⁴ or the aim to provide insight into the structure of a signal or composition (Levin 2010, 272), the concept of visualization encompasses a multiplicity of methods and aesthetic strategies.⁵ In this sense, sonification is its parallel, as the use of acoustic means to convey information or concepts, often used as an alternative or supplement to visualization. It is used artistically, as an aesthetic concept and method, namely as a means to make the environment audible (Grond and Schubert-Minski 2010, 284).
The principle of transmutability relies on the premise that any kind of input data can be algorithmically visualized or sonified. While mostly used as a means to an end, in enabling some real-world data signal or data stream of interest to be understood, experienced, or made perceptible in a new way, it can also be an end in itself, as the starting point for a conceptual transformation and/or aesthetic experience (Levin 2010, 274). This highlights the inherent translatability of data as raw material that transmutes into any chosen visual or auditory form (Whitelaw 2008a, 45-54).

4. Which display either time-based representations of perceptual phenomena, like pitch, loudness, and other relatively instantaneous auditory features (Levin 2010).
5. Moreover, it can be extended to visualizations of the human voice or other user-produced sounds, as well as to an algorithmically defined connection between sound and image, entailing their simultaneous generation or submission to similar parameters.

4.2.Performativity and Generativity


The notion of performativity concerns systems that entail the mapping of human data
or human performances to images and sounds, as open works or meta-artworks
which are only experienced properly when used interactively to produce sound and/or im-
agery (Levin 2010, 275). They emphasize an interactive performativity as subject matter,
rather than interaction as a mere possibility or attribute of a system.
In turn, the principle of generativity refers to the potential autonomy of a system
to produce animations and/or sound from its own intrinsic rule-sets (Levin 2010, 277). It
draws attention to the rules of creation of the work, as artistic constraints (Bootz 2005);
as recipes for autonomous processes (Galanter 2006) that develop in time, in a self-or-
ganizing manner, potentially leading to unforeseeable results, which are not completely predictable by either artists or users (Boden and Edmonds 2009, 24).⁶ What becomes relevant, then, is how this generative autonomy is manifested and may be perceived by the audience.

6. The work occurs while running as a unique performance whose rules of creation, or procedural logic, can only be grasped through careful observation and interaction.

These principles draw attention to the specificity of software-driven systems and to
their heterogeneity as aesthetic artifacts that explore distinct possibilities of their me-
dium. They correspond to different ways of exploring the mapping of a given input data
or source information into visual and auditory form, and to the possibility of devising
dynamic audiovisual behaviors and responses to interaction. As such, we can extend their
discussion to other notions that are used to address these creative possibilities, and to
define themes or aesthetic qualities of these systems.

5.
Possibilities and Qualities

The artifacts considered in this study use computers not only as storage and transmis-
sion media but require computation in order to be themselves, during the time of their
experience. They are computationally variable works in which processes are defined in
a manner that varies the work's behavior (randomly or otherwise), either without input from outside the work's material, with input from external data or processes, or with human input; the latter meaning audience interactive (Wardrip-Fruin 2006, 389-99).
These factors of variation again highlight the creative possibilities of a medium, where
data and process are the major site of authoring (Wardrip-Fruin 2006, 381). In fact, the
principles mentioned correspond to a rephrasing of aesthetic possibilities that, accord-
ing to Levin, stress the self-referential nature of computational works that address as
their subject matter the structures, materials and processes by which they are created,
namely: interactivity; processuality; generativity; transmediality (Levin 2003; 2007).⁷
According to this, transmediality is linked to audiovisuality, multimodality and thus to transmutability, which stresses the inherent polymorphism of digital data. While these terms accent the translation processes performed on non-process elements of the work (data and its audiovisual forms), the principles of generativity and interactivity bring to the fore the processes, as operations carried out by the work (defining the surface and supporting interaction).

7. The author also mentions connectivity and dynamism, adding that naturally, these are not the only principles, but they outline aspects that really have much more to do with features of the medium and how it operates in relation to people (Levin 2003; 2007).

5.1.
Processuality and Performativity
Processuality concerns the algorithmically structured operations carried out by a proce-
dural system (that computationally executes rules), potentially leading to variable out-
comes. As Jaschko (2010, 130) asserts, process is a central aesthetic paradigm of genera-
tive and interactive artworks, since live processes generate unique configurations and
dynamics, performed either by the system, or by system and user. Process then refers to the time-based evolution of sequences of events as results of ongoing computations, that conflates with performativity as a term designating both the quality of a technological artifact in operation (an execution) and the live dimension of a presentation (Broeckmann 2005).⁸ Hence, the expression and experience of these works is shaped by their modes of liveness (temporal simultaneity) and presence (spatial co-attendance), together with their visual and auditory realization (Kwastek 2009, 93).

8. As Broeckmann (2005) argues, processuality is one of the essential aesthetic qualities of electronic and digital artworks, whose aesthetic experience hinges, to a large extent, on non-visual aspects or machinic qualities manifested at the level of movements, of processes, of dynamics, of change.

5.2.
Surface vs. Procedural Expression
Implied in these notions is the idea that beyond the retinal beauty of audiovisual sen-
sory perceivable results (Jaschko 2005), the iconographic level (Broeckmann 2005) or
beyond the rhetoric of the surface (Bootz 2005), digital computational works entail a
conceptual level tied to the cognitive recognition of the formal processes they carry out
(cf. Jaschko 2005; Whitelaw 2010, 158). This emphasizes the procedurality that Murray or
Bogost characterize as the principal value of the computer in relation to other media,
or its defining ability to execute rules that model the way things behave (Murray 1997,
71). We then move towards an aesthetic level that is tied to their procedural rhetoric or the practice of using processes expressively (Bogost 2008, 122-24).
Therefore, an analysis of the audiovisual surface cannot be limited to its sensorial qualities of expression, but must include the expressive qualities of the procedures that govern its behavior. In other words, these works' content is their behavior and not merely the output that streams out (Hunicke, LeBlanc and Zubek 2004, 1).

5.3.
Dynamics of the Work-as-System
These notions highlight the subordination of audiovisuality to procedurality, and ulti-
mately, how sound and image, as aesthetic materials, are subsumed by the processual and per-
formative aesthetic qualities of works that occur while running, as processes performed
in real-time, with the participation of the audience. This provides the conceptual ground
for our approach.
On one level, what is emphasized is the possibility to create behavior, whether
autonomous, reactive or interactive. In this sense, we address artifacts whose subject
matter is not necessarily tied to relations between the visual and auditory. However,
by exploring the possibilities of the medium, they propose potentially unique, dynamic
configurations of images and sounds. Our attention indirectly diverges from practices
concerned with the mapping or translation of any kind of information or content into
visual and/or auditory form, as we shift the focus towards systems where sound and
image are the tangible expression and consequence of a dynamic process (emphasizing
processuality and interactivity).
On another level, what becomes defined as the distinctive quality of these systems is the dynamics of their behavior.⁹ In contrast to other time-based forms of audiovisuality, they not only have a transient, but also a variable nature, that entails the temporal simultaneity and spatial co-attendance of the user. Liveness, immediacy and presence become characteristic aspects of the experience of these process-based and participatory forms of audiovisuality (Jaschko 2010).
Consequently, our study is then dedicated to characterizing the observable dynamics of the work-as-process (as an activity performed in time), and of the work-as-system (that includes the user).

9. The notion of dynamics refers to the observable run-time behavior of the work-as-system, as part of a framework proposed by LeBlanc for understanding computational systems where the interaction between coded subsystems creates complex, dynamic (and often unpredictable) behavior. Mechanics, Dynamics and Aesthetics are causally linked levels of the work, as aesthetics is born out in observable dynamics and, eventually, operable mechanics, or the underlying rules that formally specify the work at the level of data representation and algorithms (Hunicke, LeBlanc and Zubek 2004).

6. Perspectives on Audiovisual Interactive Systems

Drawing on the previous views on the audiovisual surface, the principles behind its creative shaping, and the qualities of these systems' behavior, we propose an approach to
audiovisual interactive systems that articulates different viewpoints: it considers their
heterogeneity as aesthetic artifacts, and addresses both their audiovisual and interactive
dimensions under the perspective of the dynamics that defines their experience. Having
applied these perspectives to four case-studies, while also relating their characteristics to those of other systems (Ribas 2012, 271-319), we now summarize its main points. In
order to contrast different audiovisual configurations as well as contexts and possibili-
ties for interaction we chose two online works and two installations: Antoine Schmitt's
Worldensemble (2002), Peter Luining's 360 rotatable (2003), Manual Input Workstation
(2004) by Levin & Lieberman (Tmema) and Se Mi Sei Vicino (2006) by Sonia Cillari.

6.1.
Systems as Aesthetic Artifacts
We begin by contextualizing their themes and principles according to their self-referential
nature as works that are prospective in exploring the possibilities of software, with dif-
ferent aesthetic intents. These artifacts are considered abstract, or non-representational,
since the audiovisual surface is a product of the works' operations and interactions. Sound
and image, in their dynamic articulations, express the subject matter of these works, be
it their potential autonomy (as endless audiovisual rhythms), reactivity to human ac-
tions (as audiovisual abstractions of interaction) or even as translations or expressions
of specific aspects (e.g. gestural expression or proxemic relations) of human participation.

6.2.
Audiovisual Dynamics and Interaction
We then describe their audiovisual surface behavior addressing the nature of its elements
(predefined or generated), the ways they appear associated (correlated or responding to
different factors), and related to user actions or input. We therefore approach increasingly
complex articulations between human input and audiovisual outputs, as well as custom
interfaces and physical forms of interaction. As the behavior of these systems may be tied
to different factors, a perspective on interaction is not solely focused on action-reaction
patterns, but on the overall variable behavior of the work, in each occurrence and in
response to interaction.
6.2.1.
Interaction and Agency
In order to develop this analysis, we revisit the notion of interaction according to the
roles of user and system as agents determining the audiovisual outcomes. Rather than
focusing on instrumental distinctions such as types, degrees or levels of interaction, we
aim at characterizing the aesthetic processes encouraged by interactive works (Kwastek
2008, 22). To this end, it becomes useful to consider the aesthetic pleasure of agency, as
proposed by Murray (1997), which depends on the ways our actions are aligned with tan-
gible effects. Agency is linked to the possibility to access different spaces, as a pattern
of exploration and discovery, and to the constructive role the users may assume when
they can build in some way the very content of the work.
We discuss the ways in which the user may explore or configure the audiovisual sur-
face, resorting to derivations of Aarseth's (1997) user functions. Nonetheless, they do not
necessarily correspond to an alignment between action and effect. The users may not
realize that they are affecting the artwork, nor (if they do) just what behavior leads to just
which changes (Boden and Edmonds 2009, 35),¹⁰ since there may be additional factors of influence, other than those explicitly related to user input or actions.
An alternative way of putting this is considering that agency, rather than pertaining to the user, is attributed to the system, in the very sense that Murray ascribes to it: taking meaningful action leading to observable results. Just as a human being has the capacity to sense its environment, operate on it, and make decisions, a system can be imbued with these properties. Agency can be understood as the property of an autonomous entity that is its capacity to act in or upon the world (Jones 2011). Interaction becomes a means of testing the behavior of systems that potentially run autonomously, in a self-organizing, and often unpredictable, manner.

10. These effects may be partial or divided between sound and image, ephemeral, not clearly perceptible or even not perceptible at all.
6.2.2.Surface Dynamics and Determinability
Having examined the variable behavior of these systems as governed by different factors
we describe their surface dynamics in terms of changes in the number, arrangement
or creation of surface instances over time. The work's behavior is also characterized by
its determinability, or the degree to which it operates predictably in the production of
surface elements or configurations, in each occurrence, and in response to interaction.
However, the audio and visual dimensions may not necessarily assume a correlated be-
havior, and the same applies to its determinability. The latter also leaves open what can
be considered an exact repetition of the same experience, thus questioning the degree to
which one can grasp, or control, the factors that define the precise configuration of the
audiovisual outputs (Ribas 2012, 247-65).

6.3.
Discussion
This description goes beyond the previous view on sound and image as means and prod-
ucts of interaction, and on their relations as mapping to user input, in revealing how
each of the artifacts considered devises a specific way of governing the behavior or of
generating visual and auditory elements, and, in this process, includes (or even depends)
on the user. So rather than aiming at generalizations of their sound-image relations (as
data mappings), we seek to underline distinctive features of their dynamics. We empha-
size how sound and image acquire meaning through action, as the products of processes
(performed by the system, with the participation of the user).
This approach also reveals how interaction entails different forms of engagement
with the work as a means of exploring its (variable) behavior or its productive possibilities,
or as a form of influencing, or of defining, its audiovisual outcomes.

7.
Conclusion

This study addressed a topic of audiovisuality that is reshaped in reference to its medium.
But rather than resolving this topic, it provides a point of departure for further investi-
gating dynamic interactive audiovisuality. Namely, we envisage the study of a wider set
of artifacts in order to refine an analysis of the characteristics of their behavior. While
we have focused on describing the works' dynamics, future research also contemplates
how the audience experiences its features, namely through structured observations of the
interaction process. In particular, we can further examine its determinability (in relation
to each modality), and the degree to which it is perceived by the user as a significant
aspect of the experience of the work.
We approached a segment of contemporary practices that, in their diversity, often
move ahead of theory. They reshape the very conception of sound-image relations beyond
its dominant themes or approaches. Acknowledging this variance, this work responds
to its demands, by conceptually framing the nature of these sound-image relations, as
procedurally enacted dynamic articulations of visual and auditory modes, subjected to

interaction. In this manner, it provides a direction for researching the constant creative
reformulations of this topic. One that embraces the diversified nature of audiovisual
systems as aesthetic artifacts, their principles, and themes, and what they propose as
interactive experiences. It respects this diversity by describing sound and image, and their
relations, according to the distinctive dynamics of these systems, or the variable (and
often indeterminable) behavior, that defines their meaning and experience.

Acknowledgements: The research had the financial aid of the Foundation for Science and
Technology (SFRH / BD / 42260 / 2007). We are indebted to Heitor Alvelos and Emílio Vilar
for their guidance, and to Miguel Carvalhais for his invaluable insight.

References

Aarseth, Espen J. Cybertext: Perspectives on Ergodic Literature. Baltimore: The Johns Hopkins University Press, 1997.
Boden, Margaret, and Ernest Edmonds. "What Is Generative Art?" Digital Creativity 20, no. 1-2 (2009): 21-46.
Bogost, Ian. "The Rhetoric of Video Games." In The Ecology of Games: Connecting Youth, Games, and Learning, edited by Salen, Katie. Digital Media and Learning, 117-40. Cambridge, Massachusetts: MIT Press, 2008.
Boissier, Jean Louis. "Jouable." In Jouable: Art, Jeu et Interactivité, edited by Boissier, Jean Louis, Patrick Raynaud and Victor Durschei, 15-20. Genève: HEAD; Paris: ENSAD; Saint-Denis: Université Paris 8; Saint-Gervais Genève: Centre pour l'image, 2004.
Bootz, Philippe. "The Problematic of Form. Transitoire Observable, a Laboratory for Emergent Programmed Art." Dichtung-digital 1 (2005). http://www.dichtung-digital.com/2005/1/Bootz.
Broeckmann, Andreas. "Image, Process, Performance, Machine. Aspects of a Machinic Aesthetics." In Refresh! 2005. International conference on the histories of media art, science and technology. Canada: Media Art Histories Archive, 2005.
Candy, Linda, and Ernest Edmonds. "Interaction in Art and Technology." Crossings: eJournal of Art and Technology 2, no. 1 (2002).
Chion, Michel. Audio-Vision: Sound on Screen. Translated by Gorbman, Claudia. New York: Columbia University Press, 1994. 1990.
Daniels, Dieter, and Sandra Naumann. "Introduction." In Audiovisuology: Compendium, edited by Daniels, Dieter and Sandra Naumann, 5-16. Cologne: Verlag der Buchhandlung Walther König, 2010.
Daurer, Gerard. "Audiovisual Perception." In Audiovisuology: Compendium, edited by Daniels, Dieter and Sandra Naumann, 329-47. Cologne: Verlag der Buchhandlung Walther König, 2010.
Galanter, Philip. "Generative Art and Rules-Based Art." Vague Terrain 3, no. Generative Art (2006).
Grond, Florian, and Theresa Schubert-Minski. "Sonification: Scientific Method and Artistic Practice." In Audiovisuology: Compendium, edited by Daniels, Dieter and Sandra Naumann, 285-95. Cologne: Verlag der Buchhandlung Walther König, 2010.
Hayles, Katherine. "The Time of Digital Poetry: From Object to Event." In New Media Poetics: Contexts, Technotexts, and Theories, edited by Morris, Adalaide and Thomas Swiss, 181-209. Cambridge, Massachusetts: MIT Press, 2006.
Hunicke, Robin, Marc LeBlanc, and Robert Zubek. "MDA: A Formal Approach to Game Design and Game Research." In Proceedings of the Challenges in Games AI Workshop, Nineteenth National Conference of Artificial Intelligence, 1-5. San Jose, California: AAAI Press, 2004.
Jaschko, Susanne. "Process as Aesthetic Paradigm: A Nonlinear Observation of Generative Art." Presented at Generator.X Conference on Generative Art. Oslo, Norway: Atelier Nord, 2005. http://www.sujaschko.de/downloads/170/generatortalk.
———. "Performativity in Art and the Production of Presence." In Process as Paradigm: Art in Development, Flux and Change, edited by Corral, Ana B. D., 130-35. Gijón: Laboral Centro de Arte y Creación Industrial, 2010.
Jones, Stephen. "Towards a Taxonomy of Interactivity." In International Symposium on Electronic Arts. Istanbul, 2011.
Kwastek, Katja. "Interactivity - A Word in Process." In The Art and Science of Interface and Interaction Design, edited by Sommerer, Christa, Laurent Mignonneau and Lakhmi C. Jain. Studies in Computational Intelligence, 15-26. Berlin: Springer Verlag, 2008.
———. "Your Number Is 96 - Please Be Patient: Modes of Liveness and Presence Investigated through the Lens of Interactive Artworks." In Re:live09 Media Art Histories, edited by Cubitt, Sean and Paul Thomas, 89-94. Melbourne, Australia: University of Melbourne & Victorian College of the Arts and Music, 2009.
———. "Sound-Image Relations in Interactive Art." In Audiovisuology: Compendium, edited by Daniels, Dieter and Sandra Naumann, 163-75. Cologne: Verlag der Buchhandlung Walther König, 2010.
———. "Audiovisual Interactive Art: From the Artwork to the Device and Back." In Audiovisuology 2: Essays, edited by Daniels, Dieter, Sandra Naumann and Jan Thoben. See This Sound, 148-71. Cologne: Verlag der Buchhandlung Walther König, 2011.
Levin, Golan. "Essay for Creative Code." Flong (2003). http://www.flong.com/texts/essays/essay_creative_code/.
———. "Within, Without: New Media and the White Cube." Interview by Alexandra Nemerov. CUREJ: College Undergraduate Research Electronic Journal (2007). http://repository.upenn.edu/curej/71.
———. "Audiovisual Software Art." In Audiovisuology: Compendium, edited by Daniels, Dieter and Sandra Naumann, 271-83. Cologne: Verlag der Buchhandlung Walther König, 2010.
Mangen, Anne. "New Narrative Pleasures? A Cognitive-Phenomenological Study of the Experience of Reading Digital Narrative Fictions." PhD, Norwegian University of Science and Technology, Faculty of Arts, 2006.
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, Massachusetts: The MIT Press, 1997.
Ribas, Luísa. "The Nature of Sound-Image Relations in Digital Interactive Systems." PhD, University of Porto, 2012.
Shimojo, Shinsuke, and Ladan Shams. "Sensory Modalities Are Not Separate Modalities: Plasticity and Interactions." Current Opinion in Neurobiology 11, no. 4 (2001): 505-09.
Wardrip-Fruin, Noah. "Expressive Processing: On Process-Intensive Literature and Digital Media." PhD, Brown University, 2006.
Whitelaw, Mitchell. "Art against Information: Case Studies in Data Practice." The Fibreculture Journal, no. 11: Digital Arts and Culture Conference (2008a).
———. "Synesthesia and Cross-Modality in Contemporary Audiovisuals." Senses and Society 3, no. 3 (2008b): 259-76.
———. "Space Filling and Self-Constraint: Critical Case Studies in Generative Design." Architectural Theory Review 15, no. 2 (2010): 157-65.
Znouda, Hervé. "Images et Sons dans les Hypermédias: de la Correspondance à la Fusion." Thèse de doctorat, Université Paris XIII, 2006.

Found Data: Generating Natural Looking Shapes by
Appropriating Scientific Data

Andres Wanner
andres_wanner@sfu.ca
Simon Fraser University, Vancouver, Canada

Ruth Beer
rbeer@ecuad.ca
Emily Carr University of Art and Design, Vancouver, Canada

Keywords: Installation Art, Generative Art, Interactive Art, Data Visualization, Public Data,
Appropriation, Oceanography, Sustainability.

Abstract: The installation Breathe/Live/Speak utilizes oceanic data to generate an organic distribution of screen elements.
This paper describes the installation as part of the Catch and Release research/creation project. We introduce our approach of Found Data, derived from artistic practices of Found
Object and Readymade, as an alternative to the widely used Perlin Noise for generating
natural looking shapes.
The approach is demonstrated in detail, and some examples are presented. We outline
how these are implemented in the installation, and conclude by arguing for the relevance
of this method in a time of increasingly available data.

1.
Introduction

Practitioners in Generative Art have access to a wide range of techniques and methods
for generating organic, natural looking shapes. In The Nature Of Code, Shiffman presents
Cellular Automata, Koch-Curve Fractals, L-systems and Genetic Algorithms as a variety of
methods for representing nature computationally (Shiffman). Our approach is different:
we represent nature by directly using data from nature.
Manovich has suggested an analogy between visual arts and information visualization,
and compares the choice of data with an artist's selection of a visual motif: "Figurative artists express their opinions about the world by choosing what they paint … Now artists can also talk about our world by choosing which data to visualize." (Manovich, 13)
Our aim in this paper lies in presenting our Found Data approach for producing natural
looking patterns. We refrain from offering a metric or criterion to evaluate how natural
a pattern looks, and leave this to the reader to assess. Instead we will discuss a thematic
motivation for using a non-computer-generated randomness, and present the results
we achieved.

Fig 1. A Found Data plot with forms based on scientific data.

2.
Catch and Release and the installation Breathe/Live/Speak

Catch and Release: Mapping geographic and cultural transitions is a research/creation


project with the goal of raising awareness about current issues of cultural and environ-
mental sustainability. This government-funded 3-year initiative acts as an umbrella for
interdisciplinary art projects: interactive storyscapes that engage viewers with these
issues through the immersive experience mediated by multimedia installations.
The interactive installation Breathe/Live/Speak is one of these projects: a dynamic composition of oceanic elements on projection screens. These elements (plankton organisms, bubbles, and typographic content) represent oceanic life; their motions
suggest particles floating in underwater currents. The installation visualizes numerical
empirical data from the NEPTUNE Canada regional cabled ocean network, which gathers
live data from undersea environments in the Northeastern Pacific. An online platform
makes these marine data publicly available; users find data on oxygen concentration,
salinity, temperature, current, and other variables (Neptune).
By referencing the oceanic context, and through the subtle interactivity with the
organisms and other screen-elements, the installation aims at raising awareness and
reminding viewers of their impact on a fragile oceanic ecosphere.

3.
Context

3.1.
Situation between Generative Art and Data Visualization
Our work implements aspects of both generative art and data visualization. We borrow
the autonomous generative process from generative art (Galanter), and the emphasis
on representation of abstract data from data visualization. However, in contrast to the
clear and effective communication Friedman demands from visualizations (Friedman),
as artists we want to leave room for ambiguity and interpretation. We are not concerned
with scientific semantics of our data, but will use it to generate aesthetic forms, in agree-
ment with Manovich:

The intent of these projects is not to reveal patterns or structures in data sets but
to use information visualization as a technique to produce something aestheti-
cally interesting. (Manovich, 13)

3.2.
Perlin Noise
Shiffman observed that "In a computer graphics system, it's often easiest to seed a system with randomness," and simultaneously pointed out problematics of this ap-
proach (Shiffman, 7). While different strategies of randomness and noise are prevalent in
Generative Art, one approach is that of Perlin Noise, originally developed by Ken Perlin for
textures in computer-generated animations. It is suited and widely used for generating
unpredictable and natural looking patterns featuring the subtle irregularities of real
objects (Perlin, 12). Thus it would be a common approach for our purpose of generating
a natural distribution of screen-elements.

Fig 2a. The random-function: function graph and 2D pattern of grey values.

Fig 2b. The Perlin-Noise function: function graph and 2D pattern of grey values.

In contrast to a standard random function, Perlin Noise is coherent, i.e. two neighbour
points will have a similar noise value. Perlin specified "that all the apparently random variations be the same size and roughly isotropic": they will look similar in all directions and positions (Perlin, 5).
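As a rough illustration of this difference (our own sketch, not code from the installation), a plain random sequence can be compared with a simple smoothed value noise, used here as a simplified stand-in for Perlin's gradient noise; the function name and grid parameters are our own:

```python
import numpy as np
import matplotlib.pyplot as plt

def value_noise(n_points, grid_step=20, seed=1):
    """Simplified 1D value noise: random values on a coarse lattice,
    smoothly interpolated in between (a stand-in for Perlin noise,
    not Ken Perlin's original algorithm)."""
    rng = np.random.default_rng(seed)
    lattice = rng.random(n_points // grid_step + 2)     # coarse random values
    x = np.arange(n_points) / grid_step
    i = x.astype(int)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)                          # smoothstep easing
    return lattice[i] * (1.0 - t) + lattice[i + 1] * t   # interpolate neighbours

n = 500
rng = np.random.default_rng(0)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 4))
ax1.plot(rng.random(n))
ax1.set_title("random(): incoherent values")
ax2.plot(value_noise(n))
ax2.set_title("smoothed value noise: coherent, Perlin-like values")
plt.tight_layout()
plt.show()
```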

3.3.
Found Object and Found Data
Early twentieth century artists have introduced the Found Object into art history, an everyday object that obtains its status as a work of art through the selection and introduction
into a new context. More particularly, with his Readymades, Marcel Duchamp

took an ordinary article of life, placed it so that its useful significance disap-
peared under the new title and point of view, and created a new thought for
that object. (Duchamp)

Discussing the implications on authorship, Irvin suggests that

Holding the artist responsible for a work means, in part, holding the artist re-
sponsible for having released it into a context where particular interpretative
conventions and knowledge are operative. (Irvin)

The conventions of the new context are as important as the actual fabrication of the
artifact.
In an analogy to a Found Object, we would like to suggest the term Found Data, in
which data is released into a context, and a new thought for that data is created. We
present the use of Found Data as an alternative to implementing Perlin Noise.
We appropriate data for their formal qualities, and deliberately ignore their scientif-
ic denotations. The installation thus re-contextualizes scientific data and uses it for the
creation of natural looking distributions of elements.
Representing physical real-world quantities, the number sequences in Found Data are
mostly continuous and unpredictable, as Perlin Noise is, but variations may not be the same size and isotropic. As a shared objective, however, we hope to produce natural looking patterns.

3.4.
Our Motivations for using Found Data
The Catch and Release project intends to raise awareness about oceanic life. Our aim is
to generate irregular patterns, which give the impression of elements being exposed to
oceanic currents and turbulences. The patterns have to be capable of engaging the view-
er's capacity for seeing meaningful patterns in random data, and thus of opening an
interpretive space for imagination.
The use of technical random numbers to raise awareness for nature seems to be con-
tradictory rhetoric. Shiffman is not the only one to observe that

Defaulting to randomness is not a particularly thoughtful solution to a design problem – in particular, the kind of problem that involves creating an organic
or natural-looking simulation. (Shiffman, 7)

Randomness may produce counterproductive connotations of human non-involvement and technological arbitrariness. For this reason, we prefer a non-computer-driven positioning algorithm that bears a relation with the thematic concern of our installation: awareness for the ocean. By using the ocean as a data source, we provide a self-ref-
erential dimension to the work, and align the form with the content.

4.Research: Use of Data

4.1.Method: Drawing scatter plots based on Found Data


In this section, we are going to discuss in detail how the data in the Breathe/Live/Speak installation is used to generate distributed positions of screen elements. Technically, our method consists of generating scatter plots of two variables. Such scatter plots are used to view and analyze a correlation between two variables. Strongly correlated variables will result in a diagonal linear distribution, whereas uncorrelated variables will not show a diagonal pattern and produce a distribution spreading over a wide range of the graph area.
For distributions with interesting and surprising shapes to arise, we look for a pair of uncorrelated variables to be plotted against each other. We choose measurements from different locations and times, to minimize interdependencies.

Fig 3. Scatter plot of two uncorrelated variables: oxygen at Barkley Canyon (Aug 13 2012, 15:15) vs. salinity at Folger Deep (Aug 27 2012, 17:05).

Figure 3 illustrates this method in detail. Data of oxygen concentration in Barkley Canyon is plotted against salinity from the Folger Deep sensor station. These locations are about 100 km from each other, the measurements are separated by a 14-day period, and they are taken at a slightly different time of day; therefore we can hope they are not strongly correlated. The plot displays 2000 data points, taken in 1-minute intervals. Values of both datasets fluctuate in intervals between 10 minutes to several hours. This is long enough to generate surprising patterns, but not too long to result in a completely uniform distribution.
The graph bears minimal scientific meaning (if any at all); however, we begin to see patterns that we think have the potential to be considered natural looking.
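As an illustration of this plotting step (a minimal sketch, not the installation's actual code), the following Python snippet uses NumPy and matplotlib; the file names are hypothetical stand-ins for one-column exports of the two sensor series, one value per minute:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical one-column exports of the two sensor series (one value per minute).
oxygen = np.loadtxt("barkley_canyon_oxygen.csv")    # horizontal variable
salinity = np.loadtxt("folger_deep_salinity.csv")   # vertical variable

n = 2000                                            # number of plotted points, as in Figure 3
x, y = oxygen[:n], salinity[:n]

# One point per paired measurement; the shape of the cloud is the pattern.
plt.figure(figsize=(6, 6))
plt.scatter(x, y, s=4, c="black")
plt.xlabel("Oxygen, Barkley Canyon")
plt.ylabel("Salinity, Folger Deep")
plt.show()
```

The whole method is contained in the pairing: any two series of equal sampling rate can be fed to the same few lines, which is what makes the choice of data, rather than the code, the creative decision.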

4.2.Requirements on the data


We impose minimal requirements on the data. We ask that data is continuous, fluctuating in intervals of the order of 100 data points, and that the two datasets are not strongly correlated. In this section we show some counterexamples of data that are not suited for this method:

Fig 4a. Horizontal data is not continuous, but in discrete steps. Fig 4b. Horizontal and vertical data are correlated. Fig 4c. Vertical data is not continuous.

We demonstrate three cases with data that are not suited for our method. In figure 4a, horizontal data is not continuous, but in discrete steps. These lead to regular gaps in the resulting pattern, which we want to exclude for aesthetic reasons.
In figure 4b, data series are taken from the same time and location – salinity and temperature from the Folger T station. The plot approximates a diagonally ascending line that we would expect from a scatter plot of partially correlated variables.
In figure 4c, the values of the vertical variable – the focus of an underwater video camera – are discontinuous and differ heavily between subsequent measurements. The data points fill the entire space. Their distribution is mostly uniform apart from clustering vertically around an average value.
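One way to screen candidate pairs against these requirements is sketched below. This is our own illustration rather than part of the installation, and the thresholds are arbitrary: it flags strongly correlated pairs via the Pearson coefficient, and step-like series via the number of distinct values.

```python
import numpy as np

def suitable_pair(x, y, max_abs_corr=0.5, min_distinct=50):
    """Heuristic screen for a candidate pair of data series.

    Rejects pairs whose Pearson correlation is strong (they would collapse
    into a diagonal line, as in figure 4b) and series that take only a few
    discrete values (they would produce regular gaps, as in figure 4a).
    Thresholds are illustrative, not calibrated.
    """
    n = min(len(x), len(y))
    x = np.asarray(x[:n], dtype=float)
    y = np.asarray(y[:n], dtype=float)
    corr = np.corrcoef(x, y)[0, 1]
    distinct_x = len(np.unique(np.round(x, 3)))
    distinct_y = len(np.unique(np.round(y, 3)))
    return (abs(corr) < max_abs_corr
            and distinct_x >= min_distinct
            and distinct_y >= min_distinct)
```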

4.3.Gallery: Some examples

Fig 5. Barkley Canyon Oxygen vs. Folger Salinity. Fig 6. Barkley Canyon Oxygen vs. Folger Pressure.
Fig 7. Folger Pressure vs. Barkley Canyon Temperature (borderline case with discrete values). Fig 8. Folger Temperature vs. Folger Salinity.

4.4.Correlating a data series with itself, shifted in time

Fig 9. Barkley Canyon Oxygen correlated with itself, shifted by 13782 minutes.

Fig 10. Folger Temperature correlated with itself, shifted by 5461 minutes.

We also correlated data series with themselves, but shifted in time. Figures 9 and 10 display plots of data series that are correlated with series later in time, but from the same respective sensor. Our observations show that more interesting patterns arise once the offset between the two series is over some 1000 minutes.
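The time-shifted variant reduces to plotting a series against a lagged copy of itself. A sketch follows, again with a hypothetical data export and the 13782-minute offset of Figure 9:

```python
import numpy as np
import matplotlib.pyplot as plt

series = np.loadtxt("barkley_canyon_oxygen.csv")  # hypothetical export, one value per minute
offset = 13782                                    # lag in minutes, as in Figure 9
n = 2000

x = series[:n]
y = series[offset:offset + n]                     # the same series, shifted in time

plt.scatter(x, y, s=4, c="black")
plt.xlabel("Oxygen at time t")
plt.ylabel("Oxygen at time t + offset")
plt.show()
```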

4.5.Comparison to Perlin Noise

Fig 11. Perlin noise, correlated to other Perlin noise.

When applying our method to two series of Perlin Noise values, we obtain a similar pattern. However, the plot looks more regular and less organic than our Found Data plots: the points are more uniformly distributed over the screen, and are lined up at quite regular distances along continuous lines. We speculate that these same-size and isotropic properties are counterproductive for the use we have in mind.
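For reference, the comparison amounts to plotting two independent noise series against each other. The sketch below substitutes a simple interpolated value noise for Perlin noise proper (which Figure 11 uses); it is enough to reproduce the more regular, line-like look described above.

```python
import numpy as np
import matplotlib.pyplot as plt

def smooth_noise(n, step=20, seed=0):
    """1-D value noise: random control points, cosine-interpolated.
    A rough stand-in for Perlin noise, adequate for this comparison."""
    rng = np.random.default_rng(seed)
    knots = rng.random(n // step + 2)
    out = np.empty(n)
    for i in range(n):
        j, t = divmod(i / step, 1.0)
        j = int(j)
        t = (1 - np.cos(np.pi * t)) / 2          # smooth interpolation between knots
        out[i] = knots[j] * (1 - t) + knots[j + 1] * t
    return out

n = 2000
x = smooth_noise(n, seed=1)
y = smooth_noise(n, seed=2)                      # an independent second series

plt.scatter(x, y, s=4, c="black")
plt.title("Two independent noise series plotted against each other")
plt.show()
```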

4.6.Scaling patterns to cover the full screen


We earlier assumed that Found Data patterns may not necessarily be same-size and isotropic; thus they at times result in localized and off-centered clusters on the screen. In contrast, with our goal of creating an immersive experience, we wanted to situate the viewer within the data environment, rather than having her look at an object with finite contours. For our application in the interactive installation, we thus dynamically scaled the data to center the patterns, extending them to cover the entire screen area. This provides more immersion, although it compromises the density and conciseness of the patterns.

Fig 12. Off-centered plot. Fig 13. The same data scaled to fit screen.
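The rescaling itself can be reduced to an independent min–max normalization of each axis onto the screen rectangle. A minimal sketch follows; the screen dimensions and margin are arbitrary examples, not the installation's values:

```python
import numpy as np

def scale_to_screen(x, y, width=1920, height=1080, margin=40):
    """Map a point cloud onto the full screen area.

    Each axis is normalized independently, so an off-centered, localized
    cluster is stretched to cover the screen; this trades the density and
    conciseness of the pattern for immersion, as discussed above.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    span_x = x.max() - x.min()
    span_y = y.max() - y.min()
    span_x = span_x if span_x > 0 else 1.0       # guard against constant series
    span_y = span_y if span_y > 0 else 1.0
    px = margin + (x - x.min()) / span_x * (width - 2 * margin)
    py = margin + (y - y.min()) / span_y * (height - 2 * margin)
    return px, py
```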

5.
Application

In the Breathe/Live/Speak installation, our method is used to position elements on interactive screens. Timing is chosen so that the elements appear to move as in an oceanic current, and the colour scheme further underlines this oceanic connotation. The colours, opacities and orientation angles of the individual elements are calculated using the same two datasets that were used to determine their position. We chose the range of visual parameters so as to create an evocative pseudo-spatiality contributing to the immersive aspect of the work. Viewers interact with the elements and distort their arrangement as a Kinect camera captures their body motions. Without their interference, elements bounce back to their original, data-directed positions.
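As a sketch of how the same data pair can drive the remaining visual parameters of an element (our illustration only; names and ranges are hypothetical, not the values used in the installation):

```python
def element_attributes(u, v):
    """Derive the visual parameters of one screen element from the same
    normalized data pair (u, v in [0, 1]) that determines its position.
    The ranges are illustrative, chosen to suggest an oceanic palette."""
    hue = 0.5 + 0.15 * (u - 0.5)       # a narrow blue-green band
    opacity = 0.3 + 0.6 * v            # more opaque elements read as closer
    angle = 360.0 * ((u + v) % 1.0)    # orientation in degrees
    return hue, opacity, angle
```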

Fig 14. Installation screenshots: Breathe (detail).

Fig 15. Installation screenshots: Speak (right).

Three thematic screens illustrate different themes with a varying choice of elements: Breathe with air-bubble elements, Live with plankton organisms, and Speak with typographic content.

6.
Conclusion

We demonstrate how we use scientific data to generate natural looking patterns on


aesthetic scatter plots. We chose to use empirical Found Data instead of Perlin Noise as
a generative principle for positioning screen elements, as we find the method shares a
thematic relation with our subject matter. Results show that our method is equally suited
to produce natural looking patterns.
Our research and creative production suggests that Found Data may be a useful
concept for directly linking the two closely related fields of Generative Art and Data
Visualization. We think of it as an approach that offers a method with low predictability
to enhance possibilities within Generative Art, and releases Data Visualization from the
expectation of literal interpretation.
While this paper focuses on presenting the method used in our examples of installation artwork, we recognize that the method will benefit from more research to systematically clarify what kind of data will lead to interesting patterns.
Scientific and public data is increasingly accessible, and many observe a proliferation of public visualization projects (Lima, 97). With this paper we offer an approach for artistic use of data, and thus hope to inspire others to work with this method and to develop it further.

References

Duchamp, Marcel, Wood, Beatrice, and/or Roché, H.P. Anonymous article. The Blind Man #2 (1917).
Friedman, Vitaly. Data Visualization and Infographics. Smashing Magazine, Monday Inspiration (January 14, 2008), accessed Jan 17, 2013, http://www.smashingmagazine.com/2008/01/14/monday-inspiration-data-visualization-and-infographics/.
Galanter, Philip. What Is Generative Art? Complexity Theory as a Context for Art Theory. Paper presented at GA2003 – 6th Generative Art Conference, 2003.
Irvin, Sherri. Appropriation and Authorship in Contemporary Art. The British Journal of Aesthetics 45, no. 2 (April 1, 2005): 123–137.
Lima, Manuel. Visual Complexity: Mapping Patterns of Information. Princeton Architectural Press, 2011.
Manovich, Lev. Foreword to Visual Complexity: Mapping Patterns of Information, by Manuel Lima, 11–13. Princeton Architectural Press, 2011.
Neptune Canada. Neptune Canada, Transforming Ocean Science, accessed Jan 17, 2013, http://www.neptunecanada.com/.
Perlin, Ken. MAKING NOISE. Based on a talk presented at GDCHardCore (Dec 9, 1999), accessed Jan 17, 2013, http://www.noisemachine.com/talk1/.
Shiffman, Daniel. The Nature of Code: Simulating Natural Systems with Processing. The Nature of Code, 2012.

Geometries of Flight: Remix as Nodal Practice

Monty Adkins
monty.adkins@hud.ac.uk
University of Huddersfield, England

Julio d'Escriván
julio.descrivan@hud.ac.uk
University of Huddersfield, England

Keywords: Audio-Visual, Remix, Hybridity, Nodalism, Video, Visual Music.

Abstract: This paper considers the authors' audiovisual work Geometries of Flight as an example of nodal practice as proposed by Philip Gochenour. The paper outlines Gochenour's concept and situates the remix and the mashup within this model. The paper interrogates various models of thought concurrent with Gochenour's to question the nature of the remix, appropriation, and originality in creative practice.


The task is to vision anew what is possible, but in a way that allows others to share the view.
Graeme Sullivan (2006)

1.
Introduction

Geometries of Flight is an audiovisual work created by the authors in 2013. Geometries of Flight was commissioned by Tobias Fischer as a contribution to a publication centred on the work of Kenneth Kirschner. The brief for the project was to use any of Kirschner's compositions as the starting point for a remix. All of the sound artists commissioned were given free rein to use his work in any way, with no restriction on length or media. The audio component of the project utilizes solely samples taken from Kirschner's 10 July, 2012, whilst the video uses YouTube footage. The authors propose that their use of these materials goes beyond the accepted notion of the remix and is an example of nodal practice.

1.1.
Defining nodalism
Developing out of Modernism and Post-Modernism, Deleuze and Guattari proposed in Mille Plateaux the notion of a rhizomatic culture, one in which hierarchical structures were discarded in favour of the concept of a planar network of connections. This concept of a rhizomatic understanding of society and culture has been elaborated further in Philip Gochenour's concept of nodalism. Although similar to the rhizomatic model proposed by Deleuze and Guattari, nodalism proposes a model and way of thinking adopted by a number of contemporary disciplines, and in its neutrality supersedes the cultural baggage associated with post-modernist thinking and its notions of deconstructivism, rationalization, parody, quotation and irony. Gochenour proposes that in the 21st century we find that our conception of the world has taken on a particular form, one of nodes situated in networks. (Gochenour, 2011) He writes that the nodalistic trope can be simply described as a figure of speech that is used to portray an object or process in terms of the connection of discrete units by an open network system that has no hierarchical structure. (Gochenour, 2011) In contemporary society the node is ubiquitous – from referring to the internet as the web, the Facebook logo and our social networking structures, mathematics, transportation networks, computer science, economics and critical theory, as well as its use in popular culture such as Node Magazine – a literary project initiated by Sean Kearney growing out of William Gibson's novel Spook Country. Gochenour maintains that nodalism has arguably become a dominant discourse within Western culture. (Gochenour, 2011) Thus far this notion has not been applied specifically to music or visual culture.
We propose that nodalism is the way of approaching the production of all artwork in contemporary culture; that nodalism enables the reintroduction of a sense of local-hierarchy within a network; and that understanding this local-hierarchy and its associated network is the means of interpreting and contextualising new artistic endeavour.

1.2.Riffs on remixing
Understanding the concept of originality, where our ideas come from and how we
appropriate, re-use and adapt familiar tropes is the subject of many academic texts.

Modernist texts such as Harold Bloom's The Anxiety of Influence: A theory of Poetry posit a central thesis that poets are hindered in their quest for an original voice because of the influence of other poets. Such an anxiety can also be found in the ground-zero musical perspective of post-1945 composers such as Pierre Boulez and Karlheinz Stockhausen. From a nodalist perspective, however, all new artistic endeavour is a hybrid of pre-existing models, thoughts or work – or, as Kirby Ferguson puts it, everything is a remix.1
1. www.everythingisaremix.info

Michel Foucault in The Archaeology of Knowledge writes that The frontiers of a book are never clear-cut (…) it is caught up in a system of references to other books, other texts, other sentences: it is a node within a network. (Foucault, 1982) This method of thinking does not lead to a Bloomian anxiety but rather acknowledges culture and its development as an evolutionary process. Such a model is proposed in The Selfish Gene by Richard Dawkins (Dawkins, 1976). Dawkins' model of culture is one comprised of memes, units of cultural information that are transmitted from one individual to another. Eventually a critical mass of memes can be used to identify cultures and sub-cultures that have a shared understanding of such memes. From a nodal perspective, the identification of a genre is understood as a grouping of culturally encoded memes, i.e. there is a local hierarchy of memes – certain memes are valued above others in order to form a shared sense of identity. In Geometries of Flight such memes or nodes are Kirschner's work 10 July, 2012; an approach to handling form, tonality, and sound processing that stems from the genre within which Kirschner's work is identified – a kind of instrumental experimental ambient music characterised by such labels as 12k, Room40, Spekk and Audiobulb.
From a compositional perspective, the conscious (or unconscious) usage of memes inherently implies the drawing together of different musical elements or stylistic traits, either from within a genre or from another genre: in other words, in electronic music – sampling (in the broadest sense of the word). This mode of thinking shares a close kinship with the post-modern sampling aesthetic of Paul Miller (aka DJ Spooky) who writes,

Essentially, for me, music is a metaphor, a tool for reflection. We need to think of music as information, not simply as rhythms, but as codes for aesthetic translation between blurred categories that have slowly become more and more obsolete. For me, the DJ metaphor is about thinking around the concept of collage and its place in the everyday world of information, computational modelling, and conceptual art – the basic sense of rhizomatic thought – thinking in meshworks, in nets that extend to other nets – it's the driving force of my music and art (…) We live in an era where quotation and sampling operate on such a deep level that the archaeology of what can be called knowledge floats in a murky realm between the real and unreal (Miller, 2005).

Nodalism, with its emphasis on interconnectedness, seeks to understand phenomena through an understanding of the plurality of links or memes that link to the artistic work under examination. Nodalism, memetics and the rhizomatic are all means of discussing a post-structuralist aesthetic in which the line between creating an original artwork and one that uses elements of pre-existing material is fragile.
We argue that in light of the writings of Foucault, Dawkins, Miller et al., the concept of originality is such a loaded term that the material which constitutes the piece is now not a relevant measure. To paraphrase Brian Eno, in contemporary practice it is the art of arranging and editing that is more important than content. It is this emphasis on arranging and editing material that is to be found in artists and works as diverse as Igor Stravinsky's Pulcinella (1919–20), William Burroughs' cut-ups and John Oswald's plunderphonics.
In Geometries of Flight, as in the works cited above, it is the process and reframing
of the original material that is the most important factor in determining the identity of
the new work rather than the embedding of samples as referential units. In such works,
material, concepts, and ideas are assimilated into the very fabric of the new work rather
than merely weaving quotations into the surface level of the work.
In his book In Praise of Copying, Marcus Boon writes that The assemblage of a new artifact from fragments of preexisting objects or forms is one of the key practices of modernist aesthetics. (Boon, 2010:145–146) Boon states that today the terms montage and collage are often taken to mean the same thing, and assemblage is often used to describe the use of similar techniques in sound and literary work. Boon continues,

the power of détournement, the transformation of pre-existing elements in a new ensemble, stems from the double meaning, from the enrichment of most of the terms by the coexistence within them of their old and new senses. It is in this sense that montage is a practice of copying, since it often involves the citation of the old object in the new. (Boon, 2010:146)

However, Geometries of Flight is not a montage. In a montage something is deconstructed and often, as in a commercial remix, it is important for the audience to be aware of the breaks. Like Oswald's plunderphonics and Portishead's Strangers,2 it is the identification of the breaks that engenders understanding and meaning in the new artwork. A more elaborate example can be found in Robin Holloway's Gilded Goldbergs Op. 86 (1992–97). The work transcends the transcriptions of Bach by Busoni to become an elaborate reworking within a contemporary idiom. Bach's Goldberg Variations provides the structural and harmonic framework which then acts as a springboard for musical portraits and character vignettes in the manner of Elgar's Enigma Variations Op. 39 (1899). Holloway's work is neither a montage nor a remix, in that the structure of the original Bach composition – an aria and thirty variations – is adhered to. Amon Tobin's works present a final and perhaps the most sophisticated example of such work. Tobin's albums Supermodified, Permutations and Bricolage take samples from a variety of sources and remix them into a new form with extensive processing. In Tobin's work the origin of the samples used is of less importance than their inherent musical interest to him as a musician. Although such samples when remixed nevertheless act as signposts to the original track, Tobin is not trying to say anything about the specific combination of samples other than something musical. In this case the extensive nodal connections made carry no intended message; rather, the focus is on the resulting mix by Tobin.
2. Portishead, Dummy, Go! Beat, 1994.

As such, nodal practice is a more pertinent way of describing the work of Tobin and of such figures as Fatboy Slim than the post-modern aesthetic of sampling and collage technique – also found in the work of composers such as Alfred Schnittke and Holloway.
In order to situate and understand Geometries of Flight as a work it is important to interrogate the notion of the remix. In the commercial world of popular music the remix has a particular currency. Although, exceptionally, Matthew Herbert may remix a track using only the packaging it was sent to him in, the normal process of remixing is,

(…) to take an already finished track and remake it by using a combination of: rearranging it in a different way, removing parts, adding new parts, adding new effects, changing the genre of the music completely or whatever you can come up with to artistically make it different than the original. A remix is in an area of change from the original track that is more than just an edited version or cover version of the track but it is not too different that it becomes a new track that happens to sample the original (…) Making the original track artistically different – but at the same time keeping some essential elements of the original version.3
3. http://www.remixcomps.com/blog/guide-remix-contests-and-remixing-part-1 [accessed 16.1.2013]

Kirschner's 10 July, 2012 was used as a source for the sonic materials. Five short samples were taken and processed considerably. What results is a deconstruction of the conceptual identity of the original. As such it is somewhat removed from the notion of the remix cited above, and also from the plunderphonics work of John Oswald, insomuch as identification of the original source is no longer relevant in the formation of meaning and understanding in the resulting artwork. In this sense Kirschner's work is not remixed but becomes a repository of sonic resource to be drawn upon. From this perspective Geometries of Flight is not so far removed in its methodology from a work such as Elizabeth Hoffman's electroacoustic work d-ness (2011), which uses as its source material a recording of another of Hoffman's works, Red is the Rows (2011) for two violins.

In his book Crowds and Power (1960) Elias Canetti maintains that imitation is only the first stage on the way to total transformation. Canetti's observations on the different degrees of transformation propose a spectrum of difference between mere surface or superficial imitation and a total interior and exterior transformation. In Geometries of Flight Kirschner's original sonic material has undergone such a fundamental interior transformation, resulting in an exterior that has little superficial sonic resemblance to the original. Yet there is still a kinship between the two. The question then is what remains of the original? Here we are reminded of Picasso's statement on abstract art: There is no abstract art. You must always start with something. Afterwards you can remove all traces of reality. There's no danger then, anyway, because the idea of the object will have left an indelible mark.4 Picasso's quotation suggests that there always remains a trace of the original. In a commercial remix there often remains a surface-level connection with the original track. Specific gestures, sound objects or motifs are embedded clearly in the remix as unambiguous signposts to the original track. In Geometries of Flight this trace is to be found not in such blatant sonic markers but in the harmonic fields employed within the piece. The gestural language of Geometries of Flight is far removed from Kirschner's original. However, due to the layering of granular sound processing, the harmonic characteristics of the original gestures and phrases are evident. In this sense the notion of the remix is extended to include deeper-level musical processes and a more experimental approach to listening as expounded by Smalley (1997).
4. Pablo Picasso quotation cited from http://quote.robertgenn.com/auth_search.php?authid=72 [accessed 16.1.2013]
The remix therefore becomes not merely the reframing of elements from one node – the original track itself – but establishes a local-hierarchy of other nodes regarding the identification of style and context. In the case of Geometries of Flight the remix involves nodes that draw together Kirschner's oeuvre, the genre in which he works, the authors' own work and idiom, sound processing techniques and the sonic trace of software, reading audio-visual materials, sampling, Smalley's technological listening, as well as other audio works. It is these additional nodal connections that make the contemporary remix such a rich creative endeavour.

2.
Knowing through visual remaking

2.1.
Understanding through visual editing on verbal cues
Nodalism, beyond presenting a contemporary approach to poietics and originality is
also a vehicle for understanding in that it provides for a nodal hierarchy in which to
traverse a work of art; it can become a research procedure as well as a creative one.
Discussing what constitutes a research act in art practice, Graeme Sullivan (2006) tells
us that,

if the purpose of research is the creation of new knowledge, then the outcome
is not merely to help explain things in causal or relational terms, but to fully
understand them in a way that helps us act on that knowledge.

This acting is the creation of a new work of art. The visual language employed in Geometries of Flight is one such example of acting on new knowledge provided by the music. In conversations between the authors we found a common subjective visualisation of the large swathes of granular material as an 'epic freeze'. This new intersubjective knowledge began to disambiguate the meaning of the new music, to paraphrase Tagg (Tagg, 2012:Loc 156), by going beyond the iconic, indexical and connotative types of semiosis we would normally expect. These were simply not obvious, as the music resembled itself and other instances of granulations and ambient composition, yet with a particular take on the original material (broadly Lo-Fi, irregular piano music passages revealing a recognisable sample of the original almost halfway through the piece). By placing ourselves at the receiving end of the communication process (Tagg, 2012) and trying to find common verbalisations to express the musical experience, we came up with imagined landscapes of glaciation, of flight, of blinding whiteness, and of arbitrary arrangements of streaming video within the cinematic canvas. For now, let us consider the audiovisual discourse yielded by the second degree of remixing the Kirschner, which is the visual mode of Geometries of Flight.
In an attempt to capture this 'epic freeze' image, one of the authors scoured YouTube with the intention of finding vistas of Arctic or Antarctic landscape where ice would be prominent. This led also to the inclusion of aerial polar landscape footage. Once this material had been identified, a process of de-contextualisation began. The idea was first to create a database of ice materials that brought to mind the musical strands in the audio mix; secondly, it became important to make explicit that this was to be, in the words of Lev Manovich (2001:Loc 241), an instance of database imagination. Much like the original Kirschner is broken down, modified, re-selected, given order and re-layered, the visuals attempted to do the same – in essence, the same nodal practice is applied in both the musical and the visual domains. The resulting edit of video attempts, again paraphrasing ideas of Manovich (2001:Loc 297), a simulacrum of new media. With the visual loop at its core, Geometries of Flight now results in a non-story that tells the gradual discovery of a visual language. And much like Manovich's assessment of Dziga Vertov as providing narrative through a gradual process of discovery of the database (Manovich, 2001:Loc 266), we have released our database of visuals as a gradual discovery of the sounding music of Geometries of Flight.
Rather than being an attempt at verbosity, we mention sounding music in an in-
tended contraposition to visual music. The term visual music, coined by art critic Roger
Fry in 1912 to describe the work of Kandinsky is perfectly consonant with our intentions
for Geometries of Flight. In our case, the sounding music gives rise to the visual music,
mediated by the authors' intersubjective experience. Garro (2012:103) gives an informative
account of visual music primed for the consideration of electroacoustic music, especially
in the binding of the visual experience to time (even as we regard the canvas, beyond the
first general intuitive sighting, we traverse that which is framed in time). Kandinsky's improvisations and compositions between 1910 and 1914 operate according to this
timed viewing. And it is interesting to consider pieces like Composition VIII (1923) as an
example of an image that needs to be traversed to be comprehended, the overall view not
revealing anything other than multiple paths for the eye to consider. In Point and Line
to Plane (1926), Kandinsky makes much of sound to describe what is in essence visual,
eventually both becoming the same thing: organised vibrations experienced in time.

2.2.
The Plunderphonic model: understanding by remaking
Plunderphonics is a good example to look at to illustrate this understanding of Kirschner's original material and the derivation of new knowledge from it by remaking it. In an interview with Norman Igma, John Oswald defines a plunderphone as a recognizable sonic quote, using the actual sound of something familiar which has already been recorded. Further, he distinguishes that from musical quotation: Whistling a bar of Density 21.5 is a traditional musical quote. Taking Madonna singing Like a Virgin and re-recording it backwards or slower is plunderphonics, as long as you can reasonably recognize the source (Igma, 2000). The key characteristic of the plunderphone is the ability of the listener to recognise the source. This act of recognition mediated by transformation raises interesting epistemological issues. These can be discussed usefully in three ways that are applicable to any remix aesthetic.
Firstly, by choosing objects to be remade or imitated, we begin a process of critical categorisation, and categorisation shows understanding. Take, for instance, Brown, from John Oswald's 69/96. This piece is a veritable catalogue of James-Brownisms where Oswald takes us on a lightning tour of funkiness. In this piece we find that the samples of James Brown are chosen and grouped according to various strands: shouts, beats, saxophone solos, hits, vamps etc. These elements are not just samples from the source (James Brown) but they are a choice of what makes James Brown into James Brown, from Oswald's point of view. They demonstrate Oswald's understanding of James Brown. This understanding is not expressed through language, but through placing samples one after another as well as alongside each other. The resulting listening experience is a transmission of this knowledge of James Brown, and is both an interpretation (hermeneutic process) communicated to fellow musicians as well as a new musical artefact gifted to the audience.
Secondly, by attempting to blend together our material into new constructs, we also evidence that we understand the basic morphology of that material. Again, in Brown, if we look at the combination of hits, vocal cries and beats, we see matching by beat, texture and general shape likeness. A further level of complication comes from the framing of the material for remixing. In this way a whole bar of a drum break may be cut and placed, a single get down! shout may be trimmed just so. The result becomes a new rhizomatic expression of James Brown, yet nodalised by the very act of ordering. Music, being time-based, declares precedence and, being amplitude-sensitive, declares hierarchy (importance). In this way what is chosen as an introduction (one, two, three, four... [stutter]) is clearly there as both an indexical sign (the count) and iconic sign (Tagg, 2012:117) (James Brown's characteristic voice and count-in). The following saxophone squeal is subservient to the beat, and we know this because the amplitude of the drums is greater, the saxophone becoming just a colouring, perhaps a vocal anaphone of James Brown's funky yelling.
Similarly, in Geometries of Flight, an intimate relationship with the piece by
Kirschner (10 July, 2012) is evidenced by the choice and layering of granulations of
the original piano material (a piece which wanders pleasantly in semi-improvisatory
phrasing through cyclical note/chord sequences). Where Kirschner seems not to imply
necessary harmonic, timbral or melodic precedence, Geometries of Flight interprets the
essence of the piece as a celebration of the piano sound both in texture and register.
It does this by presenting broad swathes of granulations which highlight the importance
of timbre by freezing and overlaying different samples of the original piano texture.
The non-teleological cyclical structure of the original is distilled into frozen layers of
sound. But further than this, Geometries of Flight imposes a form, thus re-framing the
samples within a new composition (the act of putting things together anew). This is
to say that the position or placement of the sounds obeys a new form. This new form
results from basic compositional choices: what goes first; what goes second; if sounds
are playing together, which should be louder?; how many sounds can co-exist at any
moment in the mix, etc.

Fig. 1. Geometries of Flight: layers reframed.5
5. https://vimeo.com/57453946

Thirdly, when remaking through mashup/remix, we show an understanding of semantic value and the ability to recombine the original material into new semantic constructs. In parallel to the audio discourse, the visuals in Geometries of Flight show both iconic signs (as snow and ice are presented) to the idea of freezing sound through granulation, and an indexical sign through synchrony with chosen moments of presumed musical importance. An understanding of layering within the sound world is also evidenced by visual layers of video, each telling its own story but only partially, enigmatically, thanks to the framing of the streaming video, the choice and the positioning within the screen as canvas.

Fig. 2. Geometries of Flight: indexical and iconic signs of freezing.6
6. https://vimeo.com/57453946

Our imagination is embodied in our art practice. Ideas become tangible as the work
gets made and our thinking in a medium (Sullivan, 2006) raises domain-specific epis-
temological issues as the piece takes shape. Geometries of Flight evidences audiovisual
thinking as much as it does musical thinking and insights about both modalities are
primed by each other.

2.3. Remaking and 'liking' as confirmation of knowledge reception
Remixing something shows at least interest in the original if not outright appreciation,
yet placing the work in the contemporary social web yields further signs of acceptance
(and possible understanding) for a work. Contemporary social media requires that we
react to what is shared with us. The ubiquitous thumbs-up icon popularised by Facebook
is now to be found anywhere content is presented on internet. Twitter and Google+ have
their equivalents in favourite and +1 respectively. At the same time, sites like Vimeo,
Flickr or Soundcloud allow for author-enabled downloading and sometimes attaching
Creative Commons licenses that tell us what we are allowed to do with the media (usu-
ally implying that we should probably think of doing something with it!). This liking
seems certainly a way of confirming the artistic message, which is the work itself. We
like if we like and we do not like as a passive indication of either indifference or rejec-
tion (which for an artist is the same). The new Facebook Graph Search, for instance, isa
way to trawl through the evidences of reception of media shared on the net as well as
identifying meta-communities of likers. This introduces nodal thinking into the social
reception phenomenon. In the same way that the artist categorises, orders and evalu-
ates signification introducing nodalism into what was essentially rhizomatic thinking,
now liking establishes nodalism in reception. The database narrative described by Lev
Manovich in his Language of New Media (2001) finds an equivalent in a sort of database
reception. Here, audio-viewers are then able to categorise, order and assess signification
through the construction of playlists or collections and by grouping themselves into
communities of friends/subscribers.
Although the above applies to text-based media, we could say it finds its real purpose
in non-verbal media. Audiovisual art evidences the world in a non-verbal manner so
it is only fair that it will demand just two things of its audience: to like or to remake.
If the latter is intended, then downloading or sharing will be enabled, but the dynam-
ics of the net are such that often sharing may be construed by the simple act of posting,
with the knowledge that copying is possible and easy. In this sense Graeme Sullivan
(2006) captures the interaction between artist and audience perfectly when he writes:

There is an acknowledgment that art practice is not only a personal pur-


suit but also a public process that can change the way we understand things.
Consequently, the ideas expressed and communicated have an interpretive utility
that assumes different textual forms as others make sense of what it is artists
have to say through what it is they see. Interpretive research acts build on the rich
conceptual traditions associated with image making whose purpose is to open up
dialogue between the artist and viewer, and among an interpretive community
whose interests may cut across disciplines (Sullivan, 2006).

3.
Conclusion

Nodalism engenders a means of understanding the creative work and brings together the oft-cited characteristics opposing Modernism with Post-Modernism into a neutral frame that considers that all materials, ideas, and concepts can be hybridised and developed in the creation of new artwork. The authors also propose that nodalism, if one accepts
a memetic understanding of culture, allows a local-hierarchy of nodes (or memes) to be
re-introduced into an essentially rhizomatic model. Further, nodalism introduces tools
to understand a work of art and to evidence this through remaking. Where all nodes
of a rhizomatic structure are egalitarian, nodalism introduces a sense of direction by
virtue of hierarchy. This then becomes useful for finding ones way through databases
of creative materials.
The hierarchy of nodes is not that between high and low art or the inherent value of one artwork over another, but rather the preferencing of certain nodes over others. It is acknowledged that whilst larger nodal interconnections will assist in the definition of genres, more localised nodal connections will define a specific artist's idiom within this genre.
In contemporary artistic practice it is the modernist Bloomian anxiety that, ironically, may well produce unoriginal work. It is the outward-looking practice of nodalism that facilitates a plethora of resources to be plundered. It is arguable that the more nodes one is aware of, the more original one's work will be. A similar model is found in Jacques Lacan's description of the linguistic signifying chain, which he described as rings of a necklace that is a ring in another necklace made of rings (Lacan, 1977:153). Or, to put it in the language of semiotics applied to music proposed by Tagg (2012), the mapping of one semantic network made up of nodes to a new one of nodes re-made.

Bibliography

Bloom, Harold. The Anxiety of Influence: A theory of Poetry, (1973), Oxford: Oxford University Press, 1997.
Boon, Marcus. In Praise of Copying, Cambridge, Massachusetts & London, England: Harvard University Press, 2010.
Dawkins, Richard. The Selfish Gene, Oxford: Oxford University Press, 1976.
Foucault, Michel. The Archaeology of Knowledge and the Discourse on Language. New York: Vintage, 1982.
Garro, Diego. From Sonic Art to Visual Music: Divergences, convergences, intersections, in Organised Sound, 17(02), pp. 103–113, 2012.
Gochenour, Phillip H. Nodalism, in Digital Humanities Quarterly, Volume 5, Number 3, 2011, http://www.digitalhumanities.org/dhq/vol/5/3/000105/000105.html [accessed 16.1.2013].
Igma, Norma. Plunderstanding Ecophonomics: Strategies for the Transformation of Existing Music – an Interview by Norm Igma with John Oswald, in Arcana: Musicians on Music, United States: Granary Books, 2000.
Kandinsky, Vassily. Point and Line to Plane. New York: Guggenheim Foundation, 1926.
Lacan, Jacques. (trans. Sheridan, A.), Écrits: A Selection. Bristol: Routledge, 1977.
Manovich, Lev. The Language of New Media [Kindle Edition]. Cambridge, Mass.: MIT Press, 2001.
Miller, Paul. Interview with Carlo Simula for his book Millesuioni. Omaggio a Deleuze e Guattari, 2005, http://www.djspooky.com/articles/deleuze_and_guattari.html [accessed 16.1.2013].
Tagg, Philip. Music's Meanings. New York and Huddersfield: The Mass Media Music Scholars Press, 2012.
Sullivan, Graeme. Research acts in art practice, in Studies in Art Education, 48.1 (Fall 2006): 19–35.

Traversal Hermeneutics: The Emergence of
Narrative in Ergodic Media

Miguel Carvalhais
mcarvalhais@fba.up.pt
ID+, Faculdade de Belas Artes, Universidade do Porto, Portugal

Keywords: Generative Aesthetics, Computational Art and Design, Interaction, Narrative,


Cognition.

Abstract: Digital technologies are capable of simulating traditional media and of giving rise to new media forms that often closely resemble the experience of somatic technologies.
Their interactive capabilities are partially responsible for this, but procedural authorship
and poesis are supported by process intensity and generative potential.
Designers, the systems and their human operators have very different and maybe
irreconcilable points of view, which profoundly affect their experiences during the dia
logical construction of the works and of their effusions. From its particular point of

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org


view during the traversal, the operator develops a hermeneutic experience during which
models and simulations of the system are built. The operators actions within the system
greatly contribute to this development, but it is their capacity to create theories of the
system that is paramount to the success of this effort.
The analysis and critique of these digital artifacts, indeed the procedural pleasures
attainable through these systems, are indissociable from their procedural understanding.
Although traditional aesthetic studies of surface structures or outputs are still possible,
once we regard behaviors and computational processes as an integral part of the systems'
content, it becomes essential to understand how the operator relates to these beyond a
strictly mechanical relation.
This paper discusses how models and simulations allow the operator to anticipate
the behaviors, reactions and configurations of the systems. How they are continuously
revised, confirmed or falsified throughout the traversal, and how this process results in
a dialectical tension that is the basis for the development of narratives and of dramatic
experiences with these, otherwise highly abstract, systems.

1.
Artificial Aesthetic Artifacts

In his book Collective Intelligence, Pierre Lévy proposes a classification of the technolo-
gies used to control message flow (1997, 45) in three groups that he terms somatic, molar
and digital. Somatic technologies are defined as those implying the effective presence,
commitment, energy, and sensibility of the body for the production of signs, and that
are also characterized by the multimodal nature of the messages produced and by the
uniqueness of each message, that is always produced in and dependent on a dynamic
and complex context that inevitably affects it.
Molar technologies, that we usually simply call media, much as Lévy also does, focus
and reproduce messages to ensure they will travel farther, and improve distribution
through space and time. (1997, 46) They are described as technologies that inevitably af-
fect the production of messages but that are not, as a first approximation, technologies
for sign creation, rather for the fixation, reproduction, and transportation of somatically
produced messages. (1997, 46) Their capacity to create new signs is very limited, but it may
be felt in media such as film, where the processes of montage introduce some potential
for the generation of new messages for, although the raw image or sound may be stored
on the recording, the global messagethe filmresults from () montage. (1997, 46)
Digital technologies stem from digitalization, the absolute of montage that affects
the tiniest fragments of a message, an indefinite and constantly renewed receptivity to
the combination, fusion, and replenishment of signs (1997, 48) that preserves the power
to record and distribute information while bringing the technologies closer to some of
the characteristics of somatic technologies. This, however, only happens when digital
technologies are able to retain a certain degree of what Chris Crawford called process
intensity, the degree to which a program emphasizes processes instead of data (1987),
and consequently retains some generative potential (Boden and Edmonds 2009). Perhaps
naturally, given the way we tend to relate to any new medium in the light of the
previously existing media (Bolter and Grusin 1999, McLuhan 1964), digital technologies
tend to follow on the steps of their molar predecessors, thus optimizing for constancy
and effectivity, or for data intensity, instead of investing the technological resources in
developing procedural and participatory traits (Murray 1997, 71). Many digital media are
built with the explicit intent of simulating the traits of molar media rather than tryingto
escape from the conventions and limitations of previous technologies. We therefore find
that in such cases, the potential of the technologies is not effectively exploited (Lévy
1997, 49), even if they are digital and computational.

Processing data is the very essence of what a computer does. There are many
technologies that can store data: magnetic tape, punched cards, punched tape,
paper and ink, microfilm, microfiche, and optical disk, to name just a few. But
there is only one technology that can process data: the computer. This is its sin-
gle source of superiority over the other technologies. Using the computer in a
data-intensive mode wastes its greatest strength. (Crawford 1987)

The code of these technologies is where the potential for procedural authorship resides (Murray 1997) but, while opening spaces of possibilities, code may also enforce strict limitations within those spaces. The code is the law that governs these technologies and their products (Lessig 2001, 35), a law that one has no option besides abiding to, save for actually interfering with the code, something which may in some cases actually remain a possibility but that is far from being the norm when it comes to the experience of digital media.1 Therefore, once one develops a digital medium as an analogue of molar media, one is building an experience that may have some benefits over the molar equivalent – such as speed, economy, etc. – but that may actually limit the freedom to explore and to reconfigure the messages being communicated. Aarseth (1997, 46) offers the example of William Gibson's 1992 poem Agrippa (a book of the dead) as a digital message that was built to force and preserve its linear integrity in ways that wouldn't in principle be achievable with molar media and that are strictly enforced by the nature of code.
1. Not quite with the experience of digital technologies. If in many occasions there is at least the theoretical possibility of accessing and editing the code of a digital medium, more often than not that experience is not simple or straightforward, or it may not fit within the expectations of the users or readers.
We may therefore posit that if digital technologies allow us to develop radically new
media and messages, they may also allow us to develop artifacts that outperform con-
ventional molar media in regard to their specific traits. We are consequently faced with
an ambiguous descriptor that may be equally applied to media with very diverse traits.
For this reason we proposed the alternative designation of some digital media as artifi-
cial aesthetic artifacts (Carvalhais 2010, 2011b), a term that simultaneously points to their
sensorial nature and to their essence as computational systems, as systems where com-
putation is not only found at the logical or code layer (as defined by Lessig) but is also an
integral part of the content layer.
Artificial aesthetic artifacts have the potential to develop what Christopher Alexander calls a living structure (2002): they are process intensive (regardless of whether they use data structures and of their complexity and extension), they are autopoietic and they are rich in procedural authorship.2
2. We may identify this procedural authorship both in the author as well as in the readers, users or interactants, and even within the system itself, which may be the bearer of a considerable degree of autonomy.
To consider a subclass of digital media as artificial aesthetic artifacts allows us to better understand the importance of the added dynamics and of the more complex user functions that are involved in their creation and experience. It allows us to better parse between digital systems that are closer in their nature and modes of operation to
molar media and those that in some ways become more similar to somatic technologies.
Artificial aesthetic artifacts become utterly dependent on their contexts of operation to develop messages that, regardless of the initial structures or of the intended final configurations, are unique, messages that, in Lévy's words, become inseparable from a changing context (1997, 46). This was, of course, how Lévy described somatic messages, that are never exactly reproduced by somatic technology (1997, 45), and it is fitting to think of artificial aesthetic artifacts along the same lines. The contexts are necessarily different, perhaps at times less linked to physical settings3 and more dependent on interaction, interpretation and on the procedural contexts at the core of the systems, a layer that, as we will see, is difficult to perceive directly.
3. Although this is naturally possible, as any message that originates in a digital medium must eventually be translated to sensorial stimuli before being perceived by humans.
If digital technologies that simulate traits of molar media can, in some ways, be seen
as stepping even further from the traits of somatic messages, we find that artificial aes-
thetic artifacts bring us closer to that original essence of the technologies for message
production that are centered in the human body and that are dependent on it. If the
focus of molar technologies can be described as fidelity in reproduction, that of artificial
aesthetic artifacts may very well be variety in every instantiation. To keep recognizable
structures or patterns between instantiations but to creatively infuse them with disorder,
as suggested by Italo Calvino (qtd. in Aarseth 1997, 129).

2.
MDA

Coprocessing (Aarseth 1997, 135), the human-machine regime of collaboration that is


found at the heart of many of these systems, allows the conversational construction of
the works and of their effusions. But as we will see, even non-interactive systems, or those
where readers' inputs may be minimal, can be construed through iterative exchanges of
information between the systems and their users.
According to Aarseth, cybernetic systems – as we may classify many of these artificial aesthetic artifacts – can develop three regimes of collaboration with the human: (1) preprocessing, in which the machine is programmed, configured, and loaded by the human; (2) coprocessing, in which the machine and the human produce (…) in tandem; and (3) postprocessing, in which the human selects some of the machine's effusions and excludes others. (1997, 135) Alexander Galloway concurrently proposes the identification of machine actions and of operator actions, the first of these performed by the software and hardware (2006, 5) and the latter by the human, clearly distinguishing them in scope but warning us of the artificiality of the division, as both the machine and the operator work together in a cybernetic relationship, which makes both types of action ontologically the same, existing as a unified, single phenomenon, even if they are distinguishable
Notwithstanding this, if we want to understand the relevance of artificial aesthetic
artifacts as communicational and artistic systems, we should be careful to maintain the
distinction in the analysis because not only in the pre- and post- positions but also in
coprocessing, the roles of the human operators are indeed different from those of the ma-
chines; perhaps more importantly, the points of view of the machines or systems (Bogost
2012) and of the humans at the different positions of collaboration may be quite different.
To better understand this, it may be useful to resort to Hunicke, LeBlanc and Zubek's
MDA framework (2004), originally developed as a formal approach to game design and
game research. The domain of computer games is of course one where we can find sev-
eral artificial aesthetic artifacts, and one from where we can extrapolate a large quantity
of knowledge for their study.
MDA, for Mechanics, Dynamics, and Aesthetics, is a framework for understanding
games that aims to bridge the gap between game design and development, game crit-
icism, and technical game research (Hunicke, et al. 2004) by proposing an approach from both the perspective of the designer and that of the player, two views through which we
discover a wide range of possibilities and interdependencies in a system. MDA is devel-
oped from the assumption that games are characterized by a relatively unpredictable
consumption, meaning that the string of events that occur during gameplay and the
outcome of those events are unknown at the time the product is finished, and that the
main content of a game is its behavior, not the media that eventually streams out of it
towards the player. This is a sense in which we again discover code as the content of
games, described as being more like artifacts than media. MDA therefore formalizes the
consumption of games by analyzing them in three distinct components: Rules, System
and Fun; and establishing their design counterparts, described as: Mechanics, Dynamics
and Aesthetics.

Mechanics describes the particular components of the game, at the level of data
representation and algorithms.
Dynamics describes the run-time behavior of the mechanics acting on player
inputs and each others outputs over time.
Aesthetics describes the desirable emotional responses evoked in the player, when
she interacts with the game system. (Hunicke, et al. 2004)

Each of these three components can be considered as a lens to the game that is
separate from, but causally linked to, all the others and that shapes the perspectives one
may develop:

From the designers perspective, the mechanics give rise to dynamic system be-
havior, which in turn leads to particular aesthetic experiences. From the players
perspective, aesthetics set the tone, which is born out in observable dynamics
and eventually, operable mechanics. (Hunicke, et al. 2004)

We can therefore identify the layers of emergence in the system's becoming after the preprocessing stage, and consecutively understand the converse layers through which the player, reader or interactant may peer in the dialogue with the system. The more a system is characterized by process intensity, the more complex will the emergences from one layer to the next be, the more control and agency (Murray 1997) the author may need to offer to the user, to the system or both.4 Therefore, by focusing and filtering the perspectives, each of these layers inevitably affects the degrees of control that each coprocessor can have within the system.
4. We often refer to user as a singular human counterpart in the system's operation. We should however note that very often this user can of course be plural, and distributed, both in space and time, or the user's role can be occupied by another artificial aesthetic artifact, or by parts of the same artificial aesthetic artifact, itself a very singular form of plurality.
3.
Readers' Roles

Although Aarseth doesn't use the term consumption, he addresses the unpredictability of the experience of ergodic texts5 – and by extension of other ergodic media – through the analysis of their traversal function, the mechanisms by which units of the system are revealed as surface structures that are presented to the human operator. The analytical model – Aarseth's textonomy – developed in Cybertext is built as a descriptor of the artifacts according to their modes of traversal, each variable focusing on different aspects of the traversal function that uniquely characterize each of the systems: Dynamics, Determinability, Transiency, Perspective, Access, Linking and User Functions (Aarseth 1997, 62-64). In spite of the relative neglect of the political, social, and cultural contexts in which texts are used and of the interactions of different modalities within electronic texts (Hayles 2005, 36), the model is nevertheless possible to apply to similar traits in systems whose primary function is not to relay verbal information (Aarseth 1997, 62) or with outputs that are not exclusively verbal, although there is room for improvement and completion by expansion with further variables (Carvalhais 2010, 2012).
5. During the cybertextual process, the user will have effectuated a semiotic sequence, and this selective movement is a work of physical construction that the various concepts of reading do not account for. This phenomenon I call ergodic, using a term appropriated from physics that derives from the Greek words ergon and hodos, meaning work and path. In ergodic literature, nontrivial effort is required to allow the reader to traverse the text. (Aarseth 1997, 1)
Through the traversal, human operators always develop an interpretative function, similar to that which we can find in more conventional media, where all decisions made by
the reader only concern meaning. In the case of ergodic media and of artificial aesthetic
artifacts, this interpretative function may be accompanied by three additional functions
postulated by Aarseth (1997, 64): the explorative function, in which decisions can be made
regarding which paths to take along the traversal; the configurative function, in which the
order of the parts can be rearranged and the navigable structure can be created, shaped or
influenced, more than just explored; and finally, in Aarseths model, the textonic function,
in which these parts can be permanently added to the (textual) system. We can generalize
Aarseths textonic function by shifting its focus from textual structural components to-
wards any component of the systems outputs (regardless of their nature or modalities)
or even of the systems code, thus calling it structural (Carvalhais 2011b, 375).
Aarseth's user functions are very good descriptors of the nature of the human operator's cybernetic interactions with the system. The omnipresence of the interpretative function can perhaps be seen as an extraneous emphasis, especially on media from which verbal structures are so often absent and where high levels of abstraction further remove one from any apparent meaning in the system's emanations. Markku Eskelinen,
for example, warns us of how in computer games, we interpret in order to be able to
configure and move from the beginning to the winning or some other situation, whereas
in ergodic literature we may have to configure in order to be able to interpret (2001), thus
displacing the primacy to the configurative function (Bogost 2006, 108). In spite of this
view, and regardless of its dominance over any of the other functions, interpretation is
nevertheless prevalent.
And interpretation becomes especially important in the experience of artificial aesthetic artifacts because, besides semantic interpretative acts (that may or may not occur depending on the nature of the system's sensorial outputs, of which particular symbols are produced, etc.), there are several aesthetic interpretative acts that need to be performed in order to achieve a poetic understanding of the system. Much as machine and
operator actions fuse, so we may propose that semantic messages expressible in symbols,
[and] determining translatable, logical decisions and aesthetic messages, determining
interior states, [that are ultimately] untranslatable (Moles 1966, 167) may also become
somewhat indistinguishable in the exchanges with the aesthetic artificial artifacts.
At the layers of mechanics and dynamics, systems most often operate in a space of
possibilities that anticipates the differentiation of modalities (Hansen 2004) that happens
at the layer of aesthetics. When confronted with the modal outputs of the transcoded
processes, the human operator tries to deduce meaning from them, not only a message
that may be communicated but also clues to the procedural nature of the outputs, to their
origin and significance. As so often happens in other contexts, humans try to identify a
design stance that explains the purpose of inanimate objects, and intentional stances that
point to the why of the behaviors of animate objects, to their motivations and emotions
(De Landa 1991). Although crossed and combined, and eventually arbitrary (i.e., not trivial)
in their relation to the previous two layers, these outputssymbols and behaviorsare
the only hints, the only points of access the operator has to the internal, coded level, that
can only be fully experienced by way of the external, expressive level. (Aarseth 1997, 40)

When inactive, the program and data of the internal level can of course be studied
and described as objects in their own right but not as ontological equivalents of
their representations at the external level. (Aarseth 1997, 40)

An alternative way of understanding this relation is put forward by Douglas Hofstadter, who explains that although what happens on the lower level is responsible for what
happens on the higher level, it is nonetheless irrelevant to the higher level, which can
blithely ignore the processes on the lower level. (2007, 43)
So, although artificial aesthetic artifacts can still be subjected to traditional aesthetic
analysis at the level of their outputs, the operator needs to develop a more comprehensive
procedural interpretation of the system, in order to understand, decode, and ultimately,
to relate to their mechanics and dynamics layers.
Through procedural intuition (Strickland 2007) and the interaction with the system,
the human operator starts to build hypotheses about the mechanics and dynamics layers of the system. These hypotheses are developed as simulations of the system or of its con-
stituent parts, simulations that are not consciously created but that nevertheless provide
the operator with possible scenarios about the systems outputs or behaviors, about the
causal procedurality of the phenomena she interacts with (Dehaene 2009). This task is
aided by cognitive processes of patternicity (Shermer 2011, 5) that seek patterns amidst
the manifest sensorial clues in an effort to reduce complexity and to make many sym-
bols that have been freshly activated in concert to trigger just one familiar pre-existing
symbol (or a very small set of them). (Hofstadter 2007, 277)
Upon establishing patterns, the operator adds meaning to them, through processes of
agenticity (Shermer 2011), through which she endeavors to operate along the same lines
as the system (Metzinger 2009, 176), by emulating its operations and quite literally, by
simulating it. These mental simulations can be developed concurrently, posing parallel hypotheses that are evaluated against each other (in their capacity to generate valid predictions or approximations to the actual behaviors of the system) and against the system itself (in the frequency with which the hypotheses are validated). The various
simulations can consequently be adjusted and the models evolved in a process where the
system (i.e., the external phenomenon) is used as the fitness function for the selection
of the best models or simulations that are produced by the operator. During the course
of several iterations (and interactions), the operator may therefore be able to develop a
working model of the system, a theory of the processes within it, a theory of the artificial
aesthetic artifact.
This set of simulations allows the operator to try to peer at the system from the point
of view of its designer, from which the system is encoded with prescriptive rules, and
even from the point of view of the system itself, a position better rendered by descriptive
rules (Carvalhais 2012).

[Theory of mind] refers to your ability to attribute intelligent mental beingness
to other people: to understand that your fellow humans behave the way they do
because (you assume) they have thoughts, emotions, ideas, and motivations of
more or less the same kind as you yourself possess. In other words, even though
you cannot actually feel what it is like to be another individual, you use your
theory of mind to automatically project intentions, perceptions, and beliefs into
the minds of others. In so doing you are able to infer their feelings and inten-
tions and to predict and influence their behavior. (Ramachandran 2011, loc. 2632)

As with the development of theories of mind towards humans, animals or other en-
tities, either real or fictional, the development of a theory of an artificial aesthetic arti-
fact may very well stem from an innate, intuitive mental faculty (Ramachandran 2011,
loc. 2632), a capacity that is so far unique to humans (Dehaene 2009, loc. 194).
Although, as postulated by the MDA model, while interfacing with the aesthetics
layer of the system, the operator may be unable to have a clear view of the dynamics
and mechanics layers, through these processes of simulation she effectively tries to re-
verse her view of the system, even if ultimately following models that are incomplete or
altogether erroneous.6 It is regarding the validation of these models that the next step in the exchange is taken.

6. Incomplete or erroneous models can nevertheless produce accurate enough predictions of the outputs or behaviors of a system. So a good simulation is not necessarily just an accurate simulation, rather it is an effective model for the anticipation of the system. (Carvalhais 2011a)

4.
Dramatic Arcs

Traditional narratives are a fertile ground for the development of theories of mind (for characters and events, for narrators or even maybe the imagined authors) and for hypotheses of procedural causality (for mechanical events and natural phenomena). Provided the narrative is internally consistent, the reader or spectator is able
to infer from the known events and information and to speculate about the narrative
developments, anticipating its evolution and resolution. The reader can conjecture about
narrative arcs, stable situations and unbalancing accidents, about events, goals, obsta-
cles, commitments, protagonists and antagonists, eventually reconciling estimations as
the narrative unfolds. Once the narrative is over, any further reading will most likely be aided more by recollection and memorization than by further speculation and simulation, due
to the stability of the narratives in these technologies.
A similar process is developed during the experience of artificial aesthetic artifacts
and, while memory may also serve a role, due to the unpredictable nature of these systems (the indeterminate and unstable nature of the traversal function, according to Aarseth 1994, 61-62) the processes of simulation must be developed even in rereadings,
where the same systems may, for a variety of reasons (including, but not limited to, the
operators interactions) produce very dissimilar outputs.
The operator is constantly led to the production of models and to the resulting building
of expectations to be confronted with the systems. This effort results in a dialectical pull
between confirmation and violation of expectations that leads to a dramatic tension that
characterizes artificial aesthetic artifacts and is a setting for the development of narra-
tives. This is not only the aporia-epiphany pair that was identified by Aarseth in hypertext
literature, at least not in the terms he proposed it, but he was certainly right in that this
pair, although not being a narrative structure in itself, constitutes a more fundamental
layer of human experience, from which narratives are spun. (1997, 92)
Traditional narratives, due in part to their lower (or even absent) process intensity,
relinquish procedural authorship and set the narrative in data to be replayed and perform
it, presenting the reader with a single unified path to traverse. Artificial aesthetic artifacts
make use of procedurality to build unique dramatic arcs from the variations and the space
of possibilities that is opened by their computational nature, from the interactions and
the simulations developed by the operator. These narratives tell the operator's personal story, a story that could not be without her (Aarseth 1997, 4), a story that absolutely de-
pends on her to be shaped and formed.

This leads us to regard Aarseth's perspective variable, which may be so difficult to understand in the context of abstract and non-verbal artifacts, as something that, far from just describing the operator's playing of a strategic role as a character in the world [of the system] (1997, 62), actually inscribes her as inseparable from the work, or from the partic-
ular instance of the work as it is experienced, imagined, theorized and experienced by her.

Acknowledgments: This work is funded by FEDER through the Operational Competitiveness Programme, COMPETE, and by national funds through the Foundation for Science and Technology, FCT, in the scope of project PEst-C/EAT/UI4057/2011 (FCOMP-Ol-0124-FEDER-D22700).

References

Aarseth, Espen J. Nonlinearity and Literary Theory. In Hyper / Text / Theory, edited by
George P. Landow, 51-86. Baltimore, MD: The Johns Hopkins University Press, 1994.
. Cybertext: Perspectives on Ergodic Literature. Baltimore, MD: The Johns Hopkins
University Press, 1997.
Alexander, Christopher. The Nature of Order: An Essay on the Art of Building and the
Nature of the Universe. Book Two: The Process of Creating Life. Berkeley, CA: The
Center for Environmental Structure, 2002. 1980.
Boden, Margaret A., and Ernest A. Edmonds. What Is Generative Art?. Digital
Creativity 20, no. 1 (2009): 2146.
Bogost, Ian. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA:
TheMIT Press, 2006.
. Alien Phenomenology, or What It's Like to Be a Thing. Minneapolis, MN:
University of Minnesota Press, 2012. ebook.
Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media.
Cambridge, MA: The MIT Press, 1999. 2002.
Carvalhais, Miguel. Towards a Model for Artificial Aesthetics: Contributions to the
Study of Creative Practices in Procedural and Computational Systems. Universidade
do Porto, 2010. Thesis.
. The Emergence of Narrative: Procedural Creation of Narrative in Artificial
Aesthetic Artifacts. In Avanca | Cinema. Avanca, 2011a.
. Procedural Taxonomy: An Analytical Model for Artificial Aesthetics. In ISEA
2011, 17th International Symposium on Electronic Art. Istanbul, 2011b.
. Unfolding and Unwinding, a Perspective on Generative Narrative. In ISEA2012
Albuquerque: Machine Wilderness, edited by Andrea Polli, 46-51. Albuquerque,
NM,2012.
Crawford, Chris. Process Intensity. In Journal of Computer Game Design 1, no. 5, 1987.
De Landa, Manuel. War in the Age of Intelligent Machines. New York, NY:
ZoneBooks,1991. 2003.
Dehaene, Stanislas. Reading in the Brain: The Science and Evolution of a Human
Invention. New York, NY: Viking, 2009. ebook.
Eskelinen, Markku. Cybertext Theory: What an English Professor Should Know before
Trying. In Electronic Book Review, 2001.
Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis, MN:
University of Minnesota Press, 2006.
Hansen, Mark B. N. New Philosophy for New Media. Cambridge, MA:
TheMITPress,2004.
Hayles, N. Katherine. My Mother Was a Computer: Digital Subjects and Literary Texts.
Chicago, IL: The University of Chicago Press, 2005. ebook.
Hofstadter, Douglas R. I Am a Strange Loop. Cambridge, MA: Basic Books, 2007.
Hunicke, Robin, Marc LeBlanc, and Robert Zubek. MDA: A Formal Approach to
Game Design and Game Research. In Challenges in Game AI Workshop, Nineteenth
National Conference on Artificial Intelligence. San Jose, CA, 2004.
Lessig, Lawrence. The Future of Ideas: The Fate of the Commons in a Connected World.
New York, NY: Vintage Books, 2001. 2002.
Lévy, Pierre. Collective Intelligence: Mankind's Emerging World in Cyberspace.
Translated by Robert Bononno. Cambridge, MA: Perseus Books, 1997.
McLuhan, Marshall. Understanding Media: The Extensions of Man. New York, NY:
Routledge Classics, 1964. 2006.
Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth of the Self.
New York, NY: Basic Books, 2009.
Moles, Abraham. Information Theory and Esthetic Perception. Translated by Joel E.
Cohen. Urbana, IL: University of Illinois Press, 1966. 1958.
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace.
Cambridge, MA: The MIT Press, 1997.
Ramachandran, V.S. The Tell-Tale Brain: A Neuroscientists Quest for What Makes Us
Human. New York, NY: W. W. Norton & Company, 2011. ebook.
Shermer, Michael. The Believing Brain: From Ghosts and Gods to Politics and
ConspiraciesHow We Construct Beliefs and Reinforce Them as Truths. New York,
NY: Times Books, 2011. ebook.
Strickland, Stephanie. Quantum Poetics: Six Thoughts. In Media Poetry: An
International Anthology, edited by Eduardo Kac, 25-44. Bristol: Intellect, 2007.

Space and Time in Ergodic Works

Sofia Figueiredo
sofia.figueiredo@gmail.com
Escola Superior de Educao de Viseu

Keywords: Ergodic, Interactivity, New Media, Space, Time.

Abstract: The following paper discusses dimensions of space and time in interactive er-
godic works. It starts by presenting four examples of ergodic works, describing how the
dimensions of time and space are created and how they are experienced by users. These
analyses use concepts and theories developed by Markku Eskelinen, Janet Murray, Lev
Manovich and Espen Aarseth, in an attempt to understand space and time in relation
to ergodicity.


1.
Introduction

Interactive ergodic works exist within a logic of completion by users' actions (as defined by Aarseth in 1997). Without the users' actions, which generate several possible material expressions, an ergodic work will not be fully realized. Since the work partially evolves in response to users' actions, it seems clear that the dimensions of time and space need to be readdressed in ways that go beyond the usual categories of print-based or film-based narratology. Because of its concern with the ergodic nature of certain modes of interac-
tivity, new media theory offers concepts that are useful for thinking about these issues.
It would seem, at first glance, that the dimension of time is the one that undergoes
the most significant transformations. For instance, in the case of hyperfiction, narrative
discourse ceases to exist in a single order and allows for different paths, different points
of access to the story, and, necessarily, different meanings. Of course, even in traditional
narrative, the relationship between the time frame in which the events occurred and the
timeline of their narration cannot (and, most of the time, does not) directly match, as
we can see through the narratological categories of analepsis and prolepsis. The biggest
problem that arises in ergodic works is the relative difficulty we have in classifying more
or less random relations between the time frame of events and the timeline of narration
that result from users actions (Eskelinen 2012).
The way time is produced and experienced in interactive media is so strongly altered
that several scholars have suggested, more or less emphatically, that the defining char-
acteristic of interactive media is spatiality, instead of twentieth century's mainstream media's (cinema) temporality (one of them being Aarseth, in his 2001 article presented
here as a reference). Lev Manovich, on the other hand, models his analysis of new media
on early cinema and on forms of montage, which see the database as a source of tem-
poral relations (Manovich 2002). Janet Murray in turn reads hypertextuality in terms of
navigational structures that can be understood spatially (Murray 2011). In this paper, con-
cepts developed by Markku Eskelinen, Janet Murray, Lev Manovich and Espen Aarseth for
thinking about time and space in interactive media will be applied to four ergodic works.

2.
Time relations in ergodic works: Camille Utterback's Liquid Time and Markku Eskelinen's theorization of time in ergodic works

Liquid Time, a video installation by Camille Utterback (Utterback 2002), has been repeatedly analyzed in theoretical texts regarding interactive media [Janet Murray being one of them, in the text given as a reference here: Murray 2011]. Given its declared relationship with time, it was chosen as my first example in this paper. I will try to relate it to Markku Eskelinen's analysis of time relations in interactive narrative works (Eskelinen
2012), and find out if some of the points he makes are present in Liquid Time.
Eskelinen refers to classic narratological categories about time relationships between
narration and story commonly accepted by most scholars, and then expands them so
that we can use them to analyze not only traditional narratives, but interactive ergodic
narratives as well. Some categories survive unscathed the introduction of interactivity,
but most are changed in one way or another. Basing his analysis on Genette's approach
to the subject, Eskelinen considers time through the categories of order, duration, and
repetition. He introduces two other possible time dimensions that can be verified with
interactive media: system time and reading time. He expands these categories, having in
mind the differences introduced by interactivity; for example, about order he says that
while being the only category subject to changes in classic hypertext fiction, order is some-
times overstressed by scholars as the main innovation of interactive fiction. Nonetheless,
the order of narrative elements is certainly altered with users' actions, maybe not in such
a novel or random fashion as some scholars would have led us to believe. Analepsis and
prolepsis exist in both oral and printed narrative, and chronological sequence is not al-
ways the main criterion for ordering events in narrative time. However it is possible to
identify some changes in this category when subject to interaction, namely, the fact that
analepsis and prolepsis can be absolute or relative, in relation to the whole or parts of the
narrative: if all the possible orders have an unchangeable element, then the anachrony
is absolute; if only some are repeated, then the anachrony is relative.
Eskelinen goes on to question the frequency of narrative elements, coming to the
conclusion that not many differences distinguish traditional from interactive media, as
traditional media categorization (by Genette; Eskelinen 2012, 146) already contemplates the possibilities of narrating once or several times either events that occurred once, or several events, in multiple combinations, thus leaving only the necessity to consider variability of frequency; and duration/speed/rhythm of the narrative, which he develops around the concepts of narrative time and screen time (as cinema's screen time, having inherited this view from Bordwell) (Eskelinen 2012, 150). It would be redundant to extensively describe Eskelinen's approach to time relationships in narratives here. The interest of his revision of narratological categories is to see how they apply to the example presented here, Camille Utterback's Liquid Time.
Liquid Time is described by its author as an exploration of how the concept of point
of view is predicated on embodied existence. More interesting, for the case at hand, is how this concept is put in practice: In the Liquid Time Series installation, a participant's physical motion in the installation space fragments time in a pre-recorded video clip. (Utterback 2002). The temporal dimension of this piece is visually explored and is manipulable by the visitors, its users. We see, in a single work (a video), multiple timelines and, consequently, multiple relationships between narration time, story time, and screen time (to use Eskelinen's terms) simultaneously. Hence, according to Eskelinen's categories,
we can classify time relationships in Liquid Time as follows:
As regards order, since Liquid Time doesn't have a fixed order of narration of events, we can say it presents the pre-recorded video clips in a random fashion (every time, a different order is presented); it is also non-linear, as the events are not presented chronologically or consequentially. As the video is altered by the user's proximity or distance, we can only guess that generated analepsis and prolepsis are relative, since they occur once in any possible timeline. Finally, it is possible that, since the video shows us spaces in New York, perhaps there is a different organizational principle, such as space, in which case we are talking about a syllepsis (multiple order of events, non-chronological), in Genette's words.
Moving on to frequency: the frequency of repetitions depends entirely on the user's random actions; plus, most of the time we will be talking about resemblance, and not complete identity, between repeated sequences. We can thus say that Liquid Time is indeterminate in terms of frequency of repetitions.
Lastly, as we consider the duration and speed of Liquid Time, as well as its possible relation with a pre-defined system time and a viewing time, it is possible to come to the conclusion that nothing is rigid; the work's duration and speed are a reflection of the user's actions, and Liquid Time is accessible for as long as the user wants. Whereas the time of the events captured in video and the time of each video sequence are fixed (and I do not, at the moment, know how they relate), the viewing time is not: each user, in each viewing, changes the viewed sequence and the relations between the time of the video capture and the time of its fruition, a reflection of the importance given by Utterback to singular points of view. Each time a viewer affects, with his or her actions, the sequence he or she is watching, a new instance of Liquid Time is created, a personal and unique one: in this lies the reason for its existence, as it is, undeniably, an ergodic work.
As a conclusion, I will propose that the time in Liquid Time is, indeed, liquid; that the
analysis of its possible facets, as proposed by Eskelinen, and the way they react to each
other and to the user, give strength to this idea of fluidity: Liquid Time is liquid not only
superficially, but in all of its aspects. Time relations are viscous, never solidified; Liquid
Time observes, in depth, and as much as I can understand, this liquidity in all categories
defined by Eskelinen.

3.
Space as an interaction design strategy: Simon Penny's Fugitive and Janet Murray's approach to space in new media

As time gives way to space as a crucial perceptual experience in new media environments,
we must think about the ways in which space is organized and if and how it changes
the users experience. Janet Murray (Murray 2011) talks about space as an interaction
design strategy, challenging common notions associated with the linearity of the twentieth century's mainstream media, cinema. The possibility of translating time into space and space into time further complicates our interface-mediated experiences in digital
media environments.
The spatial affordances of the digital medium can be used for designing different
kinds of interaction, including interactions in ergodic works. Murray goes on to describe
several strategies for the design of interactivity, comparing them with their analog coun-
terparts, such as containers (lists and tables, the library model), landscapes and maps.
Murray then discusses the nature of virtual space in the mind of the user, its relation-
ship with discrete places, and the ever more present ubiquitous digital devices that force us to adjust to and locate ourselves within multiple spaces and the superimposed layers of information that they represent (augmented reality). Most importantly, Murray questions the ways in which virtual space expands or contracts real-life spaces: as we add layers of virtual spaces onto the real spaces, she wonders if, for example, gestural
interfaces for video games are allowing us to think of the space between interactor and
the device as a site for inscribing commands. This way of thinking is especially relevant
when we think in terms of works such as Fugitive.
Fugitive emphatically describes itself as a non-narrative, opposing narratives to
interactive media as mutually exclusive categories. Simon Penny mentions cinema as a
sort of antithesis of Fugitive:

Fugitive and cinema


Fugitive, while screenal, is emphatically not cinema. Like all interactive media,
in Fugitive there is no pregiven narrative. Rather a unique experience unfolds for
the user as a result of her interaction with the system.
Fugitive undoes cinema
If the user moves circumferentially, the scene that is triggered is a pan. As
long as she circles, the image also circles, unfolding successive frames of the
pan in successive positions around the wall. If the user moves radially, the shot
triggered is a zoom, corresponding to the position in the pan. Fugitive, in a sense,
undoes cinema, since the image is aligned, (relatively) to the original position of
the camera. As the user moves toward the image, the image zooms. The system
can be understood as a kinesthetic video editor. Each user makes a different movie,
depending on her behavior. (Penny n.d.)

(One could argue that cinema can be interactive as well, and that statements such as "Fugitive, while screenal, is emphatically not cinema. Like all interactive media, in Fugitive there is no pregiven narrative" are perhaps overlooking a few of those cases; admittedly, not too many.)
The point we will try to focus on with this example is that, in its attempt to avoid being
cinema, Fugitive uses space and spatial means for interaction as its main characteristic.
Fugitive is, in short, a video projection that, inside the limits of its cylindrical screen, runs
away from the user as he/she tries to approach it in more or less frantic ways, which are
mirrored by the systems faster or slower movements. In addition to the movements of
the projection, the images that are projected are also triggered in response to the user's mo-
tion: when the user runs faster, the system chooses a video with a higher frame-rate to
project; if the user moves circularly, the video will have a camera movement that echoes
the users movement. These decisions echo, in our view, Murrays questioning of the
space between users and system and the possibilities that it brings to interface designers.
Penny gives some information about the system behind Fugitive and the philosophy
that originated the project. Interestingly, he states that Fugitive reacts not to the instan-
taneous position of users but to the temporal dynamic of their ongoing movements
(Penny n.d.). This aims to capture the user's mood, a task that would not be possible if the only available data was the instantaneous position of the interactor. Fugitive attempts to interpret gross bodily movement as an indicator of mood and then respond to it, in an instantaneous (as much as possible) fashion, so as to reaffirm to the user that the piece is interactive and that his or her (the user's) actions have a response. Murray also stresses the need to find transparent and immediately satisfactory ways to give agency
to the users, agency being the capacity to change the system and its responses.
Fugitive maximizes the space it is given, attempting to convey multiple messages (of the body as a presence in interaction, of the ways in which to interact with the piece, and so on) through interaction in a space, originating responses in different ways of travelling (visually, through the eyes of a camera) through a given space, and as such is, in our view, a valid example to discuss, if not the categories presented by Murray (only
the landscape category is of some use to the analysis of the images presented in Fugitive),
at least the spirit of her questioning and the broad strokes of her approach to designing
interaction strategies that fully explore space. In Fugitive, ergodic intervention results in
multiple outcomes that translate the kinetic and spatial relation of user to the cinematic
representation of space.

4.
Space and database aesthetics: Jonathan Harris' We Feel Fine and Lev Manovich's concept of database

We Feel Fine, a web-based installation by Jonathan Harris in collaboration with Sep Kamvar, attempts to pick up every mention (on the Internet, mainly from blogs, as Harris explains in his TED Talks (Harris 2007, 2008)) of the word feel or feeling, and then grabs the
whole phrase and displays it, trying to make visible in one or another form of organized
display the enormous amount of feelings floating around the world of personal expres-
sion on the Internet.
In this section of the article I will try to discuss Lev Manovich's emphasis on the database as a prime medium of expression (and, among others, artistic expression) of our computerized society, as he calls it (Manovich 2002), and cross it with the concepts
behind We Feel Fine, in an attempt to better understand its concepts and the reasons
behind its existence.
Manovich starts by naming the database form as the main aesthetic form of new
media. He compares it to cinema, a (mostly) narrative form that was mainstream in
the twentieth century, and establishes some parallels and contrasts between the way
that database (new media) and narrative (cinema) function in their ways of conveying meaning and organizing their constitutive elements. Database corresponds to the result of
a digitizing craze (Manovich 2002, 198) and is described as a collection of images, texts
and other data records (Manovich 2002, 195). On the other hand, narrative is described
as only one of the ways though which we can access these collected elements.
Of course not all new media objects are databases: games, for example, usually con-
tain narrative elements, and their database is subject to an algorithm, the other half, Manovich tells us, of the ontology of the world according to a computer. The web is, in Manovich's view, the place where the database has developed in its purest form: a gigan-
tic and always changing data corpus, something that operates under an anti-narrative
logic (Manovich 2002, 196).
Interestingly, Manovich goes on to analyze some of the films of Greenaway and Vertov, calling their works databases in film form, and ending his text by considering that Vertov, in particular, has done something that new media designers still have to learn: how to merge database and narrative into a new form. (Manovich 2002, 212). He has done this by filming a database, or presenting us (the viewers) with several shots, and several techniques, in a non-narrative way, in his Man with a Movie Camera. I will argue that Jonathan Harris has done the opposite movement, presenting narratives, or narrative pieces, in a database form, thus possibly having learned how to combine narrative and database in
an aesthetic and artistically meaningful way, and not resorting to a known form, such
as a film, but using new media specificity.

We Feel Fine, as described earlier, picks up specific sentences from every web user's personal narrative. These specific sentences start, of course, with the statement I feel or
similar. We Feel Fine then goes on organizing, creating statistics, rearranging, or even
animating particles with data that is shown to us as we choose. There are several ways
of observing how people are feeling in a given moment: all of them are at the very least
dependent on spatial representation, which is, for Manovich, the only way to create a
pure database (Manovich 2002, 209).
We Feel Fine achieves yet another accomplishment: it manages to present us with something Manovich claims is our expectation of computer-based objects (while he refers specifically to computer narratives, I will stretch this concept to any computer-made object that in some way inherits analog behaviors and characteristics, such as an artwork like We Feel Fine). Manovich says that, while we reject the modernist concept of medium-specificity, we still expect computer-made objects to bring new dimensions to traditional forms. We Feel Fine, in my opinion, does just that: it explores computer conventions, ways of creating meaning and form, and uses them to create new shapes from the frequency, tone, and other characteristics of World Wide Web users' feelings and how they are expressed through it.
We Feel Fine spatially organizes data created by internet users, for us to read and
interpret. Visual spaces are created each time we, as users, make choices or refresh the
system. As internet users we have another interesting possibility: we can create content
that will be captured by We Feel Fine, thus creating a feedback loop. We Feel Fine could
not live without the event of ubiquitous interactivity. Either way, ergodicity is required
to bring these informational spaces to life: as we chase the tiny circle-shaped feelings
through the screen and generate different outputs of this feelings gatherer, we create,
through our actions, new visual instances of a giant database: the internet.

5.
Space in video games: Mary Flanagan's [Domestic] and Espen Aarseth's discussion of space in video games

As a final example, though obviously not the last possible analysis, I would like to ap-
proach space and the spatial dimension in new media, twisting Aarseth's words about space in video games to include artworks that function in a video game structure. There are many such examples (in fact, growing in numbers) but, for the current research,
[Domestic], a piece by Mary Flanagan from 2003 (Flanagan 2003), seemed like a perfect
choice, given that the author appropriates a video game space and redefines its rules for
her own purposes. [Domestic] aims to recreate a childhood memory in a way that engages
the spectator/user of the piece. The depicted event is not recreated realistically: instead,
we find ourselves navigating through corridors that present us with the inner feelings and
thoughts of Flanagan as a child, experiencing the traumatic circumstance of a house fire.
[Domestic] is built over the game engine of Unreal Tournament, a multiuser first per-
son shooter, and allows us to use certain tools that are adaptations of UT's weapons: books, literature, in the author's words, as a way to escape the horrors, an escapist tool that solves problems by erasing them from the child's mind, and our game/artwork space.
In Allegories of Space: The Question of Spatiality in Computer Games (Aarseth 2001)
Aarseth considers the possibility of classifying computer games by the way they explore
the spatial dimension. Space is, in Aarseth's words, the defining element in computer games (Aarseth 2001, 154). This idea, in these words or similar ones, is repeated several times in the article. Aarseth analyses other possible defining dimensions or characteristics, such as time, and comes to the conclusion that most, if not all, computer games,
revolve around spatial exploration in one way or the other. The way such exploration is
implemented varies and can be ordered in classes, given some characteristics: for ex-
ample, he defines outdoors and indoors games as, respectively, games that allow free
movement in contrast to others, which are discontinuous, labyrinthine, full of carefully
constructed obstacles. Other distinctions can be made between the player's puppet and the environment, or between games that allow the player to influence the game world and games that don't.
Aarseth then discusses the nature of virtual and computer game spaces. Combining two extremes of virtual and real space theories, Aarseth regards space in computer games as both a realistic and a symbolic representation, since it is, in the end, a reduction of real
space to a symbolic form and a set of rules.
[Domestic], living on top of a computer game's structure, can be classified and analyzed under Aarseth's system. Being both a semi-indoors game and multiuser, Unreal
Tournament presents some of the characteristics identified by Aarseth in these types of
games: we have labyrinths to cross, but mostly we are playing against other humans;
the landscape is not symmetrical but its usage is open to both opposing factions. Having
inherited the spatial structure of UT, [Domestic] happens in a space dominated by dark
corridors, niches we cannot see from a distance, and it is required of us that we clear
some obstacles (the traumatic parts of the event) by resorting to escapist tools. It seems, thus, that we have here the typical indoors topology, one that is mediated by obstacles we must overcome: a symbolic construction of life and what it means to live, to
overcome difficulties and to reach a higher level of comfort. While walking through
[Domestic] we build new instances of these memories: for each user, a new game becomes materialized, new sequences, different consequences, as the algorithm responds to the player's actions.
More importantly, though, [Domestic] is a space constructed by human dynamics: it is the spatial representation of a memory; a created space, inhabited by symbolism, that would not exist if such an experience had not happened to Flanagan. [Domestic] is,
in the end, a reductive operation leading to a representation of space that is not in itself
spatial, but symbolic and rulebased (Aarseth 2001, 163).

6.
Conclusions

After applying the selected descriptive models to the analysis of space and time in ergodic
works, I came to the following conclusions:
Firstly, and obviously, there are significant differences in the ways we can approach
the dimensions of time and space in ergodic and non ergodic works. These differences
have been described and classified in multiple ways. One major highlighted differ-
ence is the way timelines mix and create new relationships after the user inputs his/
her data. I have confirmed that Eskelinen's classification of new categories for these relationships is operational when applied to Utterback's Liquid Time, and can be
expanded to fit other artworks dealing with the ergodic production of the experience
of time. It would be interesting if, moving beyond the superficial hype surrounding the rearrangement of time in interactive objects, we, as scholars considering new media, could begin to see how time and its multiple dimensions are indeed put to use in creating new facets in our knowledge. Liquid Time conveys a message (or a plurality of messages) that can be further expanded when analyzed under Eskelinen's work.
As we move on from the dimension of time to the dimension of space, I have tried
to see if some of Murray's considerations about this factor in interaction design are similarly relevant for describing Fugitive. Murray's work attempts to classify every kind of object, focusing most of its effort on prosaic new media objects, such as web
pages, applications, and others, but its general meaning can be applied to artworks
too. Fugitive allows us to test some of the concepts Murray presents, such as the pos-
sibility of actions affecting both real and virtual spaces (considering the real space of
the installation and the represented cinematic space), and the space between them.
The marked gap between these two spaces allows the user to pause and take into con-
sideration his or her relation to represented space; to space in cinema as well as to
space in art; and to the space given to him or her for interaction in this particular work.
Manovich's analysis of databases and the database culture we presently live in seems to not only fit, but to have spawned the presented artwork, Harris' We Feel Fine. Artistic production exploring the rich material originated by the collective use of the World Wide Web is more likely than not expanding the possibilities of the database form and aesthetics. Manovich's exploration of the concept is probably going to be more and more pertinent in the art world as well as in the broader new media world, and it would be interesting, in future endeavors, to continue to explore Manovich's text in confrontation with new media art pieces.
Finally, in the more specific field of computer games, or video games in general, the
dimension of space is one of great importance; Aarseth argues that it is the defining
dimension of video games. [Domestic], built over a computer game structure, is one
of the possible examples of spatial exploration in ways that convey meanings and it
is absolutely true that without the spatial dimension the piece would be a completely
different experience.
Ergodic interactions affect our experience of space and time in new media objects in
ways that differ from work to work. Although space seems to have been explored more
extensively and meaningfully than time, I believe that the ergodic production of time
needs to be addressed in greater detail. Critical and artistic exploration of the interactive
dimension of time and space can open up new ways to create digital works, which don't
simply go back to conventional formats inherited from past endeavors.

Bibliographic References

Aarseth, Espen. Allegories of Space. The Question of Spatiality in Computer Games.
In Cybertext Yearbook 2000, edited by Markku Eskelinen and Raine Koskimaa,
152-171. Saarijärvi: The Research Centre for Contemporary Culture, University
of Jyväskylä, 2001.
Aarseth, Espen J. Cybertext: Perspectives on Ergodic Literature. Baltimore, MD: The
Johns Hopkins University Press, 1997.
Eskelinen, Markku. Cybertext poetics: the critical landscape of new media literary
theory. New York, NY: Continuum, 2012.
Jonathan Harris: The Web as Art. TED Talks, 2008. http://www.ted.com/talks/jonathan_harris_collects_stories.html.
Jonathan Harris: The Web's Secret Stories. TED Talks, 2007. http://www.ted.com/talks/jonathan_harris_tells_the_web_s_secret_stories.html.
Manovich, Lev. The Language of New Media. New Ed. Cambridge, MA: MIT Press, 2002.
Murray, Janet H. Inventing the Medium: Principles of Interaction Design as a Cultural
Practice. Cambridge, MA: The MIT Press, 2011. Kindle edition.
Penny, Simon. Fugitive. Simon Penny. Accessed November 1, 2012. http://simonpenny.
net/works/fugitive.html.

Referenced Artistic Works

Flanagan, Mary. [Domestic]. Mary Flanagan. 2003. Accessed November 1, 2012.
http://www.maryflanagan.com/domestic.
Harris, Jonathan. We Feel Fine. Jonathan Harris. 2006. Accessed November 1, 2012.
http://www.number27.org/wefeelfine.html.
Harris, Jonathan, and Kamvar, Sep. We Feel Fine. We Feel Fine. 2006. Accessed
November 1, 2012. http://www.wefeelfine.org/.
Penny, Simon. Fugitive. Simon Penny. Accessed November 1, 2012.
http://simonpenny.net/works/fugitive.html.
Utterback, Camille. Liquid Time Series. Camille Utterback. 2000.
http://camilleutterback.com/projects/liquid-time-series/.

Representation and Mimesis in Generative Art:
Creating Fifty Sisters

Jon McCormack
Jon.McCormack@monash.edu
Centre for Electronic Media Art, Monash University, Caulfield East, Australia

Keywords: Generative Art, Representation, Mimesis, Artificial Life.

Abstract: Fifty Sisters is a generative artwork commissioned for the Ars Electronica
Museum in Linz. The work consists of fifty 1m x 1m images of computer-synthesized
plant-forms, algorithmically grown from computer code using artificial evolution and
generative grammars. Each plant-like form is derived from the primitive graphic ele-
ments of oil company logos. The title of the work refers to the original Seven Sisters, a cartel of seven oil companies that dominated the global petrochemical industry and Middle East oil production from the mid-1940s until the oil crisis of the 1970s.
In this paper I discuss the issue of representation in generative art and how dialogues in
mimesis inform the production of a generative artwork, using Fifty Sisters as an example.
I also provide information on how these concepts translate into the technical and how
issues of representation necessarily pervade all computer-based generative art.

1.
Introduction

In a recent paper, the author and several colleagues proposed what we considered to be
the ten most important questions for generative art (McCormack et al. 2012). The fifth
question on our list asked the following in relation to computational generative art:

In what sense is generative art representational, and what is it representing?

In this paper, I will expand on this question and its implications. From the outset I
should make clear that my topic relates to computational generative art. While generative
art has many non-computational modes, they are not specifically addressed in this paper.
Rather than discussing theoretical ideas, I will describe a recently completed generative
art work, Fifty Sisters, and look at how representational issues come into play in almost
every aspect of creating the work: conceptualization, implementation, and realization.
Representation and mimesis are some of the oldest issues in art, dating back at least to
the ancient Greek philosophers (Scruton 2009). The idea of replicating naturalistic effects
in painting came to the fore in renaissance aesthetics, where painters were concerned
with a truthful representation of what they saw. Roughly corresponding with the math-
ematical formalization of perspective projections and with progressive advances in paint
technologies (Ball 2002), artists' skills developed in portraying the real in art. However, any art acting as a mirror of nature (as was famously advocated by Leonardo) still requires interpretation and ordering from the artist. Image reproduction technologies forever changed the idea of capturing the real in art, inviting the possibility for artists to focus on other kinds of truths. By the time Hal Foster published Return of the Real (Foster 1996) mimesis had come full circle.1 In more recent times, representation and semiotics arguably have been overtaken by other concerns, such as art as social exchange and dialogues concerning relational aesthetics. Representation and mimesis are old and well established discourses in art. Generative art reopens these dialogues in ways that other art forms cannot, because generative art brings something new to art: the idea of representing process.

1. In fact a circle and a half.
In generative art, as with other forms of art, we should expect a range of representa-
tional styles, e.g. visual art ranges from abstract, non-objective mark making and vast swathes of negative space, to highly figurative and photorealistic imagery. But unlike visual
art, generative art has not been so extensively analyzed in terms of how it deals with
mimesis and representation. Here we seek to begin to address that deficiency.

1.1.
Computers and Representation
It is almost impossible to write a computer program without, at least implicitly, considering representation. Digital computers use collections of bits (electrical signals or
states standing for 1s and 0s, ON and OFF, etc.) to encode and represent data and instruc-
tions. At the level of software, programs generally represent things using atomic variables
(integers, floating-point numbers, Booleans, characters, etc.) or compound collections of
these variables (data structures, arrays, strings, objects), which may easily include other
compound collections.

It is important to distinguish between a variable and its interpretation, i.e. its se-
mantics. I can give a variable any semantic interpretation I choose: it could represent
happiness, my bank balance, or the text of this paper for example. The important point is
that the programmer, not the computer, confers this meaning. The computer hardware
does impose practical limitations on the kinds of interpretations that are possible. If I
represent the concept of happiness as an integer, then the machine will manipulate it as
an integer, not as an emotional state of being. I can interpret a variable called happiness
as the degree of happiness of an individual, but this representation is limited to the 32 or
64 bits typically used to represent integers.
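Purely as an illustration, a minimal C sketch of this point might look like the following, where the variable name and values are invented, and the reading of the integer as happiness is supplied entirely by the programmer:

/* Hypothetical sketch: the machine stores and manipulates a fixed-width
   integer; reading it as "happiness" is an interpretation we supply. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t happiness = 7;      /* to the machine: 32 bits, nothing more    */
    happiness = happiness + 1;  /* integer arithmetic, not "feeling better" */
    printf("%d\n", happiness);  /* prints 8; the emotional reading is ours  */
    return 0;
}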
At the base level, computers are symbol-processing machines: they transform patterns of bits that represent symbols. All symbols are subject to interpretation, either by
the programmer, the user of the program, or the machine itself. As an added complica-
tion, symbols are often not interpreted directly (as bit patterns), but are transformed by
some process.
As an example: two variables may represent the Cartesian coordinates of the center of
a circle that is displayed graphically on a computer screen. As the variables are changed
the circle moves on the screen. An additional variable represents the circles radius. We
speak of a circle moving on the screen, but, like the world of the Matrix, there is no cir-
cle and it is not moving. Discrete patterns of bits are changing at regular intervals. We
interpret the complex process of changing individual pixels seamlessly as a moving circle.
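To make the circle example concrete, a toy C sketch might read as follows; draw_circle() is an invented placeholder for an actual rendering routine:

/* Three numbers and a loop that rewrites them each frame. Nothing in this
   code "is" a circle or "moves"; we read the changing values that way. */
#include <stdio.h>

typedef struct { double x, y, radius; } Circle;

static void draw_circle(const Circle *c)      /* stand-in for a renderer */
{
    printf("frame: centre=(%.1f, %.1f) r=%.1f\n", c->x, c->y, c->radius);
}

int main(void)
{
    Circle c = { 0.0, 0.0, 10.0 };
    for (int frame = 0; frame < 5; frame++) {
        c.x += 1.5;                 /* discrete changes to bit patterns...   */
        c.y += 0.5;
        draw_circle(&c);            /* ...that we read as a circle in motion */
    }
    return 0;
}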
Perhaps this all seems obvious and even slightly trivial. Yet it is common for peo-
ple working with computers to forget about these representational gaps. In observing a
computer simulation of ant behavior, we might speak of ants foraging for food, but this
interpretation (as it would be for a painting) is semantically loaded. They are only ants
in as much as they homomorphically model an ant. Foraging is a convenient anthropo-
morphic label we give to a series of discrete changes read as position, movement, behavior
and so on. We use this shorthand because it is both convenient and necessary: speaking
only in terms of bit patterns is not practical or enlightening (Dennett 1991), despite this
being the basis of digital computer representations.
As Nietzsche reminds us, writing on a typewriter is different than writing with a pen
(Kittler 1990). The tool affects our way of thinking. With the computer it is even more
profound, because to translate ideas into code we must think algorithmically, which in
turn influences how we think about the world and act in it.
In making a generative computer artwork, representation exists at many levels, not
just the bit-pattern level of variables, data structures or screen graphics. Rather than
expand on this in abstraction, let us look at a concrete example in generative art to see
how these issues come into play.

2.
Fifty Sisters

Fifty Sisters is a generative artwork commissioned for the Ars Electronica museum in Linz, Austria.2 The work consists of fifty 1m x 1m digital images of computer-synthesized plant-forms, arranged in a 5 x 10 grid in the museum foyer. Each image is algorithmically grown from computer code using artificial evolution and generative developmental grammars. The form is derived from the primitive graphic elements of oil company logos.

2. www.aec.at

The title of the work refers to the original Seven Sisters, a cartel of seven oil companies that dominated the global petrochemical industry and Middle East oil production from the mid-1940s until the oil crisis of the 1970s. Fossil fuels began as plants that over
millions of years were transformed by geological processes into the coal and oil that
currently powers modern civilization. The images remind the viewer that the basis of
an oil companys financial success is derived from plants and natural processes that op-
erated over vast geological timescales. With peek oil expected to be reached this century
(if not already), we are expending this non-renewable resource in the relative blink of
an eye. Two example images from Fifty Sisters are shown in Figure 1. More information
on the work and its motivations can be found at http://jonmccormack.info/~jonmc/sa/
artworks/fifty-sisters/.
The process to create each form involved a number of steps. Firstly, an oil company logo was chosen and 2D vector art created.3 From the 2D vector art, the basic graphic elements were separated manually and then converted to 3D geometric primitives. To create each plant form custom software was developed by the artist. Technical details can be found in (McCormack 2005). In basic terms, the software simulates the growth and development of the form from a series of developmental rules, metaphorically similar to the way DNA encodes the developmental plans of biological organisms.

3. Wikipedia has vector versions of many oil company logos.

Fig. 1. Synthesized plant forms based on the BP logo (left) and ESSO logo (right).

Rules consist of any number of developmental symbols that represent individual or col-
lective elements of the growing form. Symbols include continuous data, such as size, age,
chemical concentrations, etc., that change over the lifetime of the developmental simu-
lation. If certain conditions are met (e.g. size becomes greater than some fixed value), the
symbol may subdivide, be replaced by another symbol, or die. This method is somewhat
analogous to cell division in biology, but with far greater abstraction and simplification.
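The actual implementation is documented in (McCormack 2005); purely as a schematic illustration of this kind of conditional rewriting, a toy sketch in C might look like the following, with all symbol names and thresholds invented:

/* Schematic sketch only (not the artist's code): each symbol carries
   continuous state (here just size); when a condition is met it
   subdivides, persists, or dies. */
#include <stdio.h>

#define MAX_SYMBOLS 256

typedef struct { char type; double size; } Symbol;

static int develop_step(const Symbol *in, int n, Symbol *out)
{
    int m = 0;
    for (int i = 0; i < n && m < MAX_SYMBOLS - 1; i++) {
        Symbol s = in[i];
        s.size *= 1.3;                          /* continuous growth        */
        if (s.type == 'A' && s.size > 2.0) {    /* condition met: subdivide */
            out[m++] = (Symbol){ 'A', s.size * 0.5 };
            out[m++] = (Symbol){ 'B', s.size * 0.5 };
        } else if (s.type == 'B' && s.size > 4.0) {
            /* condition met: the symbol dies (emit nothing) */
        } else {
            out[m++] = s;                       /* otherwise: persist       */
        }
    }
    return m;
}

int main(void)
{
    Symbol gen[2][MAX_SYMBOLS];
    int n = 1, cur = 0;
    gen[cur][0] = (Symbol){ 'A', 1.0 };         /* axiom: a single cell     */
    for (int step = 1; step <= 6; step++) {
        n = develop_step(gen[cur], n, gen[1 - cur]);
        cur = 1 - cur;
        printf("step %d: %d symbols\n", step, n);
    }
    return 0;
}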
As the developmental rules are a machine-representable code, they can be subject
to genetic manipulation, including mixing of rules from other forms (a kind of gene
splicing) or guided evolution using a variant of the Interactive Genetic Algorithm or IGA
(McCormack 2004). The terminal symbols of any rule can be interpreted as instructions
that encode the geometric construction of form. These symbols include instantiations of

the geometric elements of the oil company logos. Thus when the form is constructed, its
geometry includes geometric elements of the original logo. The final form depends on
how the rules have evolved and mutated. The results are often surprising; in some cases
the original logoform is clearly visible, in others it is almost impossible to recognize as
it has become highly abstracted. Figure 2 shows an example form (using elements from
the Shell logo) and the rules, or digital DNA used to generate it.

Fig. 2. Synthesized plant form (left) and the developmental rules from which it was generated (right).

The forms generated by this developmental/genetic process are output as 3D geometric models. Most plant forms are easily expressed in only a page or two of information (a few hundred bytes), yet they generate geometric models many orders of magnitude greater (~10⁷–10⁹ bytes). The models are read into a 3D renderer, which renders an image using photorealistic rendering techniques.

2.1.
Representation in Fifty Sisters
Fifty Sisters is a useful example of representation in generative art, because it deals with
representation and mimesis at multiple levels. As images, each plant form has several
representations: that of a real plant, a computer graphic, and a corporate logotype. The
generative code (digital DNA) from which each image is generated (Figure 2) is also ex-
hibited in a separate touch screen application that forms part of the exhibition of the
work. This allows the viewer to see a different representation of the form: as code that
through a process mimetic to biology generates that form.
Beyond the visual and textual representations, there is an additional layer of rep-
resentation to contend with, that of the generative process. What is represented in this
process? The process represents another process: biological development and evolution.
The artist-developed computer program simulates and abstracts the process of biological
development and evolution. It is the personal expression of a biological process in software.
So in this sense the software program, when run, represents these natural processes in
a somewhat similar way to that in which a landscape painting represents a landscape.

The difference, of course, is that the viewer of the work cannot see or otherwise experi-
ence this process directly.
This idea of one dynamic process representing another is new to art, and is what best distinguishes this kind of generative art from other practices. Certainly process was often of interest in modern art. One only has to think of Sol LeWitt, Cornelius Cardew or Jackson Pollock, for example. But in these cases the process of generating the art was not representing another process: LeWitt's drawing instructions were not representing anything other than instructions to draw. A computer process being mimetic to another process is different, because it involves choices about sign, signifier and what is signified. Moreover, the complexity and unpredictability of a computer process (viz. emergence (McCormack & Dorin 2001)) introduces additional properties not directly represented in the generative process itself.

2.2.
Mimesis in Fifty Sisters
As alluded to in the introduction, mirroring nature involves interpretation and ordering by the artist. As simulacra or simulation, a computer process is not the same as what it seeks to mirror. This is well known in the simulation sciences, where formal methods are used to verify and validate simulations to models, and models to reality. The experimenter selects those aspects to model and those to ignore. Naturally, aspects or mechanisms that the experimenter is unaware of cannot be in the model, although through experimentation she or he may become aware of them, and then subsequently incorporate them into the model. The aspects of a phenomenon or system that are modeled are subject to varying degrees of abstraction necessary for them to be practically simulated.
Art allows a different license, where the most interesting works can abstract from
the world of the imagination rather than the world of the real. In Fifty Sisters mimesis
plays many roles. The plants themselves are in some way mimetic to real plants, yet no
such forms could ever exist in reality. This is not really surprising; such issues have been
endlessly explored in painting.
Things become more interesting in relation to process however. The generative pro-
cess is mimetic to real biological development and evolution. The work speaks of digital
DNA, evolution and development as signifiers to the interpretation of their biological
parallels. While this simulated biology is grossly abstracted and simplified, it still exhibits
some of the features of its real-world counterpart. Moreover, its conceptualization and intent as artistic concepts originate from interpretations of biological development and evolutionary process.
This analysis reveals some curious aspects about the work. For example, there is the choice of using standard 3D rendering techniques, which focus on a Cartesian, photographic-like visual realism, whereas the biological processes focus on a somewhat different kind of
realism. This is partly explained by the technical constraints in developing works like
this, but more importantly the aesthetic language of modern corporate communication
is similarly derived from these techniques. Corporate logos are visually presented using
the purity of glittering computer graphics, with its clean and sleek mathematical veneer.
Fifty Sisters deliberately borrows from this vernacular, presenting developmentally man-
gled corporate logo forms using their native visual language.

3.
Conclusion

In order to understand a generative artwork we must examine the process alongside what that process produces. It is important to look at what the process is representing, and how it performs this representation. For generative computer art, there always exist multiple levels of representation, and it is easy to forget how these representational structures are formed when they are so easily taken for granted. One of art's roles can be to reveal what is normally hidden or taken for granted, bringing it into awareness (or even sub-consciousness). Computer representations and processes are typically hidden from direct perception, so by bringing them into perception we reveal their most unique and interesting aspects as symbol processing machines.

# 1 bpOldZoom.dna
# 1 <built-in>
# 1 <command-line>

object spine {
<< noises >> efn;
<< math >> efm;
< efn.gauss > gauss;
< efm.acos > arccos;
< efm.sin > sin;

surface BP_old_hood;
surface BP_old_back;
surface BP_old_BP;
surface sphere10;

equiv col;

rules:
el(i) : i <= 47 -> /(137.5) [ ^(arccos(1 - i * 0.04000) )
[ f(90) elem(i) ]] el(i+1);

elem(i) -> sph seg(100,40,5,1,10,0.1,0.1);

sph -> !(15) c(0) !(7) C(1,0) !(3) C(1,0) !(1) C(1,0);
seg(n,l,t,u,r,sc,v) : n > 0 -> /(t) ^(u) !(r * sc) C(l,0) [col(1)
S(sc) BP_old_hood ] seg(n - 1, l + gauss(0,v), t + gauss(0,v), u
+ gauss(0,v), r, sc + gauss(0,v), v)
: n <= 0 -> /(t) ^(u) !(r * sc) C(l, 1) [ col(9)
tusks(36)
] f(14) S(1.0) ll(18, 15, 10);

tusks(n) : n > 0 -> [ ^(90) !(14) c(0) C(100, 0) tusk_s(20,gauss(90,5),14,5)
                    ] /(10) tusks(n - 1);

tusk_s(n,l,r,u) : n > 0 -> !(r) /(gauss(0,5)) &(u) C(l,0)
                           tusk_s(n - 1, l * 0.90, r * 0.78, u * gauss(1.08, 0.067))
                : n <= 0 -> !(r * 0.1) ^(u) C(l,0);

ll(i,u,t) : i > 0 -> /(20) [ &(u) +(t) col(2) BP_old_hood col(1) BP_old_back
                           ] ll(i - 1, u, t)
          : i <= 0 -> [ f(5.0) col(1) lm(18,u+10,t - 4.5) ];

lm(i,u,t) : i > 0 -> /(20) [ &(u) +(t) S(0.5) col(3) BP_old_hood ] lm(i - 1, u, t)
          : i <= 0 -> [ f(5.0) col(1) ls(6,u + 10,t - 10) ];

ls(i,u,t) : i > 0 -> /(20 * 3) [ &(u) +(t) S(0.25) col(3) BP_old_hood col(1)
                                 BP_old_back col(1) BP_old_BP ] ls(i - 1, u, t);

axiom:
@(0.2) *(2,0,1) col(0) [col(8) S(9) sphere10] el(0);
}

scene {
spine(time * 50);
}
Fig. 3. Synthesized plant form based on the BP logo (top) and the developmental
rules from which it was generated (bottom).

Acknowledgements: I am grateful for discussion with Gordon Monro who first raised the
question of representation in generative art with me. This research was supported by an
Australian Research Council Discovery Grant, DP1094064.

References

Ball, P. Bright Earth: Art and the Invention of Color. New York, Farrar Straus and Giroux. 2002.
Dennett, D. C. Real Patterns. Journal of Philosophy 88: 27–51. 1991.
Foster, H. The Return of the Real: The Avant-Garde at the End of the Century. MIT Press, Cambridge, Mass. 1996.
Kittler, F. Discourse Networks 1800/1900, with a Foreword by David E. Wellbery. Stanford. 1990.
McCormack, J. Aesthetic Evolution of L-systems Revisited. Applications of Evolutionary Computing (EvoWorkshops 2004). G. R. Raidl, S. Cagnoni, J. Branke et al. Berlin, Heidelberg, Springer-Verlag. LNCS 3005: 477–488. 2004.
McCormack, J. A Developmental Model for Generative Media. Advances in Artificial Life (8th European Conference, ECAL 2005). M. Capcarrere, A. A. Freitas, P. J. Bentley, C. G. Johnson and J. Timmis (eds). Berlin; Heidelberg, Springer-Verlag. LNAI 3630: 88–97. 2005.
McCormack, J., O. Bown, A. Dorin, J. McCabe, G. Monro and M. Whitelaw. Ten Questions Concerning Generative Computer Art. Leonardo (to appear, accepted July 2012). [Preprint available at: http://www.csse.monash.edu.au/~jonmc/research/Papers/TenQuestionsLJ-Preprint.pdf] 2012.
McCormack, J. and A. Dorin. Art, Emergence and the Computational Sublime. Second
Iteration: a conference on generative systems in the electronic arts, Melbourne,
Australia, CEMA. 2001.
Scruton, R. Beauty. Oxford University Press, Oxford; New York. 2009.

The Textural X

Alex McLean
a.mclean@leeds.ac.uk
Interdisciplinary Centre for Scientific Research in Music, University of Leeds, UK

Keywords: Computer Programming, Live Coding, Knitting.

Abstract: This paper considers the binding of analogue and digital forms in the context
of computer programming. An argument is constructed based upon a knitting metaphor,
relating patterning of wool with the functions of code over time. The relation between
linear and cyclic time is considered, from the standpoint of the experience of program-
ming, in particular the live coding of dance music. By way of illustration, example code
demonstrating the weaving of analogue (continuous) and digital (discrete) pattern is
shown, using pure functional code with visual examples.


1.
Introduction

Somewhere between the Analogue and the Digital lies the whole experience of texture. Computers offer a cruel enactment of a wholly digital realm: a discrete, mathematical world stripped of gesture and emotion. Strangely, many who associate themselves with the term digital art approach the digital nature of their medium with disgust,1 and have constructed an analogue substrate, where code is hidden from view, and pixels are merged into beautifully anti-aliased, continuous shape. But even this substrate must stand the charge of being impoverished; human-computer gestures are increasingly reduced to prods and smears on flat glass. This charge extends to interactive digital art; see Figure 1 for an extreme example of the role spectators are reduced to in art and design exhibitions. It seems that in order to escape the digital environment, we have created a farce of the world outside.2

1. For example, see Simon Penny's commentary on his artwork Fugitive II (accessed 20th January 2013).
2. This polemic should not get in the way of celebrating the many fine examples of interactive artworks which live between these extremes.

Fig. 1: Instructions for interacting with an artwork, seen at STRP festival, 2009.

Perhaps instead of focusing on either digital or analogue aspects, we should focus on how they mutually support one another in perception (Paivio 1990). After all, the shift from digital to analogue is a reversal of history; the discrete form of text emerged from imagery, and is made from imagery. Rather than thinking of computers as devices either for textual communication, or for systems that support gestural tools, perhaps we should use them to search for an elusive X which binds the two. As this X has continued to flow between us over evolutionary timescales, the vocal tract has developed as its primary conduit. Over this time our mouths have articulated not only to eat and breathe, but to form grunts, drones, chants and words; an organ for digital phonetic symbols, slurred into diphones and with analogue prosodic gesture and rhythm. We do not fully understand the operation of the vocal tract, but the trace of X is clear, in the poetic whole emerging from the simultaneously discrete and analogue articulation, intertwined in mutual support.

2.
The loop

Along with the need to communicate with the voice, comes also the need to keep warm.
Somewhat neatly, knitting provides metaphorical patterns and knots with which we
may bind language with form. Knitting patterns are a kind of natural, domain-embed-
ded programming language (Gold 2011), and mechanical computers and textile looms
share early history (Babbage and Lovelace; Essinger 2004). In the following then, we use

knitting as a metaphor on which we build an alternate viewpoint of the experience of
programming, with focus on time.
Time itself is an arrow, and we are propelled forward with it. It is also a circle, for
example the cycle of life, returning to where it began. These two views are hardly reconcilable: in linear terms it is the future which comes to meet us, and in cyclic terms it is
the past. So it is with knitting socks; a line of wool, five straight needles, a cyclic pattern
tying loops into circles, the heel turns but eventually the sock emerges.
We often understand computers in terms of an algorithm (pattern), converting ana-
logue inputs into digital discontinuities (wool into knots) and the form of text(ile) that re-
sults. But how often do we attend to the experimental possibilities of the loop? Nowadays
computer processes rarely run to a conclusion, but loop continuously, oscillating in
sympathy with human interaction. Perhaps we should consider software not as tools,
applications, or frameworks for producing something, but as a fabric which captures the
oscillations of hardware, to be experienced in its own right.
Notionally, the present moment is a durationless point in time. Experientially, a perceived moment has a duration of sorts; for example, we do not experience sound in terms of states of air pressure, but in terms of fluctuations which deliver the experience of a discrete, momentary sound. We can extend this argument to the cyclic pattern of a rhythm, which may last from seconds to minutes. If the cyclic period of a rhythm matches with the duration of the present, we freeze, lost inside feedback. In this state, we step out of time, but also bring time into sharp focus; small changes amplified against a stationary ground. In terms of the sock, a repeated pattern forms vertical striations of purl against knit; a small change in the cyclic pattern causes a sharp discontinuity, making those striations appear smooth.

3.
Knitting with time: Live coding

Live coding is the use of programming languages in exploratory work, where code is dy-
namically interpreted so that edits take effect without restarts. Live coders often work
before a live audience, such as in improvised music performance (see figures 2 and 3, and
also Collins et al. 2003). This is a radical departure from conventional software develop-
ment, breaking down artificial barriers between technologists and creative users, and
has taken electronic music research by surprise (Emmerson 2007). Time will tell whether
the Live Coding movement will really contribute to fundamental widespread change, but
it is useful to criticise technical practice (after Agre 1997), and look for points where the
ongoing narrative of technological determinism may be broken.

Fig. 2: Dave Griffiths and Alex McLean live coding as two thirds of the band Slub (http://slub.org/) in
Mexico City, November 2012. Their screens are projected behind them, so the audience can see their code,
in-line with the TOPLAP manifesto (Ward et al. 2004).
Fig. 3: Audience dancing to the live coding performance shown in the previous figure.

In 1987, Nintendo trialled a knitting machine controlled by a computer games console. Industry commentators recall this as a hilarious aberration, a prototype quickly dropped after a bemused executive failed to find words to sell it to management. In this moment, knitting and computation, so close in Babbage and Lovelace's time, had the possibility of being reunited once more in console gaming. We can imagine this as a 1980s paradigm shift that never was, an ungovernable flow of scarves emerging from every child's bedroom. This could have created a very different expectation for human-computer interfaces, with progress towards interactions more textural than the prods and smears currently in vogue.
The knitting metaphor may still serve us well. Live coding music is very much like
knitting with time. Time is a live coder's wool, not so much in the sense of recorded tape,
but more in terms of the line of a monster curve. The line is twisted, knotted and trans-
formed by patterning structures, thereby creating new dimensions of experience.
Although knitting of socks is enjoyable, the real purpose of socks is to be worn. We wear
code by running it, constructing environments that we listen and dance to. In dancing
the encoded meter, we set the ground: we find an implied pulse and feel it with the whole
body, where the pattern is experienced in contrast. By stepping into the music, we become
part of the program interpretation. But, when we are finished, we are left with nothing.
Live coders knit a live fabric, not an end product; we can touch it, but then it is gone.

4.
Knitting with code

This line of thinking may be set more concretely in the practice of computer programming, by considering source code as a pattern for physical experience. In particular, we use Tidal, a domain specific language for musical pattern, embedded in the pure functional programming language Haskell. Pure functional programming is a familiar topic in computer science, and occasionally found in mainstream programming practice. What makes a programming language pure is that a function has no effect beyond turning one value into another value. What makes it functional is that values can be higher order constructions, such as "add 5" or "make twice as fast".
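As a toy illustration (not code from Tidal itself), both of those phrases can literally be values in Haskell; the second is a higher order value because it takes and returns a function:

-- "add 5" as a first-class value
addFive :: Int -> Int
addFive = (+ 5)

-- "make twice as fast" as a value that transforms a function of time
-- (compare Tidal's density, used in the examples later in this paper)
twiceAsFast :: (Double -> a) -> (Double -> a)
twiceAsFast f = \t -> f (t * 2)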
Tidal represents music as a pure function, which takes time as input, and outputs
sound events. This maps from the single dimension of time into multidimensional dance
music, and has a direct analogue with knitting thread into two dimensional texture. Both
involve repetitive, looping patterns, forming a shape that fits the body.

In Tidal, the knitting of time into music is represented using the following datatype:

data Pattern a = Pattern (Arc -> [Event a])

In other words, a Pattern is a function from an Arc of time, to a list of events of type
a, where a can be replaced with any other type. The above datatype makes use of the
following type synonyms:

type Time = Rational
type Arc = (Time, Time)
type Event a = (Arc, a)

Time here is represented as a rational number, of arbitrary precision. An Arc is a pair of Time values, representing a start and stop Time. An Event is a value that occurs over a particular Arc.
A Pattern may behave in two distinct ways, depending on whether it represents
discrete or continuous patterning. Figure 4 illustrates the behaviour of a discrete colour
pattern in visual terms. A Pattern should only return events active for some part of a giv-
en query, although they may start or end beyond the query. Note also that the returned
events may overlap, in order to represent polyphony.

Fig. 4: A discrete colour pattern, showing that a pattern, as a function, may return a number of colour
events active within the given arc, each occurring within their own arc.
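The behaviour sketched in Figure 4 can be made concrete with a small hypothetical helper, built directly on the datatype above (it is not part of Tidal's published interface): a pattern that repeats a single value once per cycle, answering a query with one whole-cycle event for every cycle the queried Arc touches.

-- Hypothetical constructor for a one-value-per-cycle pattern.
atomSketch :: a -> Pattern a
atomSketch x = Pattern f
  where
    f (s, e) = [ ((fromIntegral c, fromIntegral c + 1), x)
               | c <- [floor s .. ceiling e - 1] ]

-- Querying three quarters of a cycle, e.g.
--   let Pattern query = atomSketch "kick" in query (1 % 4, 1)
-- returns one event covering the whole first cycle: [((0 % 1, 1 % 1), "kick")],
-- an event that starts and ends beyond the query, as described above.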

This allows a simple representation of transitory values, each of which exists for a
discrete period within a timeline. The timeline is notionally infinite, and we can probe
for events using any Arc within it. As implied by the name Arc though, time is not only
conceived as linear, but also cyclic. As Figure 5 illustrates, a cycle has a period of 1, which
can be subdivided with arbitrary precision. This does not preclude polyrhythmic struc-
tures, but a fundamental loop, of period 1, is the focus. This accords with experimental
evidence provided by London (2004), supporting his hypothesis that humans only attend
to one meter at a time (although they may have control over which they attend to).

Fig. 5: A visual conception of a timeline as a spiral or coil, along which repeating patterns unfold and develop.

So far we have talked only of discrete patterns, but the same representation can be
used for representing analogue, continuously-varying patterns. This relies upon the sim-
ple intuition that the closer you look at a continuous pattern, the more detail you are able
to see. So, to represent a sinewave, a Pattern may return the average value of the given
arc. In this way we are able to represent continuously varying values (as in Functional
Reactive Programming; Elliott 2009) accurately, choosing what granularity or rate we use
to sample values from it later.
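A minimal sketch of such a continuous pattern, again using the datatype above rather than Tidal's actual implementation, could answer every query with a single event spanning the queried Arc, valued at the sine of the arc's midpoint (a simple stand-in for the average over the arc):

-- Hypothetical continuous pattern: one event per query, sampled at the arc midpoint.
sineSketch :: Pattern Double
sineSketch = Pattern f
  where
    f arc@(s, e) = [(arc, sin (2 * pi * fromRational mid))]
      where mid = (s + e) / 2

Sampling at a finer granularity then simply means querying with smaller arcs: the closer you look, the more detail you see.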
The distinction between these two kinds of behaviour is the same as the distinction
between analogue and discrete views of texture, as discussed earlier. They are distinct,
but can be combined in mutual support. Tidal is built around ways of using discrete and
continuous together in rich, multidimensional, musical patterns. This amounts to the
melding of the analogue and digital in computer language, but we will not go into further
technical detail, instead turning towards some examples of use.

5.
Tidal in action

It is difficult to get music across on paper, so in sympathy with the present medium, the
following patterns will be of colour. Please consider the horizontal axis as time, and
the colour onsets and blends to construct temporal structures, which as music would be
explorable through bodily movement.
The following code, shown above its output, multiplies a sinewave with a triangular
wave (which has half the period), and applies the resulting signal to darken a sequence.
The sequence here is described as superimposed sequences of colour, which are separated
by commas. The important thing to observe is that simple continuous and discrete pat-
terns can be combined, and that we can simultaneously perceive a continuous transition
over a discrete pattern.

density 10 $ flip darken
  <$> "[black blue, grey ~ navy, cornflowerblue blue]*2"
  <*> (slow 5 $ (*) <$> sinewave1 <*> (slow 2 triwave1))

The following pattern is similar, but takes two sequences, and uses a sine wave to
blend between them.

density 12 $ (blend
  <$> "blue navy"
  <*> "orange [red, orange, purple]"
  <*> (slow 6 $ sinewave1)
  )

The following pattern uses a continuous pattern to modulate the opacity of one se-
quence that has been placed over another.

density 32 $ flip over
  <$> ("[grey olive, black ~ brown, darkgrey]")
  <*> (withOpacity
       <$> "[beige, lightblue white darkgreen, beige]"
       <*> ((*)
            <$> (slow 8 $ slow 4 sinewave1)
            <*> (slow 3 $ sinewave1)))

Finally, the following pattern blends between two instances of the same pattern, at
different densities.

density 2 $
  do let x = "[skyblue olive, grey ~ navy, cornflowerblue green]"
     coloura <- density 8 x
     colourb <- density 4 x
     slide <- slow 2 sinewave1
     return $ blend slide coloura colourb

6.
Conclusion

We have seen various family resemblances between knitting and programming, provid-
ing fertile metaphorical ground to explore the X of programming, taking an alternative
view from the usual metaphors which are often born from commercial and military

contexts. In particular, it allows us to consider both the experience and purposes of pro-
gramming in terms of binding analogue as well as discrete forms. The code examples
above may be simplistic, but are offered in support of this view, where the composed,
discrete text of source code may evoke rich experience, much as the text in a novel evokes
rich scenes within a narrative. Where we work with such code live, to which groups of
people come together to dance, we have a good place to search for the elusive X.

Bibliography

Agre, Philip E. Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI. In Social Science, Technical Systems, and Cooperative Work: Beyond the Great Divide (Computers, Cognition and Work Series), ed. Geoffrey Bowker, Susan L. Star, Les Gasser, and William Turner. Psychology Press. 1997.
Collins, Nick, Alex McLean, Julian Rohrhuber, and Adrian Ward. Live coding in laptop performance. Organised Sound 8: 321–330. 2003.
Elliott, Conal. Push-pull functional reactive programming. In Proceedings of the 2nd ACM SIGPLAN Symposium on Haskell. 2009.
Emmerson, Simon. Postscript: the Unexpected is always upon us – Live Coding. In Living Electronic Music, 115. Ashgate Pub Co. 2007.
Essinger, James. Jacquard's Web: How a Hand-Loom Led to the Birth of the Information Age. Oxford University Press, USA. 2004.
Gold, N. Knitting Music and Programming: Reflections on the Frontiers of Source Code Analysis. In Source Code Analysis and Manipulation (SCAM), 2011 11th IEEE International Working Conference on, 10–14. IEEE. 2011.
London, Justin. Hearing in Time: Psychological Aspects of Musical Meter. Oxford University Press, USA. 2004.
Paivio, Allan. Mental Representations: A Dual Coding Approach (Oxford Psychology Series). Oxford University Press, USA. 1990.
Ward, Adrian, Julian Rohrhuber, Fredrik Olofsson, Alex McLean, Dave Griffiths, Nick Collins, and Amy Alexander. Live Algorithm Programming and a Temporary Organisation for its Promotion. In read_me: Software Art and Cultures, ed. Olga Goriunova and Alexei Shulgin. 2004.

Are Luminous Devices Helping Musicians to Produce
Better Aural Results, or Just Helping Audiences
Not To Get Bored?

Vitor Joaquim
vjoaquim@porto.ucp.pt
Research Center for Science and Technology of the Arts (CITAR)
Portuguese Catholic University, School of the Arts, Porto, Portugal

Álvaro Barbosa
abarbosa@porto.ucp.pt
Research Center for Science and Technology of the Arts (CITAR)
University of Saint Joseph, Faculty of Creative Industries, Macau SAR, China

Keywords: Performance Studies, Electronic Music, Laptop Performance, Interfaces, Gestural Information, Perception, Conformity.



Abstract: By the end of the 90s a new musical instrument entered the stage of all stages and, since then, has played a key role in the way music is created and produced, both in the studio and in performance venues.
The aim of this paper is to discuss what we consider to be fundamental issues on how
laptopers, as musicians, are dealing with the fact that they are not providing the usu-
al satisfaction of a typical performance, where gesture is regarded as a fundamental
element. Supported by a survey conducted with the collaboration of 46 artists, mostly
professionals, we intend to address and discuss some concerns on the way laptop mu-
sicians are dealing with this subject, underlined by the fact that in these performances
the absence of gestural information is almost a trademark.

1.
Introduction

Hearing represents the primary sense organ; hearing happens involuntarily. Listening is a voluntary process that through training and experience produces culture. All cultures develop through ways of listening.1
Pauline Oliveros

1. http://faculty.rpi.edu/node/857 accessed January 18, 2013.

Since the first moment a laptop appeared on stage, there is no musical genre that has not been influenced by it, in one way or another. It may have been a fundamental piece in the process, or at least an important one, yet it is very difficult to ignore its vitality in the actual process (praxis) of making music. It may be in a studio while composing a music piece, drafting sounds on the road, or processing a live performance for contemporary creators. It is already part of our days, nights, best dreams, and worst nightmares. The history of creativity in music performance has changed, not only because of the laptop, but also because of the laptop. The same happened with visuals in live contexts, where a fundamental burst of new talents is bringing new ways of doing, new aesthetic results and, above all, new ways of production.
In 1934, Walter Benjamin wrote a text that would become central to the history of arts:
The Author as Producer (Benjamin 1992). After 79 years, this text is still a fundamental
object of reflection when we consider the role of the artist, simultaneously, as an art-
ist-producer of himself.
Benjamin could not have imagined how much he would be reproduced and replicated among authors, journalists, naive writers in forums, essays, blogs, books, articles, etc. Despite the sharpness and contemporaneity of the German author, we must say that it is not our intention to evaluate the global content of his work. From Benjamin, we would like to emphasize the role of the artist as producer, and the question of reproducibility, to a certain point.
Thereby, following the dichotomy introduced by Benjamin (author/producer), we will try to create a framework of discussion around the laptoper, our subject of study, as an example of a model-no-model (creation versus production), wherein he is observed and examined as a carrier of the subject raised by Benjamin when he talks about the author as someone that, in a certain circumstance, has had a revolutionary evolution from the point of view of convictions, without being, at the same time, capable to reflect in a truly revolutionary way about his own work, their relationship to the means of production, or on their technique (Benjamin 1992, 143).
What we would like to emphasize here is the perception that we are living in a world in which the artist must face the implications of the perception that he is generating. Therefore, he should not turn his back on the world where he is producing his own artistic work.

2.
Brief resonances from the laptop world of music

With electronic/digital media developments, especially in computer technology the possibility to control every parameter that modifies sound became possible. Yet, even today there is a tendency to recreate traditional music instruments interaction model which focus on pitch and dynamics. (Barbosa 2006, 93)

2.1.
Flashbacking from academia
In 2000, Kim Cascone, an acknowledged composer and activist in the field of electronic music, released The Aesthetics of Failure: Post-Digital Tendencies in Contemporary Computer Music (Cascone 2000), an article that became one of the groundbreaking texts about failure in electronic music, with a brief chapter on glitch, back then a relatively recent line of action started around 1995, with great glow around Mego, a music label based in Vienna. Farmers Manual, General Magic, Peter Rehberg, Fennesz and Tina Frank on live visuals and graphics2 are only a glimpse of what could be a long list of artists using the laptop, on and off stage.
Christian Fennesz reported September 1995, at Flex Club (Vienna), as his first time with a laptop in a live performance (Joaquim 2012/13).3
Less recognized in that article is the relevance that Cascone gave to the idea of Power Tools in what concerns the proximity between creation and production. "For the first time in history," he wrote, "creative output and the means of its distribution have been inextricably linked." (Cascone 2000) From that time on, dozens of articles included the term laptop in the header to invoke all kinds of qualifications and solutions around the systematic doubt raised by the absence of visual feedback and gestural information.

2. http://www.lovebytes.org.uk/2003/docs/pages/tina.htm
3. Sergi Jordà (b. 1961) reported 1989/90 in Vitoria, Spain, as his first concert with a laptop in a live context, and Atau Tanaka (b. 1963) pointed to 1993 at Etablissement Phonographiques de l'Est, Paris, as his first live performance with a laptop. (Joaquim 2012/13)
David Wessel and Matthew Wright pointed out, in an article from 2002, an observation from Zicarelli (1991) in which he considered the associations that office work computers may bring to the realm of electronic music made with computers (Wessel and Wright 2002).
2003 was probably one of the most prolific years of the decade, with Glenn Bach writing about the laptop as dwelling, as vessel and as loom (Bach 2003). Tad Turner, Nick Collins, Caleb Stuart and Tara Rodgers are also on that year's list of authors writing specifically about the laptop and the relationship with the audience. It was a very productive year in terms of critical mass in the realm of academia.
Sergi Jordà, who introduced the term digital lutherie (Jordà, Digital Lutherie: Crafting musical computers for new musics' performance and improvisation, 2005), is probably one of the most consistent authors on this significant asset, with a long list of reflexive articles starting in 2001, in which he questioned the practice of electronic music, bringing to the forefront of the discussion some fundamental issues and pointing out, at the same time, possible paths and practical solutions. One of those solutions emerged in the Music Technology Group of the Universitat Pompeu Fabra in Barcelona with the Reactable, a project started in 2003 with many goals in mind. He wrote: "The foremost goal was to design an attractive, intuitive and non-intimidating musical instrument for multi-user electronic music performance, suitable for everyone to start playing from the first minute and yet capable of the more subtle and the more complex." (Jordà, On stage: the reactable and other musical tangibles go real, 2008)
What the Reactable was also capable of was establishing a visual relationship with the user, in a way that could simultaneously become interesting to the audience (listener-viewer) and, in this way, blur the problem of visual feedback and the absence of gesture.
In recent years, we have observed a tremendous increase in the commercial production of visual solutions for laptopers, and also a big investment from the scientific world in finding new approaches to human-computer interaction (HCI), accompanied by big efforts in tangible user interfaces (TUI), and all imaginable ways to reduce or neutralize the ghost-machine, which prevents man from expressing himself gesturally in all its fullness and splendor (read: irony).
Marcelo Wanderley, a recognized researcher in the field of gesture, in his article Gestural Control of Music, reveals the issue that became one of his central motivations for research in the music field. He says:

Digital musical instruments do not depend on physical constraints faced by their acoustic counterparts, such as characteristics of tubes, membranes, strings, etc.
This fact permits a huge diversity of possibilities regarding sound production, but
on the other hand strategies to design and perform these new instruments need
to be devised in order to provide the same level of control subtlety available in
acoustic instruments. (Wanderley 2001)

A few years later, in 2006, Mark Zadel pointed in his research to a solution that gave the name to his thesis: a Software System for Laptop Performance and Improvisation. The aim was to bring a sense of active creation to laptop performance (Zadel 2006a). Through the use of drawing, among other operations, the performer could imprint a sense of freshness and create an impression close to the experience that we have when attending a regular concert with regular musicians. Zadel expressed that quality of imprint as a process of infusing the music.
Simultaneously with the attention dedicated to the subject in academia, there was also a myriad of events in which the laptop started to gain protagonism, like occasional concerts, festivals4 and publications. The Wire magazine5 documented on its covers the arrival of new stars like Pole and Merzbow in 2000, Oval and Kid 606 in 2001, Autechre, Matmos, Aphex Twin and Raster-Noton in 2003, Fennesz, Wilco and Ikue Mori in 2004, etc.

4. Examples: www.emefestival.org, http://www.aec.at, http://www.transmediale.de, http://www.sonar.es
5. http://www.thewire.co.uk
The web was also very active in the analysis of the laptop phenomenon. In 2006, Marc Weidenbaum, musical journalist, editor and publisher of Disquiet,6 wrote Serial Port: A Brief History of Laptop Music,7 an article with approx. 6,600 words and a large number of pictures. It was an extensive and well documented report on the activity, mentioning the work of artists such as Joshua Kit Clayton, Matmos, Taylor Deupree, Fennesz, Kid 606, Monolake, Ikue Mori, Scanner and many more, exposing at the same time some historical information about software and hardware, not leaving behind some historical perspectives on key role players like Leon Theremin and Pierre Schaeffer. The article generated a series of reactions on the web about what it means to be a laptop musician. In reaction to those questions, Weidenbaum felt compelled to explain and justify, in another text,8 what was under consideration within the concept of the article. In his original text, Weidenbaum emphasized, among other aspects of laptop performance, that laptop music isn't really a genre, and since the laptop can run such a variety of music software, it may be inappropriate to simply call it an instrument. He described it as a phenomenon. (Weidenbaum 2006)

6. http://disquiet.com
7. http://www.newmusicbox.org/articles/Serial-Port-A-Brief-History-of-Laptop-Music
8. http://www.newmusicbox.org/articles/Upwardly-Mobile-What-we-talk-about-when-we-talk-about-laptop-music


Among others, Weidenbaum is referenced by Rebecca Fiebrink, Ge Wang and Perry R. Cook in the article Don't Forget the Laptop: Using Native Input Capabilities for Expressive Musical Control (2007).
One year before, in 2005, on the same web page (New Music Box,9 a web page dedicated to the music of American composers and improvisers), Roddy Schrock also dedicated an article to the subject, on Laptop Music for Beginners.10

Miniaturization and increased performance render the personal computer portable, the desk environment (desktop) is now located in the lap (laptop) or in the palm (palmtop) of the user. (Grossmann 2008)

9. NewMusicBox is a multimedia publication from New Music USA, dedicated to the music of American composers and improvisers.
10. http://www.newmusicbox.org/articles/Laptop-Music-For-Beginners

Throughout this brief overview, which was not intended to be comprehensive, we may have observed that the generalization of the laptop in the musical scene was accompanied by the problem of visual feedback, lack of action and absence of gestural information in performances. This problem has been extensively reported since the first moment of its appearance on stage, by the mid 1990s, when people like Oval, Pita, General Magic and Farmers Manual started to introduce laptops on stage. As Peter Worth states in his Ph.D. thesis, "The release of the G3 PowerBook in 1997 was roughly the point at which it became possible (and affordable) to do the same kind of audio processing on something a fraction of the size and weight." (Worth, Technology and ontology in electronic music: Mego 1994-present, 2011, 30)
Atau Tanaka pointed to 1998 as the turning point, with the arrival of the PowerBook G3, a portable computer that allowed real time audio signal processing to run natively on the laptop, no longer needing hardware synthesizers and samplers; in consequence, it was also possible to pass from Max to MaxMSP (and later to live visuals with NATO and Jitter) (Joaquim 2012/13).11

11. Information retrieved from the surveys addressed to laptop practitioners.

2.2.
Key strokes from laptop artists

Working with electronic music has come a long way: from the humble beginnings of the early frequency- / synthesizer-music pioneers to today's ubiquitous, ultra-flexible, emergent, personal audio production environments and customizable sandboxes. (Popp 2011)

Navigating in a completely different map from academia, artists began to feel all the problems arising from the lack of visual information in their performances.
In 1995, General Magic (the duo of Ramon Bauer and Andi Pieper, co-leaders of the Mego label at the time)12 was already mapping a circuit of concerts placing the laptop at the front of the stage, sharing audiences with Peter Rehberg producing sound and, on some occasions, with Tina Frank on visuals.13

12. http://www.discogs.com/search?q=General+Magic&type=all, accessed January 17, 2013.
13. http://www.lovebytes.org.uk/2003/docs/pages/tina.htm accessed January 17, 2013.
In 1999, Mego released the second Pita album, Get Out (Mego 029),14 an album that was made using an Apple PowerBook 1400cs/133, often considered a benchmark in the laptop genre, evidenced by descriptions such as "a milestone in early laptop music" (Sohns 2008) or "the first major musical laptop statement" (Keenan 2008) (Worth, Technology and ontology in electronic music: Mego 1994-present, 2011, 30–31).
Florian Hecker,15 another laptop pioneer and Mego affiliate, also referred to the PowerBook 1400cs/133 as his first portable machine.16 During private correspondence, Hecker was very prudent in avoiding being identified as a laptop musician, while expressing deep concerns on the subject, despite all the public documentation showing him with hands on laptops. What Hecker may indicate with this concern, shared by other musicians, is probably what we have pointed out in our Introduction as a concern reflex on the subject of the author as a producer of himself. Hecker explains: "I've always been critical about a coinage such as Laptop Music, an invented genre, where thinking beyond genre would be fruitful (...) In most of my performances since 2006 (...) with a few exceptions, I stepped back from working intuitively with real time DSP during a performance."17

14. http://editionsmego.com/release/eMEGO+029
15. http://florianhecker.blogspot.pt
16. Private conversation.
17. Private conversation.
Probably one of the most popular caricatures of laptop performance started to rise at the turn of the millennium: the artist as someone that may be reading e-mails or playing files from the hard-drive, while everything looks meaningful to the audience.

A common complaint about many electronic improvisers is the lack of obvious action on stage, the "they might as well be reading their e-mail up there" line of criticism.18 (Abbey 2002)

18. http://www.annetteworks.com/artist/worksmade/mimeo/index.htm accessed January 17, 2013.

In another level of production, moved by other forces, Brian Eno, comparing the past and present of the musical studio, wrote The Revenge of the Intuitive: Turn off the options, and turn up the intimacy, an article in which he stated:

(...) now I'm struck by the insidious, computer-driven tendency to take things out of the domain of muscular activity and put them into the domain of mental activity. This transfer is not paying off. Sure, muscles are unreliable, but they represent several million years of accumulated finesse. (Eno 1999)

We presume that when Eno invokes several million years as an argument against mental activity, he is in fact trying to convey the idea that without gesture, musical performance is losing something that has been innate since the beginning of time. Transposed to the world of laptop production, this premonition does not seem to prelude a great future for its proponents and practitioners. However, over the years, it seems that history has not proven Eno right. On the contrary, laptops (the machines with no gesture behind them) are now spread over the world, and it is hard to imagine a stage without a laptop, from one side of the musical spectrum to the other, considering all levels of production, from clubs to stadiums, from experimental to contemporary music. Laptops are around us, and behind each one there is always someone making choices, whatever they might be.
In 2013, we are now on the verge of imagination, facing a multitude of options and reactions, where every artist is confronted with a myriad of opportunities. Ranging from commercial products to custom made patches of synthesis software, from hardware

solutions to plug-in miracles, the laptoper has many more options than he can imagine or afford. Eventually, part of those solutions end up having a significant visual impact (and effect) on stage, leaving in the background the main reason why a sonic solution has been implemented: to help the musician achieve a better aural result.
Madeon (b. 1994), a very young star in the world of electro house and pop music, presented himself at the MTV EMAs 2012 surrounded by 3 Launchpads (Ableton), a laptop and a Xone controller (A&H). The event was extensively advertised on the web, and Madeon was promoting the concert with pictures of himself with the 192 pads (from the 3 controllers) blinking like luminescent lamps in a party. Obviously, it does not make sense to question the quality of his work, or the reasons for choosing this or that equipment. What is really important to investigate about Madeon is: why is he positioning the controllers towards the audience and not towards himself? What we can infer is that he might be interested in delivering visual feedback on what he is doing, as a way to engage the audience in the process. In his field (the show business), that determination to please the crowd is regularly recognized as an entertainment quality, and represents a heritage that may find its roots in the old Greek theater tradition, where artists, above all, should please the audience.
We must underline that one single video from Madeon called Pop Culture (live mashup), with him pressing pads on the Launchpad, filmed in one single shot in close-up over the hands (no cuts!), is now hitting over 16 million plays on Youtube.19
Not in the same artistic range, but with the same type of motivations, Sergi Jordà and all the pioneers of live coding, each on his own side, have arrived at another type of solution. According to their own points of view and aesthetic options, they choose to express themselves in different ways. However, they have in common the same motivation that compelled Madeon to the glamour of blinking lights (i.e. to please themselves and the audience). The basic problem was/is persistently omnipresent, and what they opted to do was/is only another variation in the angle of approach.

19. http://www.youtube.com/watch?v=lTx3G6h2xyA accessed January 17, 2013.
Jordà opted to research and write about the subject, and also to develop his own digital lutherie (Jordà, Digital Lutherie: Crafting musical computers for new musics' performance and improvisation, 2005), which culminated in the realization of the Reactable. Coders, on the other side, started to play everywhere, whenever and however possible, delivering to the audience, via video projection, all the elements implied in the process of making their own music. Instead of generating entertainment for the masses, the live coder generates information in real time about the processes that are being carried out throughout the performance. His goal is to turn the attention of the audience to the information generated in the moment, using code the same way that a guitar player uses the strings: to generate sound.
To summarize, we can look into these facets and observe three completely different types of reactions (solutions):
a) the Reactable as a result of academic research
b) live code as a political statement
c) a triple dose of fancy luminous controllers
Still, we may observe and conclude that the responses are, metaphorically speaking, like three sides of the same triangle. They are facing different directions, but they are reacting to the same stimulus. In this way, they behave like different parts of the same body.

As if one part was thinking, the other was pushing and the other was kicking.
Through the diversity of examples, we hope we have drawn attention to the fact that
so many artists, working in all kinds of aesthetic fields, share this concern far beyond
the limited area of compositional and aural motivations. It is a general concept that the
stage is an immense space of exposure and for exposure, but we must keep in mind also,
that every space has, by definition, boundaries circumscribed by the will of the author,
as the fundamental drive of the event.

2.3.
Resonances from the will

Not only do different people listen differently, but also the very temporality of
our presence in a place is a form of editing. (López 1998)

Affected by the prospect of a boring performance, some laptop artists introduced (and are still introducing) several types of solutions to keep audiences interested. One of these solutions is observed in the use of luminous controllers to interact directly with the software and, indirectly, with the audience. In this way, the artist can also generate visual feedback that facilitates the momentum of the performance by turning the result into something much more pleasant and communicative.
Considering this option, we shall face all the elements of the equation (author's will, audience's desire, aims of the piece, space of the event, etc.) and raise one simple question: in this way, are controllers helping musicians to produce better aural results or just entertaining the audience?
Throughout the years, we have listened to and read a large number of justifications and arguments in this large field of speculation, but it is not easy to find a straight and common perspective that can be shared by the whole community involved in the process. That may happen, probably, because we are dealing with a high level of uncertainty in which a large number of ideas are not anchored on facts but on ideas taken as facts. That confusion determines, in the end, a complex triangulation of facts, mutual expectations, and even fiction.
It is not easy to define and turn tangible what in general is not tangible.
So, let us rephrase the problem again, from another angle: are performers affecting or
changing what they do live, because of the audience? Because they are concerned about
what might be the correspondence to a certain model of delivering content? Because
they are afraid that they might not be accepted, or at least in a condition wherein they feel unsafe or insecure about a satisfactory aural performance? And, because of that, not so well accepted? And, in consequence, affecting all the work based on a preconception of what is the right model?

What is behind that curtain?


Laurie Anderson20

20. From Born, Never Asked, from the album Big Science (Warner Bros. 1982)

3.
Do laptopers have something in mind?

3.1.
laptop artists under survey

I can honestly say that I do not recall ever feeling better about the quality of a
performance because of the presence of an audience. (Glenn Gould, Mach 1980)

If we look deep inside the universe of the performance space, recognized as stage, we must
consider a general overview into the reasons why laptopers still want to perform, despite
all the indicators pointing to the fact that audiences can be unsatisfied with the apparent
lack of activity and lack of visual cues it sometimes offers (Zadel and Scavone, Different Strokes: a Prototype Software System for Laptop Performance and Improvisation, 2006b).
Through knowledge of the reasons that lie behind each laptoper's option of playing live, no styles or categorizations included, we will find in all of them a common ground that resonates in the deep desire to do it: to go on stage and just do it. A deep analysis of the subject would turn into research in the field of psychology, associated with all performative activities as common ground.
In the specific field of the laptop performance, what we can infer, and that is our point
of departure in this article, is that we are in the presence of a will to do it. That will may be
stronger in some cases than in others, but there is always a will, an energy that compels
a normal person to become a communicator, and through this option go on stage.
The reason of the will, we believe, is different in every case, relatively impenetrable,
and not necessarily associated with an obscure desire to be admired, or adored. In fact, just
because someone likes to play live, it does not mean necessarily that the person likes to be
on stage. It only means that the person goes on stage. All the rest are conjectures. It is possi-
ble that the person wants to show his or her own work in a live context instead of pressing
a CD or uploading a file to the World Wide Web and chooses the stage as a way to do it. Or,
the person is possibly seduced by the physical experience of hearing through a powerful
and highly qualified sound system, something that we normally do not have at home.
So, in order to learn directly from musicians what they think about the experi-
ence of using a laptop in live performances, we conducted a survey, via direct enquiry,
with six questions and an open item for observations. The survey targeted active prac-
titioners of laptop and ex-practitioners, both genders, ranging from 29 (José D. Correia (Re:Axis)) to 65 years (Carlos Zíngaro), with geographical origins in 15 different countries,
from almost all continents (Australia not included).
On this list of practitioners, we include 6 visual artists (Alba G. Corral, Hugo Olim, Lia,
Laetitia Morais, Sladzana Bogeska and Tina Frank) to enrich the information, gain per-
spective and open it up to other experiences.
Intentionally, because it was an open questionnaire, the issue of the absence of gesture was kept out of the frame. That was, in fact, the primary reason to do the survey in
a non-directed format, with open questions.
List of questions:
Q1 – When did you acquire your first laptop?
Q2 – When and where did you first use a laptop in a live performance? Please specify with all possible detail. (Can you provide a picture/video-link of that first performance?)
Q3 – What made you start using a laptop in live performance?
Q4 – From your point of view, what are the qualities of a laptop? Please try to specify your points in order. (10 points to fill)
Q5 – From your point of view, as a user, what are the inconveniences of a laptop? Please specify in order.
Q6 – How many concerts have you done since you started playing live with a laptop?
Observations

3.2.
Preliminary results and analysis (part 1)
A general evaluation is being conducted in a long term research project, but we would
like to present some preliminary results, particularly connected with a few ideas shown
in this paper. With special relevance: what is the idea that laptopers have of themselves
and what are the critical insights that they have on the choices they make.
So, from the outcome of these 46 questionnaires we would like to highlight 20 indi-
vidual allusions to descriptions that conveyed, directly or in similar meanings, the expe-
rience of the laptop performance as a boring experience. This result represents 43.4%
of total respondents, excluding repetitions of the same idea from the same responder.
We should underline that the survey does not allude, in any way or moment, to this par-
ticular factor. Under evaluation are only the inconveniences of the laptop (Question 5).
The responses are given within a frame that corresponds to the accumulated expe-
rience of the responders as practitioners but we must not forget that they have also the
experience (not negligible) of attending concerts while they are touring. According to
this possibility, we should consider the experience as a global experience, not only as a
practitioner's point of view.
Considering the number of concerts performed with laptop, from the responses, we
estimated an average of 133 concerts per person, from the first concert with a laptop until the last one. This average excludes 2 extreme cases: one with an estimated
number above 500 (Christian Fennesz) and the other with an estimated number above
800 (Julien Ottavi).
Seven other subjects did not reply to that point. The tendency of these 7 respondents not to reply was associated with a lack of information ("countless", "don't remember anymore", "never counted").
This information, considering estimated values by the artists, corresponds to a global
number of 6.582 performances.
From this block of information, grounded in the announced numbers, we can extrap-
olate that experienced laptopers, in general, are aware that there is in fact, a problem of
perception derived from a problem of non-expression that is inherently to the nature of
the instrument.
21. W
 e will keep the identities This conclusion, is consistent with the examples presented on 2.1 (academia) and
under privacy.
2.2 (laptopers), and raises a significant list of issues that are presented in our final part.
22. C
 an we infer from this obser- Not included on the list of 46 subjects who replied to the survey, 2 artists (L1, L2)21 ex-
vation that they are reacting in
pressed in correspondence that they did not respond to the survey because they cannot
a projective way? Anticipating
and avoiding a possible coinage identify with the idea of being laptopers, or their music associated with the concept of
of their work as laptop music,
laptop music (we never used that term on the survey or in the correspondence leading
thus consistent with the argu-
ment that we expose? to the survey)22.

98
Two responders (L3, L4) from the group of 46, tried to avoid or skip the written format
of the survey, and showed interest to approach the subject using personal contact or by
the way of an interview, outside of the framework of this survey.
Going now into the 20 previously mentioned allusions (descriptions that conveyed,
directly or in similar meanings, the experience of the laptop performance as a boring
experience) we would like to emphasize some lines of thought presented autonomously
by the responders.

3.3.
Preliminary results and analysis (part 2)
Oswald Berthold, from Farmers Manual23, one the first musicians to go on stage with a 23. h
 ttp://web.fm/twiki/bin/view/
Fmext/WebHome
laptop, at least in a consistent way, mentioned that standing in front of a computer (no
matter what type) is not an attractive mode of performing.
As he mentioned:

I perceive it as somewhat shortsighted and pop-culture related to emphasize


the objectness of the instrument too much. Use of a particular emblematic object
(electric guitar, laptop, ) somehow is driven by pragmatic concerns, develops
and intrinsic aesthetic and cultural dynamic, which is a feeback process with
symbol (as in icon) iteration and discourse in culture. The question is rather, how
much processing power can conveniently be put into one place (or some coherent
perceptual domain) and how much of that is put to use for the generation of un-
foreseen dynamics. Regardless of using a laptop or not, of doing an interactive
or autonomous machine performance, the main item of interest is how well the
intricacies of the processes involved are represented in the perceptual channels.
Oswald Berthold
(Joaquim 2012/13)

This association with perception was also highlighted by Marc Behrens24, when ob- 24. http://www.mbehrens.com

serving the laptop computer as an object primarily designed to use while seated, That
is why, in Behrens words, it can be a hermetic machine and not give any indication
to an audience of what the performer is doing. Thus, from his point of view, he likes
to over-emphasize the performative by repeatedly lifting the laptop around, moving its
support, climb chairs and tables etc.
We believe that this challenge as stated is substantiated in the idea that this particular
musician has about laptop performance, a misleading term for a group of people who
mostly perform in the way they would when typing. (Joaquim 2012/13)
Andr Aselmeier, from Incite25, is very clear about this problem and what can be a
25. h
 ttp://www.incite.
possible solution. He says: fragmentedmedia.org

I think the Laptop should not be in the center of the show, the artist and his/her
work should be. With Incite, we thus always cover the glowing apple-logo as it
would be the brightest spot on stage and it carries a message that has no relation
to the art involved.
Andr Aseilmeier
(Joaquim 2012/13)

99
26. http://endliche-automaten.de Marek Brandt, member of the Endliche AutomatenLaptoporchester Berlin26 regard-
ing a certain impact on the performer and observing the performance from the inside,
confesses that is too much staring at the monitor (and) static (disconnected with the
rest of the bodyexcept hands and head) while live performing (Joaquim 2012/13).
27. h
 ttp://www.random- Sebastian Meissner27 man of multiple artistic personas (Autokontrast, Autopoieses,
industries.com
Bizz Circuits, Klimek, Open Source, Random Industries, Random Inc) expressed this gen-
eralized concern about what might be happening behind the screen with some humour:
you have to answer questions if you are playing solitaire. Sense of humor, is in fact,
a characteristic that we can find with some regularity in the replies. In another tone,
Meissner emphasizes that it all depends on what kind of music you want to play. Also, if
you want to perform and entertain people (expose yourself in a physical way on stage) or
if you want to play and present your work to audience which have the patience to listen
to instrumental music. (Joaquim 2012/13).
By this way, underlining the act of listening (having a more attentive audience)
Meissner introduced a shift in the perspective. In fact, audiences are also part of the
equation and should not be left behind.
28. h
 ttp://www.simonwhetham. As Simon Whetham28 said, audience can be left feeling unengaged. That is why he
co.uk
makes the decision of changing his setup, contrary to the way how audiences usually
attend concerts:

I now tend to play either from behind or within the audience, (a) to control what
they hear more accurately, and (b) so there are no expectations of my presence
on a stage or in front of the audience.
Simon Whetham
(Joaquim 2012/13)

According to Whetham, there is no reason for regret or complain about the use of the
laptop on stage; actually he finds the laptop the perfect tool for performance and com-
position when using field recordings and pre-recorded material.
29. http://www.helenagough.net In a very close position to Whetham, Helena Gough29, admitting the absence of physi-
cal or gestural aspect mentions that many audiences are unable to adjust their expecta-
tions to this and focus on listening alone. When referring to the audiences, she declares:

They come to a concert with expectations that are still connected to the classical
music traditionthey want to see the music and the performer. The assump-
tion from this perspective is that a laptop is lacking something because it doesnt
offer this visual aspect.
Helena Gough
(Joaquim 2012/13)

Helena Gough, which has a great experience as violin player, noted that being behind
a laptop, involves tedium and discomfort, some problems with posture and repetitive
strain injuries. Despite the list of inconveniences, she sees the laptop as her own studio
and place to compose, and does not give special credit to the expectations of the audiences.
More, she has a response for that:

100
The response I have to this is quite simple: change your expectation and come to
a performance involving a laptop open to the idea of listening and being absorbed
in sound. From here you will realise that focusing on only one sense can be an
intense and rich experience, and that when you close your eyes, you see with
the mind and the imagination. I consider my performances to be visual only in
this particular manner.
Helena Gough
(Joaquim 2012/13)

Ramon Bauer from General Magic30, another laptop pioneer with reported concerts 30. www.metropop.eu

starting in 1995, states that the problem is anchored in the fact that a laptop is not a
purpose-build instrument, resulting in a non-appropriate haptic interface to play. Like
other laptopers, he finds that the relation with the audience needs to be questioned and
relocated in a proper context. In Bauers words:

Keyboard and mouse are not adequate at all. Even with fancy external control-
lers, the laptop musician is still (often) stuck in a physical position that hampers
the performer to actually perform (physically). This, in my opinion, hampers the
communication with the audience, which (often) has no clue about cause and
effect of what they hear (or/and seein an (audio-)visual context).
Ramon Bauer
(Joaquim 2012/13)

Despite the general perception conveyed by the 46 laptopers and by 20 in particular,


one specific case attracted our attention: Keiko Uenishi, did not conform to the rest of
the inquiries, and introduced a contradictory argument. When asked about what made
her start using a laptop in live performance? (Question 1) she replied: to look boring, so
audience may stop looking at me/performer on stage. (Thats what I hoped for)31. 31. P
 art of the content correspond-
ent to the survey answered by
In this way, she derived the answers in a completely unexpected direction if we con-
Keiko Uenishi (o.blaat).
sider the average reactions from the other responders. Uenishi example became quite
surprising, pointing to further discussion on how to find space for various personal
tendencies in approach to performance. Later, on question number 4, when listing the
qualities of the laptop, Uenishi stated on first place, (i.e. as a positive statement) the idea
that the laptop is boring to look at (unimpressive-looking plain machine).
This idea is complemented and clarified when Uenishi (question number 5, about
the inconvenients of a laptop) states that people are still trying to look at performers
sitting in front of laptop on stage (and complain if theyre not entertained by looking at
them.). On the same line of explanations, she stresses that maybe, its better to give up
looking at them or, otherwise, maybe performers and/or organizers of the event may
need to restructure different ways to present them (if theyre interested to be seen).
(Joaquim 2012/13)
We may infer from these examples that some artists are aware of the impact caused
by a performance with no visual feedback, in which the gestural information is almost
absent. But in practical terms, they tend to conform to the norm, even if we admit that
they react in personal terms and in gradient ways.

101
Beyond this conclusion, we can recognise that each one is reacting to the issue in
different ways but the vast majority tends to conform.
Thus, a possible speculation may arise: what are the necessary conditions to trigger
a change in the way events are being conceived and produced?
As we have seen on chapter 2, academia tried to address the problem by implement-
ing new solutions. We have seen also that artists have found ways to overcome and
adjust themselves to the problem, but in this particular survey, we presented a case of
a laptoper that where others see a flaw or a problem, she sees a virtue and an advantage.
For future work, we plan to select some individuals and conduct personal interviews
with the objective of determining the specific ways how each one stands in particular
contexts and situations.

The will, the will to do that


32. C
 oronel Kurtz, role played by coronel Kurtz (in Apocalipse Now)32
Marlon Brando on the long fea-
ture film, Directed by Francis
Ford Coppola (1979).

4.Conclusions

Going back to our top question, in accordance to the elements part of this article, we as-
sume that there are strong indicators pointing to a positive answer. Yes, we believe that
artists are too much concerned about the visual satisfaction of the audience, and leaving
their own aural expectations being compromised by what can presumably be a desire of
the viewer. Not the listener, but the viewer.
Besides, nobody proved until now that a flashing interface with 64 buttons in sync
with the BPM of the track is bringing added value to the aural program. What we can
prove with this is that there are more lights turning on and off on stage.
If the artist, as performer, is concerned with his aural impact on audiences, he or she
should take into consideration, more than ever before, the fact that it is fundamental to
think not only in artistic terms, but also in production terms. Like Walter Benjamin said:

We all must bear in mind the vastness of the horizon, from which must be re-
thought forms and categories (...) consistent with the technical circumstances
of our current situation, to get to the forms of expression. (Benjamin 1992, 141)

As we have seen through the examples, according to the model of conformity devel-
oped by B. Douglas Bernheim, the problem of interaction in groups exists and is recognized
for a long time; therefore artists should keep in mind that audiences tend to conform to
the norm despite what each person may think individually.
And because laptopers are also part of the population, they are also under pressure to
keep on the same homogeneous standard of behavior (Bernheim 1994) of the audienc-
es. Therefore, it is so difficult to establish and impose an operative model in the form of
another format of performance in which the aural content is the center, and the only
information to be perceived in the space.
For future work, we envisage a deeper debate around conformity in the frame of the
electronic music en general, and in particular in recent genres and processes associated

102
with the laptop performance, with special emphasis on non-idiomatic genres like, glitch,
drone, ambient, live coding, generative, etc.
We end by formulating and synthesizing 3 fundamental issues in the form of open
questions.
One of the fundamental issues is: are we in presence of a phenomenon of conformity
in which audience tends to replicate what is the average tendency of preferring a certain
degree of visual entertainment (served mostly by the gestural information) in detriment
to the absolute value of the aural performance?
Furthermore: Is this tendency to conforming occurring also with the laptoper, i.e.
is he or she, also worried about the social interaction as a fundamental aspect of his
status as well as intrinsic utility (which refers to utility derived from consumption)?
(Bernheim 1994, 841).
In fact, as Bernheim puts it, status is assumed to depend on public perceptions about
individuals predispositions rather than on the individuals actions.
Third and finally: why have we not already implanted in our global model of perfor-
mances, one type of performance that consists uniquely in an aural experience of content?
Some people would argue that is happening already at home, where many people en-
joy listening music in the dark. But we argue that is not the same, at all. And is not the
same, basically because sound is form in itself, manifested in SPL, and a private room
and a home sound system are not comparable in any circumstance to a venue or a sound
system with large speakers. Hearing is physics, not only, but firstly, and without pressure
level there is no sound. The situation is a similar, but the experience is absolutely different.
We may compare it with a picture from Guernica in a book, and the real Guernica in a wall.
Plus, where is the crowd, that fundamental element in all live performances?
We hope to have raised through this study, a broad debate on the issue that gathers
in the same arena creators and audiences, observed under the microscope that represent
the perceptions of both sides, as well as the perceptions on the perceptions of others.
As Walter Benjamin underlined about the artist, the laptop performer must imply
him more on the production process and keep in mind that the production of his own
work is a fundamental step towards a better aural result. Laptops do not need to turn
themselves into luminous lamps and do not need more light. What they need is to sat-
isfy their primary needs in terms of sound, if they work with sound; and on visuals, if
that is what they do.

Acknowledgements: We would like to say Thank You, to all the artists involved in thesur-
vey for their generous collaboration. Thank you, sound artists: Antye Greie (AGF), Andr
Aselmeier (Incite), Atau Tanaka, Carlos Santos, Carlos Zngaro, Christian Fennesz, Evgeniy
Vaschenko, Fernando Corona (Murcof), Francisco Lopez, Frank Bretschneider, Fried Dhn,
Geir Jenssen (Biosphere), Helena Gough, Jason Forrest, Jerome Faria, Jos Diogo Correia
(Re:Axis), Jorge Haro, Juanjo Palacios, Julien Ottavi, Keiko Uenishi (o.blaat), Kera Nagel
(Incite), Kim Cascone, Marc Behrens, Marek Brandt, Mark Fell, Mark Spybey, Miguel
Carvalhais (@c), Oswald Berthold (Farmers Manual), Pedro Almeida, Peter Votava (Pure),
Ramon Bauer (General Magic), Robert Henke (Monolake), Robin Rimbaud (Scanner), Robin
Storey (Rapoon), Sebastian Meissner, Sergi Jord, Simon Whetham, Stephan Mathieu,
Tarek Atoui, Tim Hecker.

103
Thank you, visual artists Alba G. Corral, Hugo Olim, Lia, Laetitia Morais, Sladzana
Bogeska and Tina Frank for the visual angle.
Thank you, Florian Hecker, Markus Popp, Peter Rebherg and Marc Behrens for the
extra input.
Special thanks for the spontaneous insights and motivation, to: Oswald Berthold
(Farmers Manual), Ramon Bauer (General Magic), Mark Fell, Peter Worth, Sergi Jord.
Last but not least, thank you lvaro Barbosa for the confidence.
The author Vitor Joaquim is sponsored by national funds through the Fundaco para a
Cincia e a Tecnologia, Portugal; grant number SFRH/BD/62082/2009 and project PEst-OE/
EAT/UI0622/2011.

References

Abbey, Jon. MIMEO review. The Wire 219 (2002).


Bach, Glenn. The Extra-Digital Axis MundiMyth, Magic and Metaphor in Laptop
Music. Contemporary Music Review. Vol. 22. 2003.
Barbosa, lvaro. Displaced SoundscapesComputer-Supported Cooperative Work
for Music Applications. Barcelona: Universitat Pompeu Fabra Departament de
Tecnologia, 2006.
Benjamin, Walter. Sobre Arte, Tcnica, Linguagem e Poltica. Lisboa: Relgio
Dgua,1992.
Bernheim, B. Douglas. A Theory of Conformity. Journal of Political Economy 102,
1994:841877.
Cascone, Kim. The Aesthetics of Failure Post-Digital Tendencies in Contemporary
Computer Music. Computer Music Journal 24 (2000): 1218.
Eno, Brian. The Revenge of the IntuitiveTurn off the options, and turn up the
intimacy. Wired Magazine 7.01 (January 1999).
Grossmann, Rolf. The tip of the iceberg: laptop music and the information-
technological transformation of music. Organised Sound. 2008.
Joaquim, Vitor. 6 questions to a laptoper. Survey with 2 demographic questions and 4
open-ended questions, 2012/13.
Jord, Sergi. Digital Lutherie Crafting musical computers for new musics performance
and improvisation. Ph.D. thesis, Departament de Tecnologia Universitat Pompeu
Fabra. 2005.
. On stage: the reactable and other musical tangibles go real. Int J. Arts and
Technology 1 (2008): 268287.
Lpez, Francisco. Environmental sound matter. April 1998.
Mach, Elyse. Glenn Gould Turns His Back on the Audience. 1980. http://www.
laphamsquarterly.org/voices-in-time/glenn-gould-turns-his-back-on-the-audience.
php?page=all (accessed January 17, 2013).
Popp, Markus. Oval-O-Full circle lecture v1.2. 2011.
Wanderley, Marcelo M. Gestural Control of Music. Proceedings of the International
Workshop on Human Supervision and Control in Engineering and Music, 2001: 101130.
Weidenbaum, Marc. New Music Box. May 2006. http://www.newmusicbox.org/articles/
Serial-Port-A-Brief-History-of-Laptop-Music/ (accessed January 07, 2013).

104
Wessel, David, and Matthew Wright. Problems and Prospects for Intimate Musical
Control of Computers. Computer Music Journal 26 (2002): 1112.
Worth, Peter. Technology and ontology in electronic music: Mego 1994present. Ph.D.
thesis, The University of York Music Research Centre. September 2011.
Zadel, Mark. A Software System for Laptop Performance and Improvisation. Masters
thesis (McGill University), 2006a.
Zadel, Mark, and Gary Scavone. Different Strokes: a Prototype Software System
for laptop Performance and Inprovisation. Proceedings of the 2006 International
Conference on New Interfaces for Musical Expression (NIME06), 2006b: 168171.

105
106
The Human Fingerprint in Machine Generated Music

Arne Eigenfeldt
arne_e@sfu.ca
Simon Fraser University, Vancouver, Canada

Keywords: Generative Music, Machine-Learning, Heuristics, Aesthetics of Generative Art.

Abstract: Machine-learning offers the potential for autonomous generative art creation.
Given a corpus, the system can analyse it and provide rules from which to generate new
art. The benefit of such a musical system is described, as well as the difficulties in its
design and creation. This paper describes such a system, and the unintended heuristic
decisions that were continually required.

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org

107
1.
Introduction

Machine-learning offers the potential for autonomous generative art creation. An ideal
system may allow users to specify a corpus, from which the system derives rules and
conditions in order to generate new art that reflects aspects of the corpus. High-level cre-
ativity may then be explored, not only by the careful selection of the corpus, but by the
manipulation of the rules generated by the analysis.
Corpus-based re-composition has been explored most famously by Cope (Cope 2005), in
which his system, EMI, was given representations of music by specific composersfor
example, Bach and Mozartand was successful in generating music within those styles
(Cope 1991). Lewis used autoethnographic methods to derive rules for the creation of free
jazz in his Voyager real-time performance system with which he, and other improvising
musicians, interacted in performance (Lewis 2000). My own work with genetic algorithms
used musical transcriptions of Indonesian Gamelan music to generate new works for
string quartet (Eigenfeldt 2012). In the above cases, artistic creation was of paramount
concern; as such, no attempt would have been made to avoid aesthetic decisions that
would influence the output of the system (in fact, they would have been encouraged).
Using machine-learning for style modeling has been researched previously (Dubnov
et al. 2003), however, their goals were more general in that composition was only one of
many possible suggested outcomes from their initial work. Their examples utilized var-
ious monophonic corpora, ranging from early Renaissance and baroque music to hard-
bop jazz, and their experiments were limited to interpolating between styles rather than
creating new, artistically satisfying music.
The concept of style extraction for reasons other than artistic creation has been re-
searched more recently by Collins (Collins 2011), who tentatively suggested that, given
the state of current research, it may be possible to successfully generate compositions
within a style, given an existing database. This paper will describe our efforts to do just
that, albeit with a liberal helping of heuristics.

2.
Background

People unfamiliar with the aesthetics of generative art might be somewhat perplexed as
to why any artist would want to surrender creative decision-making to a machine. Just
as John Cage pursued chance procedures to eliminate the ego of the artist (Nyman 1999),
I would suggest that generative artists have similarly turned to software in a search for
new avenues of creativity outside of their own aesthetic viewpoints. The benefit of corpus-
based generation avoids Cages modernist reliance upon randomness, and investigates a
post-modernist aesthetic of recombination.
As a creator of generative music systems for over twenty years, I have attemptedas
have most other generative artiststo balance a systems output between determinism
and unpredictability. In other words, I approach the design process as both a composerI
want some control over the resulting musicand a listenerI want to hear music that

108
surprises me with unexpected, but musically meaningful, decisions. Surprise is generally
agreed to be an integral condition of creative systems (Bruner 1992).
Following in the footsteps of forerunners of interactive music systems (Chadabe 1984,
Lewis 1999), my early systems equated surprise with randomness, or, more specifically,
constrained randomness (Eigenfeldt 1989). Randomness can generate complexity, and
complexity is an over-reaching goal of contemporary music (Salzman 1967).
However, it becomes apparent rather quickly that while randomnesseven con-
strained randomnessmay generate unpredictability, the resulting complexity is, using
a term posited by Weaver in 1948, disorganized (Weaver 1948), versus organized complexity
that results from interaction of its constituent parts. In other words, randomness could
never replicate the musical complexity exhibited in a work of music that plays with lis-
tener anticipations and expectations (Huron 2006). These expectations potentially build
upon centuries of musical practice that involve notions of direction, motion, intensity,
relaxation, resolution, deception, consonance and dissonancenone of which can be
completely replaced by random methods.

2.1.
Machine-Learning and Art Production
It makes sense, then, that in order to replicate intelligent human-generated artistic
creation, it would be appropriate to apply elements of artificial intelligence towards
this goal. Machine-learning, a branch of AI in which a system can learn to generalize
its decision-making based upon data on which it has been trained, seems ideal for our
purposes: not surprisingly, adventurous artists already have explored its potential, and
with some initial success.
However, as is often the case with AI, such moderate initial successes have tend-
ed to plateau, and tangible artistic production examples are harder to find. ISMIR1, the 1. http://www.ismir.net/

long-running conference concerned with machine-learning in music, has, since 2011,


included concerts of music that incorporate machine-learning in some way; based upon
attendees informal responses, these concerts have proven to be somewhat unconvincing
artistically. Music Information Retrieval (MIR), as evidenced by the vast majority of papers
at ISMIR, is currently focused upon music recommendation and content analysis, two
avenues with high profit potential. Those few papers with a musicological bent usually
include a variation on the following caveat: the audio content analysis used here cannot
be claimed to be on a par with the musicologists ear (Collins 2012).
The problem that is facing researchers in this particular field is that it is extremely
difficult to derive meaningful information from the necessary data: audio recordings.
Computational Audio Scene Analysis (Wang and Brown 2006) is a sub-branch of machine-
learning that attempts to understand soundor in this case musicusing methods
grounded in human perception. For example, an input signal must be broken down into
higher level musical constructs, such as melody, harmony, bass line, beat structures,
phrase repetitions and formal structuresan exceedingly difficult task, one which has
not yet been solved. Our own research into transcribing drum patterns and extracting for-
mal sections from recordings of electronic dance music (EDM) generated no higher than
a 0.84 success rate, a rate good enough for publication (Eigenfeldt and Pasquier 2011), but
lacking in usability. Therefore, we have resorted to expert human transcription: graduate
students in music were hired to painstakingly transcribe all elements of the EDM tracks,

109
including not only all instrumental parts, but signal processing and timbral analysis as
well. This information can then be analysed as symbolic data, a much easier task.

3.
The Generative Electronica Research Project

The Generative Electronica Research Project (GERP) is an attempt by our research


2. http://www.metacreation.net/ group2a combination of scientists involved in artificial intelligence, cognitive science,
machine-learning, as well as creative artiststo generate stylistically valid EDM using
human-informed machine-learning. We have employed experts to hand-transcribe 100
tracks in four genres: Breaks, House, Dubstep, and Drum and Bass. Aspects of transcription
include musical details (drum beats, percussion parts, bass lines, melodic parts), timbral
descriptions (i.e. low synth kick, mid acoustic snare, tight noise closed hihat), signal
processing (i.e. the use of delay, reverb, compression and its alteration over time), and
descriptions of overall musical form. This information is then compiled in a database,
and analysed to produce data for generative purposes.
Applying generative procedures to electronic dance music is not novel; in fact, it
seems to be one of the most frequent projects undertaken by nascent generative musi-
cian/programmers. EDMs repetitive nature, explicit forms, and clearly delimited style
suggest a parameterized approach. As with many cases of creative modeling, initial
success will tend to be encouraging to the artist: generating beats, bass lines, and synth
parts that resemble specific dance genres is not that difficult. However, progressing to a
stage where the output is indiscernible from the model is another matter. In those cases,
the artistic voice argument tends to emerge: why spend the enormous effort required
to accurately emulate someone elses music, when one can easily insert algorithms that
reflect ones personal aesthetic? The resulting music, in such cases, is merely influenced
by the modela goal that is, arguably, more artistically satisfying than emulation, but
less scientifically valid.
Our goal is, as a first step, to produce generative works that are modeled on a corpus,
and indistinguishable from that corpus style. There are two purposes to our work: the
first purely experimental, the second artistic. In regards to the first, can we create high
quality EDM using machine-learning? Without allowing for human/artistic intervention,
can we extract formal procedures from the corpus and use this data to generate all aspects
of the music so that a perspicacious listener of the genre will find it acceptable? We have
already undertaken validation studies of other styles of generative music (Eigenfeldt et
al. 2012), and now turn to EDM.
It is, however, the second purpose which dominates the motivation. As a composer,
I am not interested in creating mere test examples that validate our methods. Instead,
the goals remain artistic: can we generate EDM tracks and produce a full-evening event
that is artistically satisfying, yet entertaining for the participants?

3.1.
Initial success
As this is an artistic project using scientific methods (as opposed to pure scientific re-
search), we are generating music at every stage, and judging our success not by quan-
titative methods, but qualitative ones. When analysis data was sparse in the formative
stages of research, we had to make a great deal of artistic hypotheses. For example, after

110
listening to the corpus many times, we made an initial assumption that a single 4-beat
drum pattern existed within a track, and prior to its full exposition, masks were used
to mute portions of it (i.e. the same pattern, but only the kick drum being audible): our
generative system then followed this assumption. While any given generated track re-
sembled the corpus, there was a sense of homogeneity between all generated tracks. With
more detailed transcription, and its resulting richer data, the analysis engine produced
statistically relevant information on exactly how often our assumption proved correct,
as well as data as to what actually occurred within the corpus when our assumptions
were incorrect (see Table 1). This information, used by the generative engine, produced
an output with greater diversity, built upon data found within the corpus.

Table 1. Actual data on beat pattern repetition within 8 bar phrases.

Phrase patterns are the relationships of single 4-beat patterns within an 8-bar phrase.

Unique beat patterns in track Unique phrase patterns in track Probability


1 1 .29
>1 1 .21
>1 >1 .5

4.Heuristic Decisions

What has proved surprising is the number of heuristic decisions that were deemed nec-
essary in order to make the system produce successful music. New approaches in AI,
specifically Deep Learning (Arel et al. 2010) suggest that unsupervised learning methods
may be employed in order to derive higher-level patterns from within the data itself; in
our case, not only should Deep Learning derive the drum patterns, but should be able to
figure out what a beat variation actually is, and when it should occur. While one of our
team members was able to use Deep Learning algorithms to generate stylistically accurate
drum beats, the same result can be accomplished by my undergraduate music technol-
ogy students after a few lessons in coding MaxMSP3. I would thus suggest that the latest 3. A common music coding
language, available at www.
approaches in AI can, at best, merely replicate a basic (not even expert) understanding of
cycling74.com
higher-level musical structures. In order for such structures to appear in corpus-based
generative music, heuristic decisions remain necessary. One such example is in deter-
mining overall form.

4.1.Segmentation
As music is a time-based art-form, controlling how it unfolds over time is of utmost im-
portance (and one of the most difficult aspects to teach beginning composition students).
While it may not be as apparent to casual listeners as the surface detailssuch as the
beatform is a paramount organizing aspect that determines all constituent elements.
As such, large-scale segmentation is often the first task in musical analysis; in our hu-
man transcription, this was indeed the case.

111
All the tracks in the repertoire exhibited, at most, five unique segments:
Lead-inthe initial section with often only a single layer present: synth; incom-
plete beat pattern; guitar, etc.;
Introa bridge between the Lead-in and the Verse: more instruments are present
than the Lead-in, but not as full as the Verse;
Versethe main section of the track, in which all instruments are present, which
can occur several times;
Breakdowna contrasting section to the verse in which the beat may drop out, or
a filter may remove all mid and highfrequencies. It will tend to build tension, and
lead back to the verse;
Outrothe fade-out of the track.
Many of these descriptions are fuzzy: at what point is does the Lead-In become the
Intro? Is the entry of the drums required? (Sometimes.) Does one additional part con-
stitute the change, or are more required? (Sometimes, and sometimes.) Interestingly,
during the analysis, no discussion occurred as to what constitutes a segment break:
they were intuitively assumed by our expert listeners. Apart from one or two instances,
none of the segmentations were later questioned. Subsequent machine analysis of the
data relied upon this labeling: for example, the various beat patterns were categorized
based upon their occurrence within the sections, and clear differences were discovered.
In other words, intuitive decisions were made that were later substantiated by the data.
However, attempts to derive the segmentations autonomously proved less than success-
ful, and relied upon further heuristic decisions as to what should even be searched for
(Eigenfeldt and Pasquier 2011).

4.2.Discovering repetition
EDM contains a great deal of repetitionit is one of its defining features. It is important
to point out that, while the specific patterns of repetition may not define the specific style,
they do determine the uniqueness of the composition. Thus, for generative purposes, as
opposed to mere style replication, such information is necessary for successful genera-
tion of musical material.

Table 2. Comparing the number of beat patterns per track, by style.

Style Average # of patterns per track Standard Deviation


Breaks 2.58 1.82
Dubstep 2.5 1.08
Drum & Bass 2.33 2.14
House 1.58 0.57

For example, Table 2 displays some cursory analysis of beat patterns per track, separat-
ed by style. Apart from the fact that House has a lower average, and there is significantly
more variation in Drum & Bass, the number of patterns per track does not seem to be a
discriminating indicator of style (see Table 2).
However, in order to generate music in this style, the number of patterns per track will
need to be addressed: when do the patterns change (i.e. in which sections), and where do

112
they change (i.e. within which phrase in a section)? As we were attempting to generate
music based upon the Breaks corpus, further analysis of this data suggested that pat-
terns tended to change more often directly at the section break, or immediately before it.
Statistical analysis was then done in order to derive the probability of pattern changes
occurring immediately on the section change, at the end of the section, or somewhere
else within the section. Generation then took this into account.
The decision to include this particular feature occurred because we were attempting
to emulate the specific musical characteristics of a style, Breaks; as such, it became one
(of many) determining elements. However, it may not be important when attempting to
generate House. House, which relies much more upon harmonic variation for interest,
will require analysis of harmonic movement, which isnt necessary for Breaks. As such,
heuristics were necessary in determining which features were important for the given
style, a fact discovered by Collins when attempting to track beats in EDM (Collins 2006).

4.3.Computational Models of Style


vs.Corpus-based Composition
As mentioned, our research is not restricted to re-creating a particular style of music, but
creating music generatively within a particular style. The subtle difference is in intention:
our aim is not to produce new algorithms in machine-learning to deduce, or replicate,
style, but to explore new methods of generative music. As such, our analysis cannot be
limited to aspects of style, which Pascal defines as a distinguishing and ordering con-
cept, both consistent of and denoting generalities (Pascal 2013). As discussed in Section
4.2, how beat patterns are distributed through a track is not a stylistic feature, but one
necessary for generation.
Pascal also states that style represents a range or series of possibilities defined by a
group of particular examples: this suggests a further distinction in what we require from
the data. Analysis derives the range of possibilities for a given parameter. For generative
purposes, this range becomes the search space. Allowing our generative algorithms to
wander through this space will result in stylistically accurate examples, but ones of lim-
ited musical quality. This problem is more thoroughly discussed elsewhere, but can be
summarized as the generated music being successful, but lacking surprise through its
homogeneity (Eigenfeldt and Pasquier 2009).
Our new approach considers restricted search spaces, particularly in regard to consec-
utive generated works: composition A may explore one small area of the complete search
space, while composition B may explore another area. This results in contrast between
successive works, while maintaining consistency of style (see Figure 1).
in contrast between successive works, while maintaining consistency of style (see
Figure 1).

Restricted search space


(composition A)
Restricted search space
(composition B)
General (complete) search space

Fig. 1.search
Fig. 1. Restricting Restricting
spacessearch spaces for purposes.
for generative generative purposes.

5 Future Directions 113

Our current goal is the creation of a virtual Producer: a generative EDM artist that is
capable of generating new EDM works based upon a varied corpus, with minimal
human interaction. Using the restricted search space model suggested in Section 4.3,
5.
Future Directions

Our current goal is the creation of a virtual Producer: a generative EDM artist that is ca-
pable of generating new EDM works based upon a varied corpus, with minimal human
interaction. Using the restricted search space model suggested in Section 4.3, a wide vari-
4. soundcloud.com/loadbang ety of output is being generated, and can be found online4. The next step will be to create
a virtual DJ: a generative EDM performer that assembles existing tracks created by the
Producer into hour-long sets. Assemblage would involve signal analysis of every gener-
ated tracks audio in order to determine critical audio features; individual track selection
would then be carried out based upon a distance function between the track data and a
generated timeline, which may or may not be derived from analysis of a given corpus
consisting of DJ sets. This timeline could be varied in performance based upon real-time
data: for example, movement analysis of the dance-floor could determine the ongoing
success of the selected tracks.

6.
Conclusion

This paper has described the motivation for generating music using a corpus, and the
difficulties inherent in the process. Our approach differs from others in that our moti-
vations are mainly artistic. While attempting to eliminate the propensity to insert cre-
ative solutions, we have noticed that heuristic decisions remain necessary. We propose
the novel solution of restricted search spaces, which further separate our research from
style replication.

Acknowledgements: This research was funded by a grant from the Canada Council for
the Arts, and the Natural Sciences and Engineering Research Council of Canada.

References

Arel, Itamar, Derek Rose, and Thomas Karnowski. Deep Machine LearningA New
Frontier in Artificial Intelligence Research. IEEE Computational Intelligence Magazine,
November, 2010.
Bruner, Jerome. The Conditions of Creativity. Contemporary Approaches to Creative
Thinking. H.E. Gruber, G. Terrell, and M. Wertheimer. USA: Atherton Press, 1962.
Chadabe, Joel. Interactive Composing. Computer Music Journal 8:1, 1984.
Collins, Nick. Towards a style-specic basis for computational beat tracking.
International Conference on Music Perception and Cognition, 2006.
. Influence In Early Electronic Dance Music: An Audio Content Analysis
Investigation. Proceedings of the International Society for Music Information
Retrieval, Porto, 2012.
Collins, Tom. Improved methods for pattern discovery in music, with applications in
automated stylistic composition. PhD thesis, Faculty of Mathematics, Computing
and Technology, The Open University, 2011.
Cope, David. Computers and Musical Style. Madison, WI: A-R Editions, 1991.
. Computer Models of Musical Creativity. Cambridge, MA: MIT Press, 2005.

114
Chadabe, Joel. Some Reflections on the Nature of the Landscape within which Computer
Music Systems are Defined. Computer Music Journal. 1:3, 1977.
Dubnov, Shlomo, Gerard Assayag, Olivier Lartillot and Gill Bejerano. Using machine-
learning methods for musical style modeling. Computer, 36:10, 2003.
Eigenfeldt, Arne. ConTour: A Real-Time MIDI System Based on Gestural Input.
International Conference of Computer Music (ICMC), Columbus, 1989.
. Corpus-based recombinant composition using a genetic algorithm.
SoftComputingA Fusion of Foundations, Methodologies and Applications, 16:7,
Springer, 2012.
Eigenfeldt, Arne and Philippe Pasquier. A Realtime Generative Music System using
Autonomous Melody, Harmony, and Rhythm Agents. Proceedings of the XII
Generative Art International Conference, Milan, 2009.
. Towards a Generative Electronica: Human-Informed Machine Transcription
and Analysis in MaxMSP. Proceedings of Sound and Music Computing Conference,
Padua,2011.
Eigenfeldt, Arne, Philippe Pasquier and Adam Burnett. Evaluating Musical
Metacreation. International Conference of Computation Creativity, Dublin, 2012.
Huron, David. Sweet Anticipation: Music and the Psychology of Expectation. Cambridge,
MA: MIT Press, 2006
Lewis, George. Interacting with latter-day musical automata. Contemporary Music
Review, 18:3, 1999.
. Too Many Notes: Computers, Complexity and Culture in Voyager. Leonardo
Music Journal 10, 2000.
Nyman, Michael. Experimental Music: Cage and Beyond. Cambridge University
Press,1999.
Pascal, Robert. Style. Grove Music Online. Oxford Music Online. Oxford University Press,
accessed January 13, 2013,
Salzman, Eric. Twentieth-Century Music: An Introduction. Englewood Cliffs, New Jersey,
Prentice-Hall, 1967.
Wang, DeLiang and Guy Brown. Computational Auditory Scene Analysis: Principles,
algorithms and applications. IEEE Press/Wiley-Interscience, 2006.
Weaver, Warren. Science and Complexity. American Scientist, 36:536, 1948.

115
116
Formalization Using Organic Systemization in
Musical Applications

Jingyin He
jingyinhe@alum.calarts.edu
California Institute of the Arts, Valencia, United States of America

Ajay Kapur
akapur@calarts.edu
California Institute of the Arts, Valencia, United States of America

Keywords:Artificial Intelligence, Conways Game of Life, Cellular Automata, Electronic


Music Performance, Generative, Robotic Musical Instruments, Sound Art.

Abstract: This paper presents the application of Conways Game of Life within the field of
music in a live performance, addressing concerns such as setup, control and aesthetics.
A discussion of selected works identifies the limitations in hardware and software,

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org


andexplains the approach about these constrains to the realization of a system in a
recent work.

117
1.
Introduction

With the advancement of audio technology since 1945, the shift in performance aes-
thetics of electronic music has been significant. This began with Pierre Schaffers
Programme de la Recherche Musicale (PROGREMU) in the late forties (Dack 1999); Karlheinz
Stockhausens Gesang der Jnglinge that is based on aleatory, serialism and emphasis on
sound spatiality (Ungeheuer and Decroupet 1998) and Edgard Varses multimedia per-
formance of Pome lectronique during the mid-late fifties (Ouellette 1973); David Tudors
emergent behaviors within electronic circuits; and Iannis Xenakiss Unit Polyagogique
Informatique CEMAMu (UPIC) system and his integration of probability, statistics and
physics in music in the seventies (Xenakis 1971). It is evident that the introduction of new
technologies extends the aesthetics of performance and composition in electronic music.
Within the field of generative music and the use of biological algorithms in compo-
sition, the aesthetics has been shifting towards its ability to self-organize and generate
emergent behaviors (Dorin 2001). The use of artificial intelligence in musical systems
allows us to explore the new and unexpected from the known (Rosenboom 1990). This
may also be applicable in uncovering new aesthetics within the practice of contempo-
rary sonic arts.

The game made Conway instantly famous, but it also opened up a whole new
field of mathematical research, the field of cellular automata Because of Lifes
analogies with the rise, fall and alterations of a society of living organisms, it
belongs to a growing class of what are called simulation games (games that
resemble real life processes). (Gardner 1970).

Since the publication of the Game of Life in 1970, there have been many variations of
the system and its integration in other disciplines. One example of the integration of the
Game Of Life is by a philosopher and cognitive scientist, Daniel C. Dennett. In his book,
Consciousness Explained, he used the Game of Life as an analogy to illustrate how hu-
mans philosophical constructs, such as consciousness, can evolve based on the physical
laws of our universe (Dennett 1991). Within the field of music, Cellular Automata Music
generator (CAMUS) uses Conways Game of Life to determine the two intervals between
three notes (Burraston et al. 2004). Automaton by Audio Damage uses the Game of Life
to drive modulation effects onto audio signal. Other musical applications that feature
the Game of Life algorithm as a pattern generator include Game of Life Sequencer Bank
by Grant Muller, Newscool in Reaktor by Native Instruments, GlitchDS, Runxt Life and
Tehns Conways life for Monome.
Most applications focus their time in the use of Game of Life as a tool for composition
and in post-production works, focusing less on its live performance aspect. Furthermore,
a review (Burraston and Edmonds 2005) has been written on the historical and techni-
cal aspects of cellular automata in generative electronic music and sonic art. Many re-
searchers in this field currently focus more on partitioning their time between different
systems of cellular automata (often in its different applications in composition), and
less on its performative aspect. Instead of following popular research or commentaries,

118
this paper aims to discuss the aesthetics and methodology on utilizing Conways Game
of Life in a live performance setting.
Section 2 briefly reviews the basic concepts of Conways Game of Life to allow suffi-
cient understanding of the subject for this discussion. The third section presents the
aesthetics and perspectives that motivate the idea. Thereafter, selected works are dis-
cussed in a chronological timeline leading up to the case study of a recent performance,
Bots Formalization. Bots Formalization is the authors milestone in research and study of
integrating the Game of Life in a live performance that involves human-robotic interac-
tions. This paper concludes with a brief overview of future works and applications that
extend the current research and practice.

2.
BackgroundConways Game of Life

The Game of Life is a two dimensional cellular automaton1, devised by mathematician


John Horton Conway. It is a simulation based on the births and deaths of living organ-
isms in a system (Gardner 1970). A two dimensional cellular automaton is a mathe-
matical model, in which cells are assigned a particular state, which then changes by 1. Cellular Automaton is created by
John von Neumann and Stanis-
turn according to specific rules conditioned on the states of the neighboring cells. Two- 1
law Ulam to study the process
dimensional simply notates the movement of the cells in both x and yaxis. (Krink 2003) of reproduction and growth.
(Weisstein 2012)
Theoretically, the cellular automaton is based on an infinite square grid lattice; however,
the size of the board is usually defined so that the number of cells present in the arrays
is finite. In the automaton, a cell has two possible states: living or dead. These states
are usually represented by colors. Black counters usually represent living cells, while
white counters represent dead cells. (Gardner 1970) The state of the cells is determined
by the state of the 8 neighboring cells surrounding it at every generation. The rules that
determine the state of the cell for the next generation are as follow: (Gardner 1970)
Let the number of neighboring cells be n,
1. A dead cell becomes alive if n 3. (Birth)
2. A living cell becomes dead if n 1 (Death by exposure)
3. A living cell becomes dead if n 4 (Death by overcrowding)
4. A living cell stays alive if n = 2 or 3 (Survival)
The automaton begins with an initial pattern. Rules of birth and death are applied
throughout the array to form the next generation. These rules are applied to the new
generation that results from the initial pattern again. Here is an example of a simple
pattern:
Let generation be g, hence at initial pattern, g=0

g=0 g=1 g=2 g=3

Beehive

Fig. 1. Illustration of the life history of a simple pattern. (Conway 1970).

119
Figure 1 (above) shows the life history of a simple pattern of tetrominoes, four rook-
wiseconnected counters. (Gardner 1970) At g=3, the automaton ceases. This is because
the resultant pattern of the cells in the subsequent generations is constant. This pat-
tern produced is called a still life. The automaton will cease, when any of the following
occurs (Gardner 1970):
1. All the cells on the board are dead.
2. The cells settle into a stable pattern that remains unchanged in the subsequent
generations.
3. The cells oscillate in a cycle of two or more periods.

3.
Aesthetics and Perspectives

While the aesthetics differs with its applications, it stays within its fundamental of
formalization using organic systemizations, specifically Conways Game of Life, to bring
about structures. This extends to applications, the phenomenal of organic systemization
in which an initial configuration evolves and brings about emergent behaviors based
on the algorithms grammar.
The outlook to performing and composing with the mentioned methodology can be
explained in an analogy as such:

The perfect rhythm of the last slogan breaks up in a huge cluster of chaotic shouts,
which also spreads to the tail. Imagine, in addition the reports of dozens of ma-
chine guns and the whistle of bullets adding their punctuations to this total dis-
order. The crowd is then rapidly dispersed, and after sonic and visual hell follows
a detonating calm, full of despair, dust and death. (Xenakis 1971)

The initial patterns can be perceived as the initial state of order. By starting the
automaton and applying the rules of the game to all the cells on the board, the initial
pattern breaks into chaotic generations of births and deaths. It ends in one of three ways:
cells fading away completely, settling into a stable pattern that remains unchanged, or
going into a stable oscillating phase with two or more periods of cycle. (Gardner 1970)
It is important to note that one has to have a good understanding of the Game of
Life to utilize it strategically within compositions. (Burraston, Edmonds 2005) With the
mastery of theory and practicum, one can alter the system to do the following: prolong
or shorten the generations of births and deaths, resume lives, or put the system to a
stop. If one is able to control the system amidst chaos, one should be able to manage
a series of events, or in musical terms, articulate musical gestures eloquently. In such
cases, the theory refers to the grammar and vocabulary of Conways Game of Life, while
the practicum refers to its performancethe deliberate strategy of making choices
that are aesthetically successful within the composition and the Game of Life in a live
performance.
The main aesthetic of performance using Conways Game of Life is driven towards the
search and discovery of new aesthetics in contemporary sonic art practices. The use of
Game of Life establishes a unified and equal field, setting decisions free from the bound-
aries of stylistic influences. The performers are able to perform music in the analogy

120
of the Game of Life, blurring rhythmic rigidness, structure, while disregarding the un-
wavering radiance of tonality and harmony. The criterion of musicality is two-fold. The
first is how two or more sonic materials interact with one another to create different
sonic textures and timbres. The second is how these different sonic textures interact
with each other. As such, the classification of music as being either ugly or beautiful
is disregarded. This also implies that any interaction between two or more sonic mate-
rials can be considered musical. However, this should not be taken for granted, as the
strategic choice of play used for the organizations and interactions are crucial points to
yield a performance of valid musicality.

4.Case Studies

4.1.FD2.111209
An initial musical application with the use of the Game of Life to further examine the
plausibility of taking the Game of Life to a performance stage is FD2.121109.2 Created in
2009, FD2.111209 is an electronic composition based on the organization of musical mo- 2. https://soundcloud.com/
/jprecursor/fd-2-121109
tifs using Conways Game of Life. (Figure 2) While this work focuses on the compositional
aspect; it set the foundations of studying to the Game of Life in a performance setting.
The main objective of FD2.111209 is to explore how events in the Game of Life relate to
the intensity of musical events that takes place, and how parameters from the Game of
Life can be mapped to musical attributes such as spatial location and pitch. In this work,
two gliders set in a head-on collision path is used as the initial seed configuration, in
an 11 by 11 grid array. The sequence lasts for 12 generations and ended with a 2 by 2 still
life, more commonly known as the block.
FD2.121109 further ascertains the importance of how certain cell configurations
bring about different musical outcomes. Parametrical Thinking involves the use of vari-
ables from a systemization to control the values of musical attributes that are bounded
by upper and lower limits. (Cope 1991) It brings to attention that Parametrical Thinking
is an essential element to integrating the Game of Life in musical applications.

121
Fig. 2. FD2.121109 Score Sequence.

4.2.Deboulerait
Prior to designing a unique system for performance utilizing the Game of Life,
Deboulerait (video at http://vimeo.com/19728681) utilizes the commercially available
MIDI controller, Novation's Launchpad, and a port of the Game of Life algorithm that was
originally created for the Monome (Crabtree 2008). Based on the Monome version of the
code, a version specifically for the Launchpad was ported in ChucK. ChucK is a new (and
developing) audio programming language for real-time synthesis, composition,
performance, and analysis (Wang 2008). It was chosen for its highly precise scheduler,
which does not compromise the dynamics and expressiveness of the control rates.
Premiered at COLAB 2010 (LASALLE College of the Arts Graduation Showcase,
Singapore) with two performers, Deboulerait features Conway's Game of Life as an
instrument and focuses on the ability to play the Game of Life as an instrument during
the piece. It consists of a sequential track of events that guides the performers in their
improvisation by sending visual cues to the performers' instruments. An overview of the
system setup is shown in Figure 3. The motivation is the discovery and exploration of
using the algorithm and rules of an organic mathematical model as the basis of an
instrument.

Fig. 3. Overview of the system: Deboulerait.

While in most electronic performances the audience does not get to see what is
happening on the performers' screens and controllers, Deboulerait features a projected
live video stream showing the performers at their instruments. This adds an element
of visual performance aesthetics and draws attention to the performers' aesthetic
choices in the Game of Life during the structured improvisation.
The following limitations were found in the process:
1. The interval between each generation is too consistent, resulting in a rigid
rhythmic structure and texture.
2. The dynamics are either too consistent or too chaotic (if a random function is in place),
resulting in a performance that is either musically bland in color or too incoherent.
3. The grid array is finite, limiting the performers' precision and diversity of control.
4. Only one instance of the Game of Life can be run on a single device.
These limitations subsequently became guidelines for the realization of a new per-
formative setup in the most recent performance, Bot Formalization.

4.3.Bot Formalization
Bot Formalization sets out to test the newly customized performative system that inte-
grates Conway's Game of Life. It explores performability through the formalization
of custom-built robotic musical instruments. A designed systemization interfaces the
mechanical onsets of actuators to a controller that breathes the Game of Life. This is
similar to the agent-based system for robotic musical performance (Eigenfeldt 2008), but
extends it to an array of robotic musical instruments with the use of the Game of Life
cellular automata.

The custom-built robotic musical instruments (Figure 4; further information at
http://dev.karmetik.com/labs/robotics) residing in the Machine Lab
at California Institute of the Arts (Kapur 2011) include seven independent robotic units

that have a total of 170 actuators, consisting of idiophones and membranophones. They
are connected to a main server, communicating with the users via the Musical Instrument
Digital Interface (MIDI) protocol over a Local Area Network.

Fig. 4. MahaDeviBot (left) and GanaPatiBot (http://www.karmetik.com/labs/robots)

The most important basis for the successful utilization of the Game of Life is mapping
the automaton's parameters to musical attributes, as mentioned in the earlier section.
These parameters from the Game of Life and the robotic instruments are shown in
Table 1.

Table 1. Parameters from Game of Life against robotic musical instruments

Game of Life                                         Robotic Musical Instruments

State of cell: On/Off (0, 1)                         Number of actuators: 170
Coordinates of cell: x axis / y axis (integer)       Volume: soft to loud
Interval between generations: time (float)           Speed: slow to fast

The Game of Life is set up in the program to send and receive either MIDI or Open Sound
Control (OSC) messages. The outcome of each cell can be either its position coordinates
and current state (x, y, state) or a cumulative message consisting of the position
coordinates converted to a MIDI note and the state scaled to a MIDI velocity (MIDI note,
velocity).
This allows the communication between the Game of Life and an external device.
The use of a controller interface bridges the user and the game itself, while enabling the
performer to figuratively play away from the computer screen. The interface controller
used in this setup is the grayscale64 by Monome. It consists of 64 buttons and an
accelerometer that registers two axes (Figure 5).
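
The two message formats described above can be sketched as follows; the sketch uses the python-osc package as an assumption (the original system's transport code is not reproduced here), and the OSC addresses, port and note layout are illustrative.

# Sketch of the two message formats: raw (x, y, state) and cumulative
# (MIDI note, velocity). python-osc, addresses and port are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)    # hypothetical receiver

def send_raw(x, y, state):
    """Send the raw form: position coordinates and current state."""
    client.send_message("/life/cell", [x, y, int(state)])

def send_note(x, y, state, width=8, base_note=36):
    """Send the cumulative form: coordinates folded into a MIDI note,
    state scaled to a MIDI velocity."""
    note = base_note + y * width + x           # row-major note layout
    velocity = 100 if state else 0             # simple on/off scaling
    client.send_message("/life/note", [note, velocity])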

Fig. 5. The Monome Grayscale64. (http://www.monome.org)

To bring the capacity of the controller closer to the number of actuators in the robotic
instruments, the controller is set up to run two instances of the Game of Life
synchronously and independently.

Table 2. Overview of Mapping

Controller                  Game of Life                          Robotic Instruments

Buttons (64 × 2)            Cells                                 Actuators (128)
Accelerometer: x axis       Duration of each generation / State   Speed of actuators / Hitting velocity
Accelerometer: y axis       State / Duration of each generation   Hitting velocity / Speed of actuators

In summary, the buttons are mapped to the cells in the Game of Life. The additional
accelerometer sensors in the grayscale64 allow us to add further control to the Game of
Life system. In this case, the x axis is mapped to the time duration of each generation
and the y axis is mapped to the MIDI velocity. The mapping of the axes is switched in the
second instance of the Game of Life. The state of each cell also acts as a gate that allows the
passing of accelerometer data for MIDI velocity (Table 2). These mappings give the user
additional control over dynamics and rhythm, which increases the articulation.
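
A small sketch of this accelerometer mapping is given below; the raw sensor range and the output ranges are assumptions, chosen only to make the gating and scaling explicit.

# Sketch of the accelerometer mapping summarised in Table 2.
# The raw range (0-255) and the output ranges are assumptions.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    value = max(in_lo, min(in_hi, value))
    return out_lo + (out_hi - out_lo) * (value - in_lo) / (in_hi - in_lo)

def accel_to_interval(x_axis, raw_max=255):
    """x axis -> duration of each generation, in milliseconds."""
    return scale(x_axis, 0, raw_max, 80.0, 1200.0)    # fast to slow

def accel_to_velocity(y_axis, cell_state, raw_max=255):
    """y axis -> MIDI velocity, gated by the state of the cell."""
    if not cell_state:                                 # dead cell: no strike
        return 0
    return int(scale(y_axis, 0, raw_max, 20, 127))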
Bot Formalization (a selected video excerpt of the performance is available at
http://vimeo.com/channels/vie/55973717) is driven towards a structured improvisation of
the analogy mentioned above in Xenakis's quote, of the series of events that proceed one
after another from the state of order. It also aims to explore the discovery of new
organizations of timbre and rhythm using the Game of Life.

5. Conclusion

The system used in Bot Formalization overcame the limitations mentioned in Section
4.2. Dynamics and fluidity in rhythm and structure are achieved by mapping the temporal
and velocity attributes to a volatile parameter in the Game of Life. By increasing the
number of instances of the Game of Life that run synchronously and independently, the
restriction on diversity of control is reduced.
In a 1996 talk in San Francisco, Brian Eno referenced Metaphors We Live By by George
Lakoff and Mark Johnson. Eno mentions that the use of different metaphors for a situation
will change one's perspective on the situation (Eno 1996). The metaphorical representa-
tion of living organisms evokes different insights into performance, uncovering vibrant
dynamics within the human-Game of Life interaction. This may also extend to influence
the creation of different timbres and textures, as well as rhythmic structures, that one
may overlook during a conventional process.
This paper addresses the live performance aspect of using the Game of Life automata,
bringing into the discussion crucial elements such as dynamics and temporal parameters,
which are often overlooked. Focusing on the perspective that results from using the Game
of Life in a live performance, a discussion of selected works (further applications that
extend these ideas can be found at https://vimeo.com/channels/vie/) leads to the
realization of a designed systemization that addresses the limitations of array size,
dynamics and rhythmic structures. (All works mentioned are current as of December
31st, 2012.)
Future work includes designing a universal system that works in both MIDI and OSC,
extending the approach to other forms of cellular automata, and perhaps its integration
and application in other performing arts, as well as pedagogy for performance using
Conway's Game of Life.
By way of conclusion, while some areas of consideration may have been omitted from this
discussion, this paper presents a specific methodology and perspective that lead to the
realization of a tool that brings the musical application of Conway's Game of Life to a
live performance setting.

Acknowledgements: Special thanks to the Machine Lab Team (Eric Singleton, Nick Suda,
Kameron Christopher and Michael Darling) at California Institute of the Arts for their
patience and technical assistance with the robotic musical instruments, and Monome.

References

Burraston, Dave, Edmonds, Ernest, Livingstone, Dan and Miranda, Eduardo Reck.
Cellular Automata in MIDI based Computer Music. Plymouth, University of Plymouth, 2004.
—. Cellular Automata in generative electronic music and sonic art: a historical and
technical review. Digital Creativity 16, no. 3: p. 165–85, 2005.
Crabtree, Brian. Conway's life App::life [monome]. March 31, 2008. Accessed
December 25, 2012. http://docs.monome.org/doku.php?id=app::life.
Cope, David. Computers and Musical Style. Madison: AR Editions Inc., 1991.
Dack, John. Systematizing the unsystematic. Lansdown Centre for Electronics Art,
UK: Diffusion vol. 7, 1999.
Dennett, Daniel. Consciousness Explained. Boston: Back Bay Books, 1991.

Dorin, Alan. Generative processes and the electronic arts. Organised Sound, vol. 6, no. 1:
p. 47–53, 2001.
Eigenfeldt, A. and Kapur, A. An Agent-based System for Robotic Musical Performance.
Genoa, Italy: International Conference on New Interfaces for Musical Expression
(NIME), June 2008.
Gardner, Martin. Mathematical Games: The fantastic combinations of John Conway's
new solitaire game "life". Scientific American (223), p. 120–123, 1970.
. The Game of Life, Part III. In M. Gardner, Wheels, Life and Other Mathematical
Amusements, W.H. Freeman. p. 246, 1983.
Wang, Ge. The ChucK Audio Programming Language: A Strongly-timed and On-the-fly
Environ/mentality. PhD Thesis, Princeton University, 2008.
Kapur, Ajay, Darling, Michael, Diakopoulos, Dimitri, Murphy, Jim W., Hochenbaum,
Jordan, Vallis, Owen and Bahn, Curtis. The Machine Orchestra: An Ensemble of
Human Laptop Performers and Robotic Musical Instruments. Computer Music
Journal vol. 35, no. 4: p. 49–63, Winter 2011.
Krink, T. Cellular Automata. 2003. Accessed December 31, 2012 (The Gale Group Inc.)
from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3404200041.html
Ouellette, Fernand. Edgard Varèse: A Musical Biography. London: Calder and Boyars, 1973.
Rosenboom, David. The Performing Brain. Computer Music Journal vol. 14, no. 1:
p. 48–66, Spring 1990.
Ungeheuer, Elena, and Decroupet, Pascal. Through the Sensory Looking-glass:
the Aesthetics and Serial Foundations of Gesang der Jünglinge. Perspectives of New
Music, vol. 36, no. 1: p. 97–142. Princeton: Princeton University Press, 1998.
Weisstein, Eric W. Cellular Automaton. MathWorld, A Wolfram Web Resource.
Accessed December 10, 2012. http://mathworld.wolfram.com/CellularAutomaton.html
Xenakis, Iannis. Formalized Music: Thought and Mathematics in Composition.
Bloomington: Indiana University Press, 1971.

What Are You Telling Me? How Objects Communicate
Through Dynamic Features

Sara Colombo
sara.colombo@mail.polimi.it
Design Department, Politecnico di Milano, Italy

Lucia Rampino
lucia.rampino@polimi.it
Design Department, Politecnico di Milano, Italy

Sara Bergamaschi
sara.bergamaschi@mail.polimi.it
Design Department, Politecnico di Milano, Italy

Keywords: Design, Communication, Dynamic Products, Sensory Features.



Abstract: Product sensory features are handled by designers to convey implicit messages
to users. However, thanks to technology advances, traditional static product features are
becoming dynamic, able to actively change over time. Exploring how these new proper-
ties can communicate a different layer of information is the aim of the study presented
in this paper. To achieve the goal, a case study analysis was performed, by collecting real
products, prototypes and concepts which present dynamic sensory features. The analysis
of the selected samples led to the identification of a number of categories of dynamic
products, within which it was possible to stress some parameters and criteria useful for
designing such artefacts. Relations among the senses activated, the contents of the com-
munication and the source of the information have been identified, and insights have
been proposed as results.

1. Introduction

Artifacts have the ability to communicate messages to users through different languages
and media. Product form has always been considered as a communication means: prod-
ucts convey messages to users through their sensory properties (visual, tactile, auditory,
etc.), and their communicative potential has been widely investigated in the last decades
by the field of product semantics (Krippendorff 1989, 2004; Demirbilek and Sener 2004).
However, as Krippendorff and Butter (1984) affirm, products convey messages not only
through their physical features, but through three main channels: information displays,
graphic elements fixed to product surface and product form, shape and texture.
We can thus affirm that the information which products convey is static and related
to the product itself (affordance, mode of use, symbolic meaning, character) when the
medium is the product form. But such information can also be dynamic and connected
to external situations, phenomena and sources: this happens when the medium is a dis-
play or an interface. Indeed, displays and interfaces are able to communicate informa-
tion that change over time, but in order to do this, they traditionally use a language that
is outside the domain of product semantics (Krippendorff and Butter, 1984): the verbal,
iconic or numeric language.
However, recent advancements in electronics, computation and material technologies
have revolutionized the concept of product aesthetics and form as traditionally conceived.
Sensory properties (shape, colour, sound, smell, texture, surface, etc.) of artifacts can
in fact be transformed over time, becoming dynamic (e.g. a kettle that indicates that
the water is boiling by showing a texture on its surface; Fig. 1). These new features actively
transform artifacts' forms in response to either external stimuli, users' interactions or
automatic pre-programmed schemes.

Fig. 1. One Kettle by Vessel Design. The product changes its own surface when the water boils.

From the product design point of view, the possibility to create dynamic features gives
designers additional material to work with:

Designing such products and systems requires an aesthetic that goes beyond tra-
ditional static form aspects. It requires a new language of form that incorporates
the dynamics of behavior. (Ross and Wensveen 2010)

The emotional content of these dynamic products seems to be very high and stems
from their capacity to surprise and delight users' senses. For this reason, in many cases
where dynamic sensory features are embedded into products, the aim is mainly to en-
gage, surprise, or provoke users. Nonetheless, changes in the product form (intended as
the mix of the product's sensory properties) may be a language through which it is possible
to convey information and messages to users in a more intuitive and less conventional
way than using verbal and iconic language. The advantage is that the communication,
even if less complex, may become more engaging for users, and the interaction with
products more pleasurable.
The potential of this revolution in the design field is very high, but it seems that
research in this area still lacks a theoretical base that could support the adoption of
these new communication possibilities in design practice.

1.1.Objectives
The present study analyses, through the collection of a number of case studies, the pos-
sibility of communicating messages through products' dynamic and active sensory features.
The final objective is to shed some light on the issue of dynamic sensory features from
the product design perspective, in order to outline a first theoretical framework in this
area of research. In more detail, our study intends to answer the following questions: is
it possible to communicate to users through dynamic changes in product features? What
kinds of content can be conveyed? To what extent can different senses be activated in
conveying a message? Do the senses play different roles in the transmission of the message?
The answer we intend to give is theoretical and in the form of hypotheses. In the next
section, the research process we followed is described in detail.

2. The research process

Our starting assumption was that nowadays, in order to communicate a message to the
final user, the designer can also exploit a product's physical change. Indeed, in recent years,
a number of commercial products, prototypes and concepts showing dynamic sensory
features have been developed, and interest in this topic seems to be constantly growing.
However, research in this field is still at an embryonic stage, and there are no theoretical
approaches to the analysis of this new artefact category.
In order to have an overview of what has been occurring in this area, we decided
to adopt a case-study strategy, through the collection and examination of a number of
concrete examples. As Baglieri et al. (2008) state, this research strategy is appropriate
when the research subject is still emerging, to suggest some propositions to be verified
afterwards in different contexts, in order to reach a shared theory. Through this pro-
cedure, we intended to extrapolate some theoretical insights by an inductive process,
starting from what has already been done in the design field in terms of both products
and concepts.
The case-study research process followed three steps:
1. Selecting samples
2. Describing and classifying samples
3. Analyzing results and shaping hypotheses

2.1. Step 1: Selecting samples
The sample selection was performed among design concepts, prototypes and commercial
artifacts. The samples' sources were the following:
papers and journal articles (e.g. the International Journal of Design and Design
Issues);
concepts that have entered international design contests (e.g. Red-Dot and the Samsung
Young Design Award);
design blogs (e.g. Design Boom, Core77, Yanko Design);
well-known design universities and design research centres (e.g. TU Delft, TU
Eindhoven, Cambridge Consultants).

At the end of the first selection process, 70 samples were collected. In figure 2 some
examples are shown: the Solid Poetry concrete tiles (fig. 2a) change their colour when wet,
creating different patterns; the Flower lamp (fig. 2b) changes its shape on the basis of the
electricity consumption in the house; the Scent of Time clock (fig. 2c) releases a different
smell into the environment each hour; Wearable Detect Air (fig. 2d) is a jacket that lights
up and vibrates when it detects too much pollution in the air.


Fig. 2. a. Solid Poetry by Studio Molen; b. Flower lamp by Interactive Institute Swedish ICT;
c.Scent of time by Hyun Choi; d. Wearable detect air by Genevieve Mateyko and Pamela Troyer.

On these 70 samples, a further selection process was performed, on the basis of a
number of parameters hereafter described.
First of all, we evaluated the communicative intent of the product. This way, we iden-
tified two different categories of dynamic products:
communicative products, which aim at transmitting a message to users through
changes in their sensory features (e.g. Flower lamp, which indicates the electricity
consumption through its changing shape; fig. 2b)
expressive products, in which the dynamic change has just an expressive, aesthetic
or emotional aim (e.g. Solid Poetry is not designed to convey a specific message, but
just to pursue an aesthetic intent; fig. 2a).

Thus, we decided, on the basis of our objectives, to discard expressive products and
to focus our analysis on the category of communicative products, that were further eval-
uated on the basis of the novelty factor. This way, we discarded products which adopt
standardized dynamic signals, such as common LED lights or sound alarms embedded
in appliances. At the end of the selection process, we obtained 45 samples.

2.2. Step 2: Describing and classifying samples
In this second step, our aim was to identify some parameters useful for the classification
of dynamic products. The three parameters we considered were: who or what is sending
the message (i.e. the message source); the nature of the message; the stimulated senses.
The classification of the samples according to these three parameters helped us in
understanding in what situations dynamic products can be adopted to inform the user,
what kinds of messages they are able to convey and which senses can be activated in
order to convey a message.
2.2.1. The source
The information source is the sender of the message. According to this parameter,
samples were classified into three different categories:
products transmitting messages coming from the product itself (e.g. when they
communicate their internal states, the progression of their works, their energy con-
sumptions, and so on. An example is the Coral cooking, a pot that changes color from
blue to red to indicate the increase of its temperature; fig. 3a)
products transmitting messages coming from the external environment which they
are part of (an example is the E-Plant, that lights up and changes colour to indicate
the electricity consumption in the house; fig 3b)
products transmitting messages coming from a person that wants to keep in touch
with another one or wants to communicate his/her own emotions to others (in this
case we talk about human-human interaction. For instance, Firefly is a soft sphere
which reproduces the heartbeat of the beloved person, emitting a pulsating light; fig. 3c)

Fig. 3. a. Coral cooking by William Spiga & Juliana Martins; b. E-Plant by The Signers; c. Firefly by Secil Ugur

2.2.2. The message
The content of the message can vary a lot, going from the temperature of a room, to the
emotion of a person, to the reminder of an action that has to be undertaken by the user.
Even though the content is so varied, messages can be classified on the basis of their pur-
poses. Indeed, from our analysis, it emerged that a message can be aimed either at just

informing the user about something (in this case, we talk about cognitive messages) or
at exhorting the user to take an action (in this case, we talk about exhortative messages).
In the first case, the product aims at transmitting an information that does not de-
mand any immediate intervention (e.g. the room is warm, fig. 4a). In the second case,
the product requires the user to do something (for instance you are dehydrated, drink
water!, fig. 4b).

Fig. 4. a. Heat-sensitive wallpaper by Shi Yuan b. I-Dration by Cambridge Consultants.

2.2.3. The stimulated senses
Human beings decode information with their senses, thus, in the communication process,
senses can be defined as the receivers of the message (Crilly 2004). For this reason, in
the selected samples, we analyzed which senses are stimulated by the dynamic features.
To do so, we divided all the samples into sensory categories, identifying visual, tactile,
auditory and olfactory products. Then, for each sense, we classified the stimuli adopted
by the products to activate it; for instance, the visual modality is stimulated by changes
in product colour, shape or light, while the tactile modality by changes of temperature,
pressure, position and vibration (fig. 5).

Fig. 5. Map of the sensory stimuli.

2.3. Step 3: Analyzing results and shaping hypotheses
In order to extrapolate results and shape hypotheses from the case-study analysis, we
summarized each sample into a card (fig. 6). In it, the source, the kind of message and
the activated sense are indicated. Subsequently, graphics were created in order to link
both the source and the message to the activated sensory modality. From this, hypotheses
were shaped and, finally, some considerations on the differences between prototypes
and commercial artifacts were also made (fig. 7).

Fig. 6. Card sample.

Fig. 7. Products and concepts distribution. Each coloured area corresponds to a sample.
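
The cross-tabulations behind figures 8 and 9 can be sketched as a simple tally over the cards; the Python sketch below uses generic placeholder records, not the actual 45 classified samples.

# Sketch of the tally behind the source-vs-sense and message-vs-sense maps.
# The sample records are placeholders, not the real data set.
from collections import Counter

cards = [
    {"source": "product",     "message": "cognitive",   "sense": "visual"},
    {"source": "environment", "message": "exhortative", "sense": "tactile"},
    {"source": "person",      "message": "cognitive",   "sense": "tactile"},
]

source_vs_sense = Counter((c["source"], c["sense"]) for c in cards)
message_vs_sense = Counter((c["message"], c["sense"]) for c in cards)
print(source_vs_sense.most_common())
print(message_vs_sense.most_common())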

Source vs. senses


Each sample has been represented on the sensory map according to the source of the message
and the sense it activates (fig. 8). Hereafter, for every sense, some considerations are drawn.

Fig. 8. Relations between the sensory stimuli and the messages sources.

VISUAL STIMULI. The majority of the case studies uses visual stimuli to transmit mes-
sages. The change of light intensity is the most used stimulus in the selected samples,
but it is employed only to convey messages coming from the environment or a person; indeed,
messages coming from products (e.g. internal state or work progression) are conveyed
only by shape and colour changes. Colour is an important stimulus as well, but it is not
adopted to transmit personal messages. To investigate whether these results are coincidental
or depend on semantic reasons, a further study may be necessary.
TACTILE STIMULI. Tactile stimuli are the second most used and they are mostly adopted to
transmit messages coming from a person; in this case, the employed stimuli are pressure
and temperature changes. Based on the studies of Gallace (2010), who describes touch
stimuli as an expression of affection, we can interpret pressure and temperature as a
simulation of the beloved person's touch. Vibration is employed when the sender is the
environment, for instance to communicate that there is too much pollution in the
air (fig. 2d).
AUDITORY AND OLFACTORY STIMULI. Sound and smell turned out to be the least used
senses in our selected samples. With regard to sound, this can be explained by the fact that
one of the parameters for the selection was the novelty factor: since the use of sound is
already well established in the market, it is likely that, when developing new concepts, its
investigation proves less stimulating. Indeed, sound is used only in commercial products.
On the contrary, smell is used only in concepts and prototypes (fig. 7). Generally, smell is
the most overlooked sense in design, despite its ability to convey messages and its high
emotional potential. In olfactory products, the fragrances used to communicate mes-
sages are chosen by the user; this can stem from the assumption that smell is strongly
connected to people's memories (Cavalleri 2009): by choosing one's favorite fragrance,
one can more easily remember the information the product wants to convey. This is,
for instance, the case of Scent of Time (fig. 2c), which every hour releases a different
fragrance chosen by the user.
Message vs. senses

Fig. 9. Relation between sensory stimuli and message nature.

According to figure 9, most of the messages conveyed by dynamic sensory features are
cognitive, i.e. aimed at transferring some knowledge, rather than exhorting the user to do
something. Specific sensory stimuli are associated with a particular kind of message. For instance,

within the touch category, vibration is used in order to exhort users to take an action,
while pressure is chosen to convey exclusively cognitive messages.

3. Conclusions

The sample analysis confirmed that the designer, in order to communicate a
message to the final user, can design a physical change in the product. Through such changes,
the product can transmit messages that originate either from the product itself, from the
environment, or from a person who wants to communicate with someone else.
The case study analysis confirmed that dynamic products can rely on all the sensory
modalities. Indeed, transformations in tactile and olfactory features can also commu-
nicate specific kinds of information to users. However, sight is still the most employed
sense, likely because it has always been the dominant modality in human perception
(Hekkert 2006). Moreover, it emerged that designers do not pay equal attention to the dif-
ferent sensory modalities. Touch and vision, linked to the materiality of the product, are
usually the main focus of designers' activity. Hearing and smell, on the contrary, perceive
qualities that are linked to immaterial features, and are often added to the product in
the final steps of the design process (e.g. for digital sound). This might be the reason why,
so often, these two senses are overlooked in product design practice and are left to spe-
cialists, who design these features as added properties (this is the case with sound design).
The results we propose in this work are based on the case studies analysis, with ref-
erences to previous research. The direct verification of these hypotheses, for instance
through tests with users, may be a subsequent step of the study.

References

Cavalleri, Rosalia. Il naso intelligente, cosa ci dicono gli odori. Roma: Editori Laterza,
2009 (Italian text)
Crilly, Nathan, James Moultrie and P. John Clarkson. Seeing things: consumer
response to the visual domain in product design. Design Studies 25 (2004): 547–577
Demirbilek, Oya, and Bahar Sener. Product design, semantics and emotional
response. Ergonomics (2004): 1346–1360
Gallace, Alberto and Charles Spence. The science of interpersonal touch: An overview.
Neuroscience and Biobehavioral Reviews 34 (2010): 246–259
Hekkert, Paul. Design aesthetics: Principles of pleasure in design. Psychology Science,
48(2) (2006): 157.
Krippendorff, Klaus and Reinhart Butter. Product Semantics: Exploring the
Symbolic Qualities of Form. University of Pennsylvania Press (1984)
Krippendorff, Klaus. On the essential contexts of artifacts or on the proposition that
design is making sense (of things). Design Issues, 5(2) (1989)
—. The semantic turn: A new foundation for design. Boca Raton: Taylor & Francis,
2004
Ross, Philip R. and Stephan A. G. Wensveen. Designing aesthetics of behavior in
interaction: Using aesthetic experience as a mechanism for design. International
Journal of Design, 4(2) (2010): 3–13.

Recursive Digital Fabrication of TransPhenomenal Artifacts

Stephen Barrass
stephen.barrass@canberra.edu.au
University of Canberra

Keywords: Recursive, Generative, CAD, 3D, Fabrication, Bell, Acoustic, Sounding Object,
TransPhenomenal.

Abstract: The concept of a transphenomenal artifact arose from a project to digitally
fabricate a series of bells, where each bell is shaped by the sound of the previous bell.
This paper describes the recursive process developed for fabricating the bells in terms of
generic stages. The first bells fabricated with this process raised the question of whether
the series would converge to a static attractor, traverse a contour of infinite variation,
or diverge to an untenable state. Reflection on these early results encourages further
development of the recursive fabrication process, and lays groundwork for a theory of
transphenomenal artifacts.

1. Introduction

Digital fabrication is typically considered a one-way process, from the digital to the
physical object. But could the process be considered as a transition between different
states of the same artifact? The difficulty is that the 3D structure of a physical object
is static, frozen in time. It cannot morph in response to changes in parameters like a
digital structure can. However there is an aspect of every physical object that is tem-
poral and dynamicthe sounds it makes. Physical acoustics are influenced by shape,
size, material, density, surface texture and other properties of an object. Larger objects
produce lower pitched sounds, metal objects are louder than plastic, and hollow objects
produce ringing sounds. The acoustic properties of an object may be analysed with
spectrograms and other signal processing techniques. A spectrum contains all the in-
formation required to re-synthesise the sound from simple sine tones, and this is the
theoretical basis for the electronic music synthesizers. Could the spectrum recorded
from a sounding object also contain the information to reconstruct the object that made
the sound? This speculation lead to the idea to digitally fabricate an object from a sound
recording. A sound could then be recorded from the new object. What would happen if
another object was then fabricated from that sound? This recursive process of digital
fabrication would generate an interleaved series of shapes and sounds shown in Fig. 1.

Fig. 1. An interleaved recursive series of shapes and sounds.

The rest of this paper describes experiments that explore this idea. The background
section describes related concepts of synaesthetic transformation in painting, music
and sculpture. It also describes previous work on sculptural 3D representations of mu-
sic, and the digital fabrication of acoustic phenomena. The following section describes
a first experiment to digitally fabricate a bell. This is followed by an experiment that
develops a recursive method for generating a series of bells in which each bell is shaped
by the sound of the previous bell in the series. The process is broken down into stages
with parameters that can be adjusted to explore the space of possible outcomes. The
discussion reflects on the results of the experiments, identifying theoretical issues and
directions for further research.

2. Background

Wassily Kandinsky's invention of abstract painting was inspired by the abstract struc-
ture of music, and in his writing he refers to the synaesthetic composer Alexander
Scriabin's 1915 score for Prometheus: a Poem of Fire, which included a colour organ that
projected arcs and waves of colour onto an overhead screen in time to the music. The first
abstract paintings in Australia were also inspired by music. Roy de Maistre's painting
Rhythmic Composition in Yellow Green Minor featured in a controversial exhibition in
Sydney in 1919 (Edwards 2011). His interest in relations between sound and colour may
have been inspired in part by his attendance at recitals on the colour organ by Alexander
Hector one year beforehand, in 1918. De Maistre developed a formal Colour Sound
theory in studies such as Rainbow Scale D# minor–F# minor, and his works were popularly
known as "paintings you could whistle". Some of his other musical paintings include
Arrested Phrase from a Haydn Trio in Orange-Red Major, Colour Composition Derived from
Three Bars of Music in the Key of Green, and The Boat Sheds, in Violet Red Key.

Fig. 2. Rhythmic Composition in Yellow Green Minor

In 1993 the Australian coder Kevin Burfitt released the open source music visual-
ization program Cthuga that was the forerunner of the visualization plugins in media
players such as iTunes, Windows Media Player and VLC today (Music Visualization
2013). Music visualizations map the loudness and frequency spectrum of sound into 3D
graphics and image effects. The peer competition within the Cthuga community, and
the ongoing commercial competition between large companies has resulted in high
production values and well developed aesthetics in music visualizations.

Fig. 3. Music Visualisation from MilkDrop

Computer programs have also been used in the inverse transformation from graphics
into sounds. The UPIC program, developed by algorithmic composer Iannis Xenakis in
1977, allowed waveforms and volume envelopes to be drawn on a computer screen with
a tablet and then electronically synthesized. HighC, shown in Fig. 4, is a graphic music cre-
ation tool modeled on UPIC that is available for download at http://highc.org/.

Fig. 4. Graphic Music composition using HighC.

The representation of sound in visual form is extended to three dimensions in the
Sibelius Monument, created by Finnish sculptor Eila Hiltunen in 1967 to capture the
essence of the music of the composer Jean Sibelius. The unveiling of the sculpture con-
structed from more than 600 hollow steel pipes welded together in a wave-like pattern
sparked debate about the merits of abstract art that resulted in the addition of an effigy
of Sibelius.

Fig. 5. Sibelius Monument in Helsinki.

Digital fabrication provides a new way to create physical objects from sound. A search
for sound in the Shapeways.com community for digital fabrication returns a set of 3D
models titled 12Hz, 24Hz and 48Hz (shown in Fig. 6) constructed from images of vibra-
tions on the surface of water (Shuuki 2012).

Fig. 6. 48Hz sound vibration in water.

A further search for music on Shapeways returns several flutes, pan-pipes and
whistles that may be fabricated in either plastic or metal. There is also a wind-chime
fabricated in glass or ceramic. These examples show the potential to use 3D CAD tools
and personal fabrication services to custom design sonic objects and acoustic structures.
Neale McLachlan used a CAD package and computer modeling to design a set of 200
harmonically tuned bells for the Federation Bells installation in Melbourne in 2000,
shown in Fig. 7. He identified the geometric factors that influence the harmonics as wall
thickness profile, wall curvature, conical angle, the circumference of the opening rim,
the thickness of the rim, and the overall width and height of the bell (McLachlan 1997).
Bells are complex 3D shapes that flex in 3 dimensions, and they are much more difficult
to tune than one-dimensional wind or string instruments. Tuning a bell was tradition-
ally done by skilled craftsmen who manually lathed the thickness profile of a cast bell.
Due to the high costs of casting bells in the modern era, McLachlan manufactured CAD
bells by pressing sheet metal, which had the advantage of very consistent geometry.
The fixed thickness required tuning of harmonics by shaping the wall curvature, rather
than lathing the thickness (McLachlan 2004).

Fig. 7. The Federation Bells in Melbourne.

Advances in digital fabrication technology have brought new materials, such as
stainless steel, bronze, silver, titanium, glass, and ceramics. The introduction of metal
shaping technologies in the iron and bronze ages resulted in the invention of bells,

gongs, singing bowls and other resonating musical instruments. Could the introduction
of metals in digital fabrication herald a new era of sounding objects that could not be
arrived at by manual crafting?

3. Digital Fabrication of a Bell

This section describes an experiment to extend previous work on CAD bells by digital
fabrication, with a view to more complex sounding objects in the future.
Digital fabrication places constraints on size, thickness and level of detail, depending
on the material. The Shapeways.com service constrains stainless steel to a maximum
bounding box of 1000 × 450 × 250 mm, wall thickness of 3 mm, and detail of 0.6 mm. This
is quite limiting but does allow for the fabrication of small bells.
A bell-shaped 3D mesh was constructed from graphic primitives using the processing.org
open source environment for graphic programming. The outer hemispherical shell
with diameter 42 mm and height 34 mm was duplicated, scaled and translated to make
an inner shell. The rims of the outer and inner shells were stitched together to make a
watertight shape. A handle was added so the bell could be held without being damped.
The digitally constructed bell, shown in Fig. 8, was saved as a CAD file in STL format.

Fig. 8. Graphic rendering of the CAD mesh of Bell00.
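
The construction described above was done in Processing; the following Python sketch shows, under simplifying assumptions, how such an outer shell can be generated as a surface of revolution and written to ASCII STL. The quarter-ellipse profile, the resolution, the zero facet normals and the omission of the inner shell, rim stitching and handle are all simplifications of the actual mesh.

# Sketch: outer bell shell as a surface of revolution, exported to ASCII STL.
# Profile shape and resolution are assumptions; the inner shell, rim stitching
# and handle needed for a watertight, printable bell are omitted here.
import math

def bell_profile(t, radius=21.0, height=34.0):
    """Outer radius and z for t in [0, 1] (t=0 at the rim, t=1 at the crown)."""
    return radius * math.cos(t * math.pi / 2), height * math.sin(t * math.pi / 2)

def bell_quads(n_rings=32, n_segs=64):
    """Quads of the outer shell obtained by revolving the profile."""
    rings = []
    for i in range(n_rings + 1):
        r, z = bell_profile(i / n_rings)
        rings.append([(r * math.cos(2 * math.pi * j / n_segs),
                       r * math.sin(2 * math.pi * j / n_segs), z)
                      for j in range(n_segs)])
    quads = []
    for i in range(n_rings):
        for j in range(n_segs):
            a, b = rings[i][j], rings[i][(j + 1) % n_segs]
            c, d = rings[i + 1][(j + 1) % n_segs], rings[i + 1][j]
            quads.append((a, b, c, d))
    return quads

def write_stl(quads, path="bell_outer.stl"):
    """Write each quad as two triangles in ASCII STL (normals left at zero;
    the near-zero top ring produces degenerate facets at the crown)."""
    with open(path, "w") as f:
        f.write("solid bell\n")
        for a, b, c, d in quads:
            for tri in ((a, b, c), (a, c, d)):
                f.write("  facet normal 0 0 0\n    outer loop\n")
                for v in tri:
                    f.write("      vertex %f %f %f\n" % v)
                f.write("    endloop\n  endfacet\n")
        f.write("endsolid bell\n")

write_stl(bell_quads())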

The CAD file is limited to 64MB and the polygon count to less than 1,000,000 for up-
loads to the Shapeways site. The high resolution mesh was reduced in size and count
by merging close vertices in the Meshlab open source system for editing unstructured
3D meshes (http://meshlab.sourceforge.net/). The mesh was then checked to be water-
tight and manifold using the Netfabb software for editing and repairing 3D meshes for
additive manufacturing (http://www.netfabb.com/). This carefully prepared CAD file
was then uploaded to Shapeways, and fabricated in stainless steel with bronze colouring,
to produce the first prototype of a digitally fabricated bell shown in Fig. 9.

Fig. 9. Digitally Fabricated Bell.

When the bell was tapped with a metal rod it produced a ringing tone. The sound
was recorded at 48kHz sampling rate with a Zoom H2 recorder in a damped room. The
recorded waveform in Fig. 10. shows that it rings for about 1s.


Fig. 10. Waveform of Bell 0.

The spectrogram, in Fig. 11, shows partials at 2971, 7235, 13156 and 20359 Hz. The first
rings for about 1.2 s, the second about 0.75 s, the third about 0.5 s and the fourth about
0.2 s. The temporal development of these partials produces the timbral colour of the bell. Although the partials are not
harmonic, the bell does produce a clearly pitched tone.


Fig. 11. Spectrogram of Bell 0.

The Long Term Average Spectrum (LTAS) is a 1D summary of the spectrogram.
The LTAS in Fig. 12 shows the peak amplitude for the four main partials, along with the
four main regions of resonance that produce the ringing timbre of the bell.


Fig. 12. Long Term Average Spectrum (LTAS) of Bell 00.
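
As a hedged sketch of how such an LTAS can be computed from a tap recording (the authors' analysis tool is not specified here), the following uses numpy and scipy; the window size, hop and dB floor are assumptions, and the file name is hypothetical.

# Sketch of a Long Term Average Spectrum: average magnitude spectrum over
# short frames of the recording. Window, hop and file name are assumptions.
import numpy as np
from scipy.io import wavfile

def ltas(path, frame=2048, hop=512):
    rate, x = wavfile.read(path)
    if x.ndim > 1:                       # mix a stereo recording to mono
        x = x.mean(axis=1)
    x = x.astype(np.float64)
    window = np.hanning(frame)
    spectra = [np.abs(np.fft.rfft(x[s:s + frame] * window))
               for s in range(0, len(x) - frame, hop)]
    avg = np.mean(spectra, axis=0)
    freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
    return freqs, 20 * np.log10(avg + 1e-12)   # amplitude in dB

freqs, spectrum_db = ltas("bell00_tap.wav")    # hypothetical file name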

The prototype demonstrates that a bell can be digitally fabricated, and opens the
door to more complex acoustic objects that cannot be manufactured or made manually.

4.Recursive Bells

This section presents an experiment to design a recursive series of bells, where each
bell is shaped by the sound of the previous bell in the series.
The stages of the recursive process are shown in Fig. 13. The process begins with
the CAD file specifying an initial bell, labeled as BELL 0. The CAD file is fabricated as a
physical shape, SHAPE 0, which is the stainless steel prototype bell constructed in the
previous section. The sound of SHAPE 0 is generated by tapping the bell, and recorded
as SOUND 0. This sound is then transformed into PROFILE 1 by a process labeled XFORM.
Then PROFILE 1 is added to BELL 0 and the new CAD file is fabricated as SHAPE 1, which
is the next bell in the series. SOUND 1 is then recorded by tapping SHAPE 1, and XFORMed
to create PROFILE 2, which is added to BELL 0 to create the second recursive bell. This re-
cursive process can be repeated ad infinitum to produce a series of interleaved SHAPES
and SOUNDS generated from each other.

Fig. 13. Recursive fabrication process.
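
The stages of Fig. 13 can be written out as a loop, sketched below; fabricate() and record_tap() stand in for the manual steps of printing and striking each bell, so only the data flow is shown, not an automated pipeline.

# Data-flow sketch of the recursive process in Fig. 13. The fabricate and
# record_tap callables represent manual steps; each new profile is added to
# the base bell (BELL 0), as described in the text.

def recursive_bells(base_cad, generations, xform, fabricate, record_tap,
                    add_profile, t_weight=1.0):
    cad, series = base_cad, []
    for n in range(generations):
        shape = fabricate(cad)                  # SHAPE n
        sound = record_tap(shape)               # SOUND n
        profile = xform(sound, t_weight)        # PROFILE n+1
        cad = add_profile(base_cad, profile)    # BELL n+1
        series.append((shape, sound, profile))
    return series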

4.1. XFORM
The XFORM is a mapping from sound into a thickness profile that can be added to a bell
shape to change the sound it makes.
The LTAS analysis of the prototype bell captures timbral features in a 1 dimensional
format that can be used to algorithmically construct a thickness profile as a 3D quad
mesh. The LTAS has low frequency and high frequency ends that could be mapped onto
the bell shape in two different directions. The physical acoustics of vibration mean that
lower frequency resonances are produced by larger objects, and higher frequencies by
smaller objects. This led to the decision to tonotopically map the low frequency end of
the LTAS to the large circumference at the opening rim, and the high frequency end to
the smaller circumferences towards the crown.
The first experimental series of bells generated using this XFORM is shown in Table
1. The first row shows the CAD rendering of the basic bell, a photo of the first prototype
fabrication, the waveform of the sound it produces when tapped, and the LTAS profile
with 4 partials. The second row shows Bell 1, with thickness PROFILE 1 constructed by
XFORM from the LTAS of Bell 0, and fabricated in stainless steel. The waveform rings
for about 1.5 s, and the LTAS shows 3 partials that produce a higher pitch, but lower timbral
brightness. The third row shows Bell 2 shaped by the XFORM of LTAS 1, and constructed in
stainless steel with gold colour. Bell 2 rings for 0.75s, but has only two main partials. The
pitch is higher than Bell 0 and lower than Bell 1, and the timbre is brighter than either.

Table 1. Recursive series of bells 0, 1, 2.

n SHAPE n BELL n SOUND n LTAS PROFILE n+1

4.2.Profile weighting
Bells 1 and 2 look and sound more similar to each other than expected. The weighting
of the shape profile relative to the bell template can be adjusted in the mesh generating
program. The ability to alter this weighting has been added to the process diagram as
a parameter labeled T in Fig. 14.

Fig. 14. Process with profile weighting T.

The next experiment tested the effect of varying parameter T on the sound of Bell 2.
An alternative Bell 2+ was fabricated with T double the previous level, thereby doubling
the geometric effect of the PROFILE generated from the sound of Bell 1. The results in
Table 2, show an amplitude modulation in the ringing sound that is heard as a tremolo
effect. There has also been an increase in the frequency of the two main partials. Bell
2+ is distinctly different in timbre from Bell 2, and Bell 1.

Table 2. Bell 2 with doubled parameter T.

n SHAPE 2+ BELL 2+ SOUND 2+ LTAS PROFILE

This result suggests that increasing T may generate more variation in the series
of shapes and sounds. To explore this further the value of T was raised to 3x and used
to generate the next bell in the series. The CAD rendering of Bell 3++, shown in Fig. 15,
has wide flanges that indicate that raising T too high could transform the geometry
beyond the point where it will function as a bell. On the other hand, these flanges may
introduce unusual timbral effects, such as tremolos and vibratos, that are not heard in
conventional bells. At this stage the bell has not been fabricated and the experiment is
still work in progress.

Fig. 15. CAD rendering of Bell 3++

4.3.Material
The Bells in the experiments have so far been fabricated in stainless steel. However, other
materials, such as ceramic and glass, also have good acoustic properties. The recursive
generation process is updated with a stage for materials in Fig. 16. What is the effect of
using these materials on the acoustics of the bell?

Fig. 16. Recursive process incorporating material

Bell 2 was re-fabricated in ceramic. This version of the bell is smoother and has less
detail, as can be seen in Table 3.

Table 3. Bell 02 fabricated in ceramic.

n SHAPE 2+ BELL 2 ceramic SOUND 2 ceramic PROFILE 3 ceramic

Tapping the ceramic Bell 2 produced a short, sharp, high pitched, percussive sound
very different from the ringing produced by the stainless steel version. The LTAS profile
has 3 partials that look generally similar to previous bells. However the short duration
makes it difficult to hear spectral details. The reduced detail of the ceramic fabrication
effectively low pass filters the LTAS profile. Does this reduced detail have a perceptible
effect on the sound the bell makes? This could be answered by fabricating a low-pass
filtered version of Bell 2 in metal, and then comparing the sounds produced by the
smoothed and original bells.

5. Discussion

The effect of varying the T parameter raises the question of whether the series will con-
verge to an attractor shape, traverse a contour of endless variation, or diverge to a point
of destruction. Is there a value of T on the boundary between convergence and diver-
gence? Is the recursive process a random walk or does it have a trajectory of some kind?
If the series does converge, the bell will produce a sound that has an LTAS profile that
is identical to its own thickness profile. The shape of this bell is a blueprint for the sound
it produces, and the sound contains the blueprint for the bell that produced it. This attrac-
tor bell and its sound would be bilateral transformations of the same transphenomenal
object. Does such an object actually exist, and can it be found with this process?
The XFORM mapping between the sound and shape in these experiments has been a
simple mapping of LTAS to thickness profile. The decision to map the LTAS in one direc-
tion raises the question of whether mapping it in the opposite direction would make
a difference. There are also other ways that features of a recorded sound could modify
the acoustics of a bell. The audio waveform could be wrapped in a spiral down the bell
shape, etching into the profile in a manner similar to a needle groove on a wax cylin-
der or record. The frequency axis of the 2D spectrogram could be assigned to the radial
angles of the bell with the amplitude affecting the profile in the radial directions. Other
kinds of timbral analysis could be used, such as mel frequency cepstral co-efficients
(MFCC), or granular centroid, flux, kurtosis, and skew.

6. Conclusion

These experiments to generate a recursive series of bells and sounds have identified ge-
neric stages in a systematic process. The XFORM stage is a mapping between sound and
shape. The T parameter controls the level of feedback in the recursive circuit, and the
amount of variation in the shapes and sounds that are generated. This parameter may
also affect whether the series converges, traverses a contour of variation, or diverges
to destruction. The material has a significant effect on the acoustics of the object, and
different materials may cause convergence to particular attractor nodes, for example
the lack of detail in ceramic shapes and sounds may cause rapid degeneration to a sin-
gular point.
The bells in these experiments open the door to the design of more complex shapes
than can be made with conventional manufacturing techniques. The geometry of
acoustic shapes could be generated using a 3D fractal such as the Mandelbulb, or a rule
based L system. These shapes can have complexity that is beyond the state of the art in
acoustic simulation with finite element meshes. Digital fabrication allows rapid proto-
typing of physical objects that could allow research on the acoustics of shapes that are
more complex than has hitherto been possible.
These experiments have raised many theoretical questions to guide further exper-
iments which are still in progress. Can the recursive process be used to find a trans-
phenomenal artifact where the acoustic response contains the blueprint of the object
that produced it? What new shapes and sounds will be generated through this process?

References

Edwards, D. Colour in Art: Revisiting 1919. Art and Australia II: European Preludes and
Parallels, Diploma Lecture Series, Art Gallery of NSW, 2011.
McLachlan, N. Finite Element Analysis and Gong Acoustics. Acoustics Australia,
25, 3, 103–107, 1997.
. The Design of Bells with Harmonic Application of New Analyses and Design
Methods to Musical Bells. in Proceedings of the 75th Conference of the Acoustic
Society of America, New York, 18, 2004.
Music Visualization. In Wikipedia. Retrieved January 20, 2013, from
http://en.wikipedia.org/wiki/Music_visualization
Shuuki, 48Hz, Retrieved January 20, 2013, from
http://www.shapeways.com/designer/shuuki

Rhythm Apparatus For the Overhead Projector:
a Metaphorical Device

Christian Faubel
c.faubel@khm.de
Academy of Media Arts Cologne, Cologne, Germany

Keywords: Embodied Cognition, Philosophical Toys, Audiovisual Performance.

Abstract: The rhythm apparatus for the overhead projector is a robotic device that can be
used to demonstrate core concepts of the theory of embodied cognition. At the same time,
it is also an instrument for audiovisual performances. Combining the communication of
scientific insight with amusement and entertainment, it stands in the tradition of phil-
osophical toys. Such a device is introduced here and used to illustrate, in a step-by-step
manner, principles of embodied cognition: emergence and the interplay of brain, body
and environment.

1. Introduction

In this paper I present a robotic device for demonstrating core concepts of the theory of
embodied cognition. At the same time this robotic device is used as an instrument for
an audio-visual lecture and performance using an overhead projector. It can be seen in
the tradition of philosophical toys (Wade 2004) because it is designed to experimentally
show scientific insight while at the same time providing popular amusement through a
play of shadow, light and sound. The presented work also relates to contemporary artistic
expressions using the overhead projector as they have, for example, been featured at the
art of the overhead festival in 2005 and 2009 (Hilfling and Gansing 2005, 2009).

1.1. Background: embodied cognition
The core claim of embodied cognition is that intelligent behavior in biological systems
results from real-time dynamics and interaction between nervous system, body and
environment (Johnson 1987, Port and van Gelder 1995, Thelen and Smith 1996). While
computational approaches to cognition focus on the brain as the central information
processing device, the embodied cognition perspective denies this single cause expla-
nation. Historically this paradigmatic shift in the understanding of cognition gained
momentum in the 1980s with a focus on explaining the cognitive aspects of movement
(Kelso, Schöner 1988, Schöner, Haken and Kelso 1986). Recently the field has moved to
higher cognition, explaining more complex behaviors such as spatial working memory
(Johnson, Spencer and Schöner 2008), object recognition (Faubel and Schöner 2008) or
spatial language (Lipinski et al. 2006).
A brilliant example of the type of insight offered by this paradigm shift away from single-
cause explanation is the work of Esther Thelen on the development of walking in
young infants (Thelen 1984). Newborn babies show a stepping reflex when held upright
on a support surface. This stepping reflex disappears after a few months of age only to
re-appear when the infant has already learnt to walk. The single cause explanation for
this interesting experimental observation was that some neural maturation process in
the brain would inhibit this reflex and that later higher level control would allow it to
re-appear (McGraw 1943). This explanation was accepted for almost 40 years until it was
challenged by Esther Thelen through a simple but insightful experiment. She put babies
that had just lost their stepping reflex into a water basin. Relieved from the weight of
their heavy legs in the water the stepping reflex re-appeared. Thelen argued that the dis-
appearing of the stepping reflex was not the result of brain maturation but the result of
gravity acting on the babies' legs. In the early stages of development babies go through an
impressive gain of weight: within three months they almost double their weight. Having
to move their chubby, heavy legs, babies naturally exercise and build up muscles. These
muscles are a prerequisite for babies to learn to walk (Thelen 1984). Only once they start
walking does the stepping reflex re-appear, as a result of the training of their muscles. In order to
learn to walk and to make the first voluntary steps, losing the stepping reflex seems to be
crucial and part of a developmental and intelligent learning process. Here intelligence
is as much to be found in fat legs as in the developing brain.

1.2. A robotic device as philosophical toy
The term philosophical toy was used in the 19th century to designate technical devices
that provided scientific insight while at the same time providing amusement and enter-
tainment (Wade 2004). Typically such devices were dealing with perceptual effects, and
many are predecessors of todays cinema, such as for example the Thaumatrope or the
Phenakistoscope (see Figure 1).

Fig. 1. The first three images, a Thaumatrope: two images, flowers and a vase, are fused into a single
image by quickly spinning the disc. The last picture shows a Phenakistoscope disc by E. Muybridge that
animates a dancing couple when put in rotation and watched through the slits in front of a mirror.

The robotic device I propose relates to philosophical toys in that a real-time animation
is created with an overhead projector. The projection shows the shadow of the apparatus,
its moving motors and legs (see Figure 2 for an overview of the setup). It does not demon-
strate a visual or psychophysical effect. Instead it operates on a more abstract level and
makes a theoretical concept comprehensible by using visual and auditory effects. The idea
of the rhythm apparatus for the overhead projector is to demonstrate the interdependence
of brain, body and environment. Similar to a biological organism there are three subsys-
tems: an analog electronic controller, motors with legs and the environment. The analog
electronic controller was developed by Hasslacher and Tilden (1995) and is inspired by
simple neural networks that model central pattern generators (Bässler 1986). The mini-
malist electronic controller uses only 12 basic electronic components. The structure of the
motors and legs is equally minimalist, just simple dc-gearbox motors with sticks out of
acrylic glass as legs. The design is chosen to render an interesting projection that resem-
bles more a machine than an organism. This is to underline that the device is clearly an
abstraction of any living organism and that the apparatus operates on a metaphorical
level to make key insights of embodiment accessible.

1.3.
Overview
The paper is organized following the key concepts of embodiment that can be demon-
strated with the apparatus.
How structured patterns emerge out of the interaction of simple units.
That functional modularity fails to account for the interaction of subsystems.
How everything matters: the nervous system, the body, the environment and their
real-time interaction.

Fig. 2. Overview of the full setup: in the foreground is the actual device on the overhead projector. The back-
ground shows the image that is produced through the projection.

2.
The core circuit – emergence

Emergence signifies the property of a system to produce new structures out of the in-
terplay of its constituents. Importantly, the constituents alone cannot produce such
structures and the new quality can only result from the interplay. This property can be
paraphrased as "the whole being greater than the sum of its parts".
In the case of the electronic circuit, this new property is a pattern that only appears when
the constituents, simple units made of resistor-capacitor pairs coupled to an inverter (see Figure
3.a), are connected into a loop. Each basic unit alone only acts as a change detector for
rising activation at its input. Only when there is a significant change of the input voltage
is an output signal produced, and the duration of the output signal is independent of the
length of the input signal. The behavior in time of such a basic unit (see Figure 3.b) is
similar to a biological neuron with two functional aspects: First a neuron only produces
a spiking output when stimulated to a sufficient level (Abbott and Dayan 2001). Second, a
neuron adapts to its input: on constant input it stops producing output spikes. The latter
property we experience for example when we are exposed to a bad smell: even though
the concentration of the molecules producing the odor is constant, after some time we
do not smell it anymore (Cometto and Cain 1995). Similarly vision depends on eye move-
ment: we see only because our eyes are in constant movement. We saccade three times
a second and make tiny micro-saccades when fixating on an object. If the eye movement
is stopped, our vision fades (Martinez et al. 2006).

Fig. 3: a) The basic unit: a capacitor (C), a resistor (R) and the inverter (Inv). b) Temporal behavior of a basic
unit: given an input signal at point I, the circuit follows the rise of the signal at point II, but then decays back
to zero. At point III the output goes to zero at the rise of the input and then switches back when the decay goes
below a threshold. A negative pulse is produced for every rising edge of the input. c) The microcore circuit:
four basic units are connected into a loop. d) Illustration of the two dynamic patterns: the two top rows show
the pattern with two traveling pulses, the four bottom rows the pattern with a single traveling pulse. Each dot
represents the off (gray) or on (white) state of the output of a basic unit.

The emergent property is a pattern that appears when two or more of these basic units
are connected into a loop. For the rhythm apparatus, I use four basic units (see Figure 3.c).
This circuit, called the microcore, has been developed by Mark Tilden as a very simple
model of a central pattern generator and was used to drive the leg movement of walking
robots (Tilden 1994). The microcore can produce three different stable patterns: one static
pattern where all units are off and two dynamic patterns, one with a single traveling
pulse and one with two traveling pulses (see Figure 3.d).
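To make the emergent loop dynamics easier to grasp, the following sketch simulates an idealized version of the basic unit and of the four-unit loop in Python. It is only a hedged, discrete-time approximation: the time constant, the switching threshold and the update rule are illustrative assumptions and do not model the actual electronic components.

```python
import numpy as np

# A minimal, idealized sketch (not a circuit-level model) of the basic unit and
# of four such units coupled into a loop. Time constant, threshold and update
# rule are assumed values chosen only to make the travelling pulse visible.

DT = 0.001      # time step in seconds
TAU = 0.05      # assumed RC time constant in seconds
THRESH = 0.5    # assumed inverter switching threshold (normalized)

def step(v, rising_edge):
    """One time step of a basic unit: the capacitor couples a rising edge of
    the input into the inverter input voltage v, the resistor lets v decay,
    and the inverter output is a negative (0) pulse while v is above the
    threshold."""
    v *= 1.0 - DT / TAU
    if rising_edge:
        v = 1.0
    return v, 0 if v > THRESH else 1

N = 4                                   # four basic units in a loop
v = np.zeros(N)
v[0] = 1.0                              # seed a single pulse in unit 0
out = np.array([0, 1, 1, 1])            # 0 = pulse active, 1 = resting
prev_out = out.copy()

rows = []
for _ in range(600):
    new_out = out.copy()
    for i in range(N):
        j = (i - 1) % N                 # unit i listens to unit i-1
        edge = prev_out[j] == 0 and out[j] == 1
        v[i], new_out[i] = step(v[i], edge)
    prev_out, out = out, new_out
    rows.append(out.copy())

# Print a coarse time raster: the low (0) state should travel around the loop,
# one unit after the other, as in the single-pulse pattern of Figure 3.d.
for row in rows[::30]:
    print("".join("#" if x == 0 else "." for x in row))
```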

2.1.
Audiovisual presentation
The electronic circuits are built following a modular design that allows the reconfigura-
tion of the network structure on the fly. Two basic units are assembled into one module,
which is housed in a die cast aluminum enclosure with several interface connectors.
On top are simple brass sticks that connect to the input and to the output of the basic
units (see Figure 4). During a presentation it is possible to reconfigure the circuit using
crocodile clips. As the whole setup is put on the overhead projector, the creation of a new
connection is directly visible. The brass sticks are also used to connect to a set of strong
light emitting diodes that visualize the pattern within the projection. When the overhead
projector is dimmed, the light of the LEDs is clearly visible in the projection (see Figure 5).

Fig. 4. Die cast aluminum enclosure with brass sticks as interface connectors. On the front
are potentiometers to modify internal parameters of the electronic circuit.

In addition to the brass sticks, a module has mini-jack audio connectors to directly
connect to an active speaker or a mixing desk. This way the pattern is made audible in
a straightforward manner: by directly using the pattern to move the speaker membrane, a
beat is created.
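As a rough illustration of this direct sonification, the sketch below writes the output of a single unit straight into a WAV file, so that the square pulse train drives the speaker membrane and is heard as a regular beat. The pulse length and the number of loop revolutions are assumed values; this is a simplification, not a recording of the actual module.

```python
import wave
import numpy as np

# Sketch of the "direct" sonification described above: the on/off output of one
# basic unit is used, unfiltered, as the speaker signal. Pulse length and the
# number of loop revolutions are assumed, illustrative values.

SAMPLE_RATE = 44100
PULSE_LEN = 0.035          # assumed duration of one negative pulse, in seconds
N_UNITS = 4
REVOLUTIONS = 16           # how many times the pulse travels around the loop

period = N_UNITS * PULSE_LEN                    # one unit fires once per revolution
t = np.arange(int(REVOLUTIONS * period * SAMPLE_RATE)) / SAMPLE_RATE
membrane = np.where((t % period) < PULSE_LEN, -1.0, 1.0)   # low pulse, high rest

samples = (membrane * 0.4 * 32767).astype(np.int16)        # 16-bit mono audio

with wave.open("microcore_beat.wav", "wb") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(2)
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(samples.tobytes())
```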

Fig. 5. Displaying the pattern with light emitting diodes on the overhead projector. The two frames show
the dynamic activation pattern. In the left frame the outputs number 2 and 4 are active, in the next
frame outputs number 1 and 3 are active.

3.
Adding motors and legs – no functional modularity

Adding motors to the system illustrates two more important concepts of embodiment.
First, breaking a complex system down into functional modules in order to understand
it can be totally misleading. Attributing modular structures to a system is
often tempting because it seems to simplify understanding, but when modules interact
with other modules, this can fully alter the way they function. Second, dividing behavior
into the chain of sense-plan-act processes is not the best description for behaving organ-
isms. For example, the processes of sensing, acting and planning may be interdependent
and intermingled.
When a motor is connected to the outputs of two neighboring basic units of the mi-
crocore, it receives alternating pulses from them. In theory this should produce an al-
ternating movement. The motor should swing from left to right and then back. However
this is not what happens when two motors are connected to the outputs of the four basic
units. Instead the dynamic pattern disappears and both motors just rotate in a single
direction without alternation or pattern.
With a modular perspective one might be tempted to conclude that the modular
system is now broken and defunct. But a simple experiment reveals that it has actually
gained an important property: it has become sensitive. When one adds legs to the mo-
tors, so that it is easy to interact with them with one's own fingers, one realizes that if
one stops both motors at the same time they will flip direction. The legs seem to feel the
performer's fingers; the motors behave as sensors.
This raises again the second point from above. If the same device that produces the
movement also senses, does it really do it in the order of sense-plan-act? An analysis of
the interaction of the motors with the electronics reveals that it is the specific combina-
tion of the electronic circuit with the motors that produces a behavior which includes
sensing and acting or rather acting and sensing. The pattern disappears because the
motors are directly connected to the electronics without an intermediate driver stage. As
a matter of fact they directly influence the behavior of the electronic circuit. The inertia
of the mechanical parts of the motor produces an opposing force to the current from
the electronic circuit. Because every electric motor also functions as a generator, when
it moves through an external force, such as inertia, it produces a current. The motors
override the pattern of the microcore.

3.1.
Audiovisual presentation
The motors are added by simply fixing them with a clay-like material onto the screen
of the overhead projector. They are connected to a motor connector on each module. Once
connected they immediately begin to rotate, and one sees the shadows of the legs rotat-
ing in the projection. The sound changes accordingly, as the motors are connected they
become audible and one hears the sound of continuously rotating motors. Moving the
finger into the projection to stop the motors causes them to flip direction, which again is
audible as a beat. The device becomes an instrument that partly plays on its own.

Fig. 6. The dc-gearbox motor with an acrylic glass stick as leg

4.
Adding external structure – everything matters

The last lesson showcases how complex patterns result from the real-time dynamics and
interaction of a simple controller, a body and the environment.
The environment here is simply created by introducing piezo pickups (Collins 2009)
as obstacles for the legs. As the motors feel these obstacles they reverse when touching
them. The pattern re-appears as rhythmic movement. The rhythm itself can be modi-
fied by changing the positions of the piezo-pickups. When they are placed to constrain
the movement the rhythm accelerates, and when there is more space the rhythm slows
down. A second physical manipulation consists in adding rubber bands between the legs.
Through the rubber band the motors provide mechanical feedback onto each other which
stabilizes the rhythmic movement. A third manipulation modifies the internal param-
eters of the electronic circuit. By reducing the resistance of the resistor-capacitor pair,
the timing and thus the rhythm may also be changed. A fourth manipulation controls
the degree of electrical feedback into the electronic circuit. When the feedback from the
motors is reduced, the legs react less to the environment and follow the internal
pattern of the electronics more closely.
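The dependence of the tempo on the placement of the pickups can be caricatured in a few lines of Python. The sketch below is a toy model with an assumed, constant leg speed, not a simulation of the motors or of the electronics: the leg sweeps back and forth between two pickups, and the interval between hits, i.e. the beat period, shrinks as the pickups are moved closer together.

```python
# Toy model of the piezo pickups as movement limits: the leg sweeps at a
# constant (assumed) angular speed and the motor reverses on every contact,
# so the time between hits -- the beat period -- is set by the free arc
# between the two pickups.

LEG_SPEED = 180.0   # degrees per second, assumed for illustration

def hit_times(spacing, duration=4.0, dt=0.001):
    """Return the times at which the leg touches one of the two pickups,
    placed at angle 0 and at angle `spacing` degrees."""
    angle, direction, hits = 0.0, 1, []
    for step in range(int(duration / dt)):
        angle += direction * LEG_SPEED * dt
        if angle <= 0.0 or angle >= spacing:
            direction *= -1          # the motor reverses on contact
            hits.append(step * dt)
    return hits

for spacing in (30.0, 60.0, 120.0):
    hits = hit_times(spacing)
    beat = (hits[-1] - hits[0]) / (len(hits) - 1)
    print(f"pickups {spacing:5.0f} degrees apart -> one hit every {beat:.2f} s")
```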

4.1.
Audiovisual presentation
Introducing the piezo-pickups modifies the behavior of the apparatus, and a regular
rhythm re-appears. The pickups act as a mechanical barrier but of course they also produce
a sound. Using different materials, such as felt or paper, they can be tuned to
sound lower or higher respectively. Adding the rubber bands introduces a new graphical
element to the projection, and the rubber bands appear as thin moving lines in the pro-
jection of the overhead. As the rubber bands influence the movement of the motors the
sound of the motors changes as well. In order to create more tonal variations a simple
analog synthesizer can be connected to the output signals of the core electronic circuit
so that it actually behaves as a sequencer.

Fig. 7. Key frames of the final demonstration with moving legs, rubber bands between
the legs and piezo pick-ups.

With all the parameters that can be modified on the fly, the overall demonstration
turns into an audiovisual performance, with varying beat patterns that are always in sync
with the movement of the legs in the projection.

5.
Summary and conclusion

Combining the didactic wish to convey a complex scientific topic with a format that may
entertain and amuse the audience was once a standard approach in science referred to
as philosophical toys. The rhythm apparatus for the overhead projector picks up this
tradition to convey a scientific theory. It uses very simple analog electronics to create a
behaving system that produces complex movement patterns that are interesting to look
at and to listen to. The theory of embodied cognition offers an alternative approach to
understanding human and animal cognition. In a step-by-step assembly of the appara-
tus, core insights about embodiment are conveyed by using the device as a metaphor for
biological organisms. The metaphor lies in the fact that, analogous to living organisms,
the interactions between subsystems rather than the subsystems themselves create a
huge variety of new behaviors. While far from being as complex as any real living organism,
the apparatus can nevertheless provide a glimpse of how complex behavior can emerge.

References

Bässler, U. On the definition of central pattern generator and its sensory control.
Biological Cybernetics, 54(1):65–69, 1986.
Collins, N. Handmade electronic music: the art of hardware hacking. Routledge, 2009.
Cometto-Muniz, J. and W. Cain. Olfactory adaptation. Handbook of Olfaction and
Gustation. New York: Marcel Dekker, 1995.
Dayan, P. and L. Abbott. Theoretical neuroscience, volume 31. MIT Press, Cambridge,
MA, 2001.
Faubel, C. and G. Schöner. Learning to recognize objects on the fly: a neurally based
dynamic field approach. Neural Networks Special Issue on Neuroscience and
Robotics, 21(4):562–576, May 2008.
Hasslacher, B. and M. Tilden. Living machines. Robotics and Autonomous Systems,
15(1):143–169, 1995.
Hilfling, L. and K. Gansing. The Art of the Overhead, http://overheads.org, 2005, 2009.
Johnson, J., J. Spencer, and G. Schöner. Moving to higher ground: The dynamic
field theory and the dynamics of visual cognition. New Ideas in Psychology,
26(2):227–251, 2008.
Johnson, M. The body in the mind: The bodily basis of meaning, imagination, and
reason. University of Chicago Press, 1987.
Kelso, J. and G. Schöner. Self-organization of coordinative movement patterns. Human
Movement Science, 7(1):27–46, 1988.
Lipinski, J., J. Spencer, L. Samuelson, and G. Schöner. Spam-ling: a dynamical model
of spatial working memory and spatial language. In Proceedings of the Twenty-
Eighth Annual Conference of the Cognitive Science Society, 2006.
Martinez-Conde, S., S. Macknik, X. Troncoso, T. Dyar, et al. Microsaccades counteract
visual fading during fixation. Neuron, 49(2):297–306, 2006.
McGraw, M. The neuromuscular maturation of the human infant. 1943.
Port, R. and T. van Gelder. Mind as motion: Explorations in the dynamics of cognition.
MIT Press, 1995.
Schöner, G., H. Haken, and J. Kelso. A stochastic theory of phase transitions in
human hand movement. Biological Cybernetics, 53(4):247–257, 1986.
Thelen, E. Learning to walk: Ecological demands and phylogenetic constraints. Advances
in Infancy Research, 1984.
Thelen, E. and L. Smith. A dynamic systems approach to the development of cognition
and action. MIT Press, 1996.
Tilden, M. Adaptive robotic nervous systems and control circuits therefor, June 28 1994.
US Patent 5,325,031.
Wade, N. Toying with science. Perception, 33(9):1025–1032, 2004.

Between Thinking and Actuation in Video Games

Pedro Cardoso
pcardoso@fba.up.pt
ID+, Faculdade de Belas Artes, Universidade do Porto, Portugal

Miguel Carvalhais
mcarvalhais@fba.up.pt
ID+, Faculdade de Belas Artes, Universidade do Porto, Portugal

Keywords: Action, Actuating, Learning, Thinking, Video Games.

Abstract: Action involves thinking and actuating, processes that respectively rely on cog-
nitive and physical effort. When playing a video game, these processes (that may be
seen as two stages of player action) do not need to be strictly ordered (thinking, then actu-
ating) and they may not even be, in fact, interdependent. This paper explores three types
of player action that result from exploring the interdependences of thinking and actuat-

ing: from actions that are the consequence of a thought-out plan, to actions that are the
result of embodied or mechanized reflexes, and to actions that are visceral responses of
the body to external stimuli and internal mental activities or thoughts.
The dialectical relationship that the player and the game system establish is mediated
through these actions, undertaken in response to the challenges that the player needs to
overcome, through what we may call a learning process.
This paper pinpoints a new and still developing approach to game design
that aims at recognizing the player as a biological entity, and consequently at identifying
the need for the game system to interpret and transcode her biological traits. We believe
that multidisciplinary studies in affective computing, psychology, neurosciences, biology,
and game design are needed in order to build a better understanding of how these can
affect gameplay.

1.
Introduction

In this paper we focus on the actions of the player and not those of the system. We also
regard action as the means through which the player can make changes to the game
state. (Björk 2005, 20) In other words, actions are the way through which the player op-
erates within the game world.
This paper explores alternate modulations between the interdependences of concep-
tualizing a determinate action and its corresponding actuation. These two moments in
player action correspond to stages of preparation and of enactment, respectively. We may
say that conceptualization consists in the mental effort involved in ideating or con-
ceiving a determinate action. On the other hand, actuation consists in the effort that is
employed by the player when she tries to instantiate a certain action. We may say that
the first moment consists in the effort that is employed by the player when she forms
the model that her actuations will instantiate in the second moment, which is a physical
operation capable of sending information to the game system.
In The Art of Computer Game Design (1984, 44) Chris Crawford presents a taxonomy
of computer games that is organized in two major categories: skill-and-action (S&A)
games (emphasizing perceptual and motor skills) and strategy games (emphasizing
cognitive effort). We may say that our approach is based on a different perspective on
this subject. We believe these two categories are somehow still visible in contemporary
computer games, although positioned in different subcategories. Nevertheless, we think
that computer games have been disregarding the fact that human players are biological
beings, with specific biological functions and operations. And a category that may en-
compass this fact is in order.
So, the player's actions can be the result of a conscious choice, of an unconscious reflex, of
conditioned or trained behavior, or they can even emerge from the biological functions and oper-
ations of her own body. And each of the previously enunciated types is obtained through
different modulations between the conceptualization and actuation phases. In this paper
we describe three types of action that are based in these premises.
The work presented here is in a certain way related to the work of Donald Norman
demonstrated in Emotional Design: Why we love (or hate) everyday things, in which he
presents three levels of the brain that use alternative thinking processes, requiring differ-
ent styles of design. The three levels in part reflect the biological origins of the brain start-
ing with primitive one-celled organisms and slowly evolving to more complex animals,
to the vertebrate, the mammals, and finally, apes and humans. (2004, 21) Summarizing,
Norman defines the visceral level as prewired, preconscious, pre-thought, focused on the
present time, dealing with fixed routines; the behavioral level as unconscious, concerned
with use, experience and performance, also focused on the present time, and on routine
behavior; and the reflective level as slow, conscious, contemplative, vulnerable to vari-
ability through education and culture, and focused on long-term relations. But where
Norman is interested in usability and the relationship we establish with everyday objects,
we converge our attention on the phenomenon of action in the context of video games.

2.
Premeditated Actions

We may call premeditated those actions that require the player to invest conscious mental
effort conceptualizing them. They result from the player's conscious thought and may
be planned thoroughly. In other words, the player is aware of what she is going to do,
independently of how complex her plan might be or how long it will take.
These are deliberate, intentional, controlled, and voluntary actions. The player takes
her time to consciously process information in order to deliberate the preferred course
of action.

(…) the human brain can think about its own operations. This is the home of
reflection, of conscious thought, of the learning of new concepts and generaliza-
tions about the world. (Norman 2004, 23)

The player resorts to these actions when she has to deal with complex or heavy loads
of information. Therefore, they are usually slow, because she has to analyze a given situa-
tion, deliberate her course of action, and only then actuate. And the more time is available
to her, the further she premeditates her actions. She may even premeditate complete sets
of actions instead of one at a time.
It is pretty common for strategy-based games to resort to this type of action due to
their orientation towards heavy planning. In their case, play may be divided into turns, in
which players act alternately. In some of them, turns do not even have a temporal limit
in which their actions need to be enacted, rendering real time irrelevant in the overall
gameplay. This elevates the importance of planning, of the effort in making conscious
and rational decisions. Worms (Team 17, 1995), Sid Meier's Civilization (MicroProse, 1991) and
Utopia (Daglow, 1981) are good examples of strategy games played in turns.
Real time strategy games maintain the overall characteristics of traditional strategy
games, but they use time as a gameplay element, providing immediate feedback, pressur-
ing the player into making decisions faster and coordinating several elements (almost)
simultaneously. Games like Populous (Bullfrog, 1989), Warcraft: Orcs & Humans (Blizzard
Entertainment, 1994), Age of Empires (Ensemble Studios, 1997), Black & White (Lionhead
Studios, 2001), Supreme Commander (Gas Powered Games, 2007), Starcraft II: Wings of
Liberty (Blizzard Entertainment, 2010) help illustrate this.
This kind of action doesn't always need to be related to strategy games. In many
other games, the player has to plan her actions no matter how brief that moment is. Even
action games require planning at some point, or some kind of premeditation. But the em-
phasis on this type of action that is evident in strategy games makes them good examples.
Besides games that have planning at their core, the player may also resort to these
actions when in other games she is confronted with an entirely new situation. The fact
that she is not familiar with a certain set of circumstances is enough to ignite an analysis
process, simply because that is the cautious decision.
Yet another situation that invokes premeditated actions occurs when the player's
actions do not produce an expected outcome, as when she is constantly defeated at the
same location or by the same opponent, or when she simply fails to achieve her objectives.

At that point she may recognize the need to implement a new and better strategy (no
matter how simple or complex the plan may become).
On the other hand, when the player is confronted with familiar situations, she may
employ already tested or tried actions to produce expected and preferred outcomes. And
because a plan has already been outlined, the ideation stage is bypassed, resulting in a
speedier response: her actions will be faster. When this process becomes fast enough,
resulting in unconscious processes, we discover another type of action.

3.
Trained Actions

We may call trained the player's unconscious actions that were learned through
instruction and practice. They are automated and sometimes choreographed acts. As
António Damásio notes, not all the actions commanded by a brain are deliberated. We can
assume that most of the actions happening at a given moment in time are not deliberat-
ed at all, and that they constitute simple answers, from which reflex movements are an
example: a stimulus transmitted by a neuron that leads another neuron to act. (1994, 128)
For example, an experienced typist doesn't usually think about how her fingers hit
the correct keys on the keyboard when typing. Conversely, that usually happens to an
individual that has less experience, although with practice she may improve to a point
where typing does not require the attention and effort that it previously did. That's what
we usually call experience. So, the player may refine her actuation, getting better and
faster with practice. And as her experience increases, so do the chances of her actions'
effectiveness. And as her actions become more and more embodied they require less and
less mental effort, becoming unconscious, conditioned and automated processes.

If I asked you to describe how you got to work in the morning in some detail, you'd
list off getting up, stumbling to the bathroom, taking a shower, getting dressed,
eating breakfast, leaving the house, and driving to your place of employment. That
seems like a good list, until I ask you to walk through exactly how you perform
just one of these steps. (…)
Odds are good that you could come to an answer if you thought about it. This
is called a morning routine because it is routine. You rely on doing these things on
autopilot. This whole routine has been chunked in your brain, which is why you
have to work to recall the individual steps. It's basically a recipe that is burned into
your neurons, and you don't think about it anymore. (Koster 2005, 20)

These actions may be voluntarily ignited and terminated by the player, but they are
not consciously controlled or performed by her. We may rather say that they are invoked,
performed in correspondence to some sort of training the player has undergone.

The behavior level in human beings is especially valuable for well-learned, rou-
tine operations. This is where the skilled performer excels. (Norman 2004, 23)

They can be automated performances as when an experienced driver steers a car.
It seems that she does it without thinking, intuitively. They can also be conditioned
performances, as when we respond to perilous situations, such as the presence of a dan-
gerous animal or other physical threat.

Your body reacted in an attenuated replica of a reaction to the real thing, and
the emotional response and physical recoil were part of the interpretation of the
event. As cognitive scientists have emphasized in recent years, cognition is em-
bodied; you think with your body, not only with your brain. (Kahneman 2011, 51)

Therefore, games where the player must excel through speed or must somehow devel-
op some dexterity, often deal with this kind of action. They usually present increasingly
harder challenges, training the player into embodying several combinations of keys,
movements, etc. Games such as Super Mario Bros. (Nintendo Creative Department, 1985), Sonic
the Hedgehog (Sonic Team, 1991), Super Street Fighter II (Capcom, 1992), Tekken (Namco,
1994), Wipeout (Psygnosis, 1995) are just some of the many examples that explicitly use
these actions.

4.
Autonomic Actions

We may call autonomic those actions that are the result of automatic, mechanical or organic
responses enacted by the player's body, and that occur without her direct control or will.
The player's conscious thought is not directly entangled with these actions; they
are a direct result of the biology and mechanical operation of the player's body, regarding its
activities and behaviors.

When you stick your finger in the fire, you snatch it back before your brain has
time to think about it (seriously, it's been measured).
Calling this muscle memory is a lie. Muscles don't really have memory.
They're just big ol' springs that coil and uncoil when you run electrical current
through them. It's really all about nerves. There's a very large part of your body
that works based on the autonomic nervous system, which is a fancy way of
saying that it makes its own decisions. Some of it is stuff you can learn to bring
under more conscious control, like your heart rate. Some of it is reflexes, like
snatching your fingers out of the fire. And some of it is stuff you train your body
to do. (Koster 2005, 28)

These actions may be triggered by actions of the same kind, but also by conscious
thought. For example, it is possible that the player's heart rate goes up and her legs may
start to shake when she is reminded of a traumatic event she endured in the past. These
actions may also be heavily influenced by the mood or emotional state that the player may
be under. For example, if she is feeling stressed, her heart rate may be higher than nor-
mal, or she may be sweating, etc. As Damásio states, emotion is a collection of changes
in the state of the body, that are induced in several organs through the endings of nerve
cells under the control of a dedicated cerebral system, that responds to the content of
thoughts related with a certain entity or event. Although some of these alterations may

only be sensed by the person in whom they are occurring, many can effectively be per-
ceived by others. (1994, 189)
Here her body acts by itself, without her direct control, although some behaviors may
be shaped through proper training. Animals such as lizards operate primarily at the
visceral level. This is the level of fixed routines, where the brain analyses the world and
responds. (Norman 2004, 23)
The PainStation (Morawe and Reiff, 2001) is an interesting example that deals with
this type of action. This game is a variation of Pong (Atari Inc., 1972) in which the player
that loses points is physically punished through electro-shocks, whippings and extreme
heat applied to the left hand which, if removed from the game panel, leads the player
to losing the game altogether. Thus, this game tries to measure the player's resistance
to pain, and its rules force her to endure punishment in order to continue playing. Here,
the reflex of avoiding pain and the conscious decision to continue playing the game are
confronted and in constant turmoil.

5.
Conclusions and Future Work

Looking back into the history of video games we may notice how extensively they have
explored premeditated and trained actions. Since the early days, computer games were
divided into two major categories that seem to be close to the two types of action. Video
games have also excelled at manipulating the player into transforming premeditated
actions into trained ones.
Games force players into optimizing their performance, usually by presenting them
with challenges that grow increasingly more complex and harder to solve, requiring them
to master their current abilities. Overcoming these challenges unlocks new abilities, re-
starting the cycle. In most cases, this happens when players succeed in embodying basic
essential actions, freeing mental resources, thus allowing them to solve new and usually
more complex situations. In other words, throughout the game the player is trained into
increasing her skills, either physical or mental.
This increasing difficulty that is usually presented in video games is a good example
of how game systems teach their players something that is not necessarily related to
narrative or storytelling.

Games seem on the face of it to be very different from the stories and to offer
opposing satisfactions. Stories do not require us to do anything except to pay
attention as they are told. Games always involve some kind of activity and are
often focused on the mastery of skills, whether the skill involves chess strat-
egy or joystick twitching. Games generally use language only instrumentally
("checkmate," "ball four") rather than to convey subtleties of description or to
communicate complex emotions. They offer a schematized and purposely re-
ductive vision of the world. Most of all, games are goal directed and structured
around turn taking and keeping score. All of this would seem to have nothing to
do with stories. (Murray 1998, 40)

Instead, they teach something intrinsic to their dynamics. And for players to progress
in the game they have to keep on learning, and in many games this happens until closure.

Moreover, the potential uses of video games extend far beyond the playing of
games. They could be excellent teaching devices. In playing a game, you have to
learn an amazing variety of skills and knowledge. You attend deeply and seri-
ously for hours, weeks, even months. You read books and study the game thor-
oughly, doing active problem solving and working with other people. These are
precisely the activities of an effective learner, so what marvelous learning could
be experienced if only we could use this same intensity when interacting with
meaningful topics. Thus, game machines have huge potential for everyone, but
it has not been systematically addressed. (Norman 2004, 44)

We can even state that this process has been a favored form of learning that players
have endured in video games up until now. Perhaps it is because of this learning pro-
cess (which is very advantageous to games when it comes to their replay potential) that
games have been heavily focused on premeditated and trained actions. While the player
is capable of transforming the first type of action into the second, we don't think it is
possible to transform either of the previous into autonomic actions. We know that uncon-
scious and conscious thought influence them, but there seems to be no direct correlation
between the first two and the third. At least, not in the way that we are used to experiencing
between premeditated and trained actions.
Another aspect that has come to our notice is the fact that nowadays few games explore
autonomic actions. There is a huge gap here. It is very unusual for the player to be able to
influence the game system through autonomic actions. The traditional hardware on which
video games run is simply not equipped with the adequate sensors or even software that
is able to sense and interpret most of these actions. And although the player keeps sending
information that derives from them (because it is in her nature), the game system is not
capable of receiving and interpreting it. It literally goes to waste.
Another aspect that may have contributed to this is the fact that the player is not able
to consciously act on the game through these actions because she is not able to directly
control them. It is precisely because of that that this unlocks a new approach to game
design, an approach that can be closely linked with affective computing, psychology,
neurosciences, and biology. An approach that should perhaps start by asking: How can
a game be played if the player does not exert direct control over her actions? If the play-
er is a biological entity, how can a game system interpret and transcode her biological
traits transforming the outcomes into actions of play? And how can they influence the
game system?
Some experiments with brain-computer interface (BCI) devices seem to be focusing on
finding alternate ways for the player to send information to the system. Through these
devices the system is able to monitor the player's autonomic actions related to her brain
activity. Brainball (Smart Studio, 1999) is an experimental game that aims at inverting
conventional approaches to competitive games. Here the winner is the player that is able
to achieve the most relaxed mental state, the most passive and calm. Both players wear
on their heads a strap that contains biosensors that measure the electrical activity of

their brains. Depending on their brains' activity a ball that sits on the table moves back
and forth until it reaches one of the players' sides.
BrainBattle (ARS Electronica Futurelab, 2012) is an experiment in which players play
a version of Pong (Atari Inc., 1972), Space Invaders (Taito Corporation, 1978) and Pac-Man
(Namco, 1980) resorting exclusively to BCI devices. Here players are forced into a higher
level of concentration just to move the characters they are controlling with their minds,
and in most cases success in controlling those characters is hardly guaranteed.
But BCI is not the only way to introduce autonomic actions into games. The spectrum
of means through which humans communicate is very wide and diverse. The human
body, particularly the face, is highly expressive, and computer vision (CV) devices, for
example, can be powerful tools to monitor those expressions. But most contemporary
video games primarily use CV for motion tracking, granting the player direct control over
certain game elements, like the Microsoft Kinect that visually traces the movement of
the player's body. Kinect Star Wars (Terminal Reality, 2012) can serve as an example here.
Augmented reality has been another focus in CV based games, but, in this context,
it just seems to be another variation of the previous. LevelHead (Oliver, 2007/2008) or
Invizimals (Novarama, 2009) are examples of this.
Video games will only be able to include players' autonomic actions in the gameplay
when they are capable of sensing and interpreting the modulations of their various states:
anxious, excited, relaxed, disoriented, aroused, for example, through bodily responses
such as heart rate, galvanic skin response, pupil dilation, facial and body expressions,
etc. We believe that this may uncover a yet unexplored path to exploratory and multi-
disciplinary studies in computer games, that will not only expand our knowledge of these
but also of how our own biology interacts with computational systems and ultimately
will allow the development of innovative video games.
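As a purely speculative sketch of what such a transcoding layer could look like, the Python fragment below fakes a heart-rate and a galvanic skin response reading (real sensor drivers would replace the two placeholder functions, which are assumptions and not an existing API) and folds them into a single arousal estimate that modulates one gameplay parameter.

```python
import random

# Speculative sketch: fake bodily signals are collapsed into an arousal value
# that modulates a gameplay parameter. The sensor functions, value ranges and
# weighting are all illustrative assumptions.

def read_heart_rate():
    """Placeholder for a real heart-rate sensor driver (beats per minute)."""
    return random.uniform(60, 120)

def read_skin_conductance():
    """Placeholder for a galvanic skin response sensor (microsiemens)."""
    return random.uniform(2, 20)

def arousal_estimate():
    """Naively normalize and average the two signals into a 0..1 value."""
    hr = (read_heart_rate() - 60) / 60.0
    gsr = (read_skin_conductance() - 2) / 18.0
    return max(0.0, min(1.0, 0.5 * hr + 0.5 * gsr))

def enemy_spawn_interval(base=3.0):
    """One possible mapping: the calmer the player, the denser the game."""
    return base * (0.5 + arousal_estimate())

for _ in range(5):
    print(f"next enemy in {enemy_spawn_interval():.1f} s")
```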

Acknowledgements: This work is funded by FEDER through the Operational Competi-
tiveness Programme (COMPETE) and by national funds through the Foundation
for Science and Technology (FCT) in the scope of project PEst-C/EAT/UI4057/2011
(FCOMP-Ol-0124-FEDER-D22700).

References

Cited Works
Age of Empires, Ensemble Studios, 1997.
Black & White, Lionhead Studios, 2001.
Brainball, Smart Studio, 1999.
BrainBattle, ARS Electronica Futurelab, 2012.
Invizimals, Novarama, 2009.
Kinect Star Wars, Terminal Reality, 2012.
LevelHead, Julian Oliver, 2007/2008.
Pac-Man, Namco, 1980.
PainStation, Volker Morawe and Tilman Reiff, 2001.
Pong, Atari Inc., 1972.
Populous, Bullfrog, 1989.

Sid Meier's Civilization, MicroProse, 1991.
Sonic the Hedgehog, Sonic Team, 1991.
Space Invaders, Taito Corporation, 1978.
Starcraft II: Wings of Liberty, Blizzard Entertainment, 2010.
Super Mario Bros., Nintendo Creative Department, 1985.
Super Street Fighter II, Capcom, 1992.
Supreme Commander, Gas Powered Games, 2007.
Tekken, Namco, 1994.
Utopia, Don Daglow, 1981.
Warcraft: Orcs & Humans, Blizzard Entertainment, 1994.
Wipeout, Psygnosis, 1995.
Worms, Team 17, 1995.

Bibliography
Björk, Staffan, and Jussi Holopainen. Patterns in Game Design. 1st ed, Charles River
Media Game Development Series. Hingham, Mass.: Charles River Media, 2005.
Crawford, Chris. The Art of Computer Game Design. Sue Peabody, 1984. Retrieved from
http://www.vancouver.wsu.edu/fac/peabody/game-book/Coverpage.html
Damásio, António. O Erro de Descartes. Temas e Debates: Círculo de Leitores, 1994.
Kahneman, Daniel. Thinking, Fast and Slow. London: Penguin Group, 2011.
Koster, Raph. A Theory of Fun for Game Design. Scottsdale, AZ: Paraglyph Press, 2005.
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace.
Cambridge, MA: MIT Press, 1997.
Norman, Donald A. Emotional Design: Why We Love (or Hate) Everyday Things. New
York: Basic Books, 2004.

Photography in Video Games: the Artistic
Potential of Virtual Worlds

Andr Carita
andrecarita@gmail.com
Universidade Lusófona de Humanidades e Tecnologias, Lisboa, Portugal

Keywords: Artists, Graphics, Immersion, Photography, Screenshots, Video Games.

Abstract: Photography has acquired a place and a growing meaning within video games.
To this has contributed the abrupt graphic evolution of video games, the spread of a grow-
ing number of virtual environments such as Second Life, and the creation of projects that
demonstrate the photographic potential of virtual worlds.
In this paper we aim to study the different ways in which photography may exist as
an artistic expression of video games. By facing them as imagery mazes containing an
undeniable creative potential, we explore the act of photography as gleaning and as a core
mechanic that enables gamers and artists to create an original view of their experiences.

1.
Photography and video games: a complex relationship

The relationship between photography and video games is extremely complex due to the
strong antagonism evidenced by their natures. Photography's analog nature is charac-
terized by a matrix of silver grains (minimal unit), while video games' digital nature is
characterized by a pixel matrix (minimal unit), or binary information. Another issue
is the dichotomy of presence/absence of the concept of the photographic referent, introduced
by Roland Barthes:

I call photographic referent not the optionally real thing to which an image or a
sign refers, but the real thing that was necessarily placed before the lens,
without which there would be no picture. (Barthes 2008, 87)

Unlike digital images, in a photograph it is extremely difficult to manipulate its
visual information.

Once recorded, the visual information is irreversible. The image is individual
property, is frozen, static. Any movement can only be as much as an illusion.
The digital image represents the extreme opposite. Each component of the image
is changeable and adjustable. Not only can the image be controlled and manip-
ulated as a whole but also, and more significantly, each individual aspect of it.
(Weibel 2000, 2930)

Still, it is important to consider that the history of photography also shows a signifi-
cant openness to different techniques, trends and applications. Photography established
not only a close proximity to the truth through report photography (Bauret 2010, 33–4)
but also, and due to its association with various artistic movements1, acted as a decep-
tive illusion (Bauret 2010, 97–8).

1. In the first half of the twentieth century, photography has maintained a privileged
relationship with Surrealism, adding different perspectives to those defined by the
concept of truth and that the nature of analog photography highlights.

Video games, as creators of digital images, explore the potential that their fantasy
allows, introducing a major flexibility regarding their representations. Yet, the repre-
sentations shown in the majority of video games suggest an increasing proximity re-
garding their real world referents. These representations distance themselves from the
abstract and arbitrary and become closer to the tangible and iconic; a more motivated
representation of the real, of photography. As such, the distance between video games and
photography is becoming increasingly shorter.
For this reason, terms such as realism or photorealism are more associated with
video games (McCarthy et al. 2005, 85, 104). In many video games, the creation of three-
dimensional virtual worlds is informed by photographs of reality itself. That is the case
of Wheelman, in which the virtual representation of Barcelona was mainly based on
a set of real photographs. Barcelona became the photographic referent and the photo-
graphic referent became the digital referent. In such cases, the photographic image as
a reality remnant (Aumont 2009, 93) is an instrument that approximates reality to
its virtual recreation. As Tim Shymkus points out, with the growing graphical repre-
sentation it is possible to achieve a greater realism, making it more believable to the
eyes of anyone who plays (Morris and Hartas 2004, 24).

2.
From analog to digital

Since the first photograph of Joseph Nicéphore Niépce in 1822, there has been a huge
evolution of images and of photography in particular. The emergence and nurturing of
digital photography at the end of the twentieth century has originated an increased
democratization of photography.

From analog to digital; the great mayhem that affects all forms of message as
well as the different treatment processes and communication has obvious re-
percussions on the world of photography. (Bauret 2010, 21)

Digital photography is a delicate concept. It combines two clearly conflicting op-
posites (analog photography and digital imaging) and raises a dangerous idea of
replacement. Regardless of the analysis of its nature or support, digital photography
should be understood as an evolution, as a modern image in the field of photography.
This image explores the modernization of the record itself to digital format. Digital pho-
tography is the result of hybridization between traditional phenomenology of analog
photography and the computerized nature of digital image.
With digital devices increasingly automated and of constantly evolving quality,
anyone has the opportunity to play with them. The look and spirit of the photog-
rapher are now free from any technical constraints (Bauret 2010, 21), so there are fewer
constraints on the photographic praxis (Flusser 1998, 74–5). The device is a toy and not
a tool. (…) The man who handles it is not a worker, but a player: it's not homo faber, but
homo ludens (Flusser 1998, 44). Despite the evident automatism, some digital cameras,
such as the Nikon D40, allow the photographer to choose manual settings (focus, ap-
erture or shutter speed) before taking a picture. This extends the technical capabilities
of the device, as well as the knowledge of the photographer who handles the cameras
in order to overpower them and trust them with significant, accurate and precise functions
(Bauret 2010, 45). However, although it is possible to explore and assimilate a number
of applications of the phenomenology of analog photography in these devices, the indexical
character that has always been part of its definition and its analog nature is lost. The
captured images assume a digital nature similar to that of video games (pixel matrix),
reinforcing an undeniable proximity. When the shutter button of the device is pressed,
the image is automatically digitized, converted into information in a JPEG file format
and stored on a memory card. In a video game, by pressing the printscreen button, the
computer temporarily registers in its memory the image's information captured on that
screen. Later, the player can save this record to a file in the JPEG format and store it
on the computer's disk. This possibility encourages an open field for experimentation, in
which players can simulate photographic acts.
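The gesture can also be reproduced outside the game software itself; for instance, assuming the Pillow imaging library is installed, a few lines of Python grab the current screen and store it as a JPEG file, much like a printscreen followed by a manual save.

```python
from datetime import datetime
from PIL import ImageGrab   # assumes the Pillow library is installed

# Grab the current contents of the screen and store them as a JPEG file,
# mimicking the printscreen-and-save gesture described above.
screenshot = ImageGrab.grab()
filename = datetime.now().strftime("screenshot_%Y%m%d_%H%M%S.jpg")
screenshot.convert("RGB").save(filename, "JPEG", quality=90)
print(f"saved {filename}")
```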

3.
Imagery mazes

The three-dimensional virtual worlds of today's video games offer light, environment,
perspective and depth of field. They offer an aesthetic that invites the player to a closer
and more attentive look, a deeper contemplation and visual immersion. Their interactive image
produces a virtually infinite set of other images, where contemplation is often super-
imposed on action. As in reality, the majority of the video games' virtual worlds allow
for an unlimited freedom of imagery. The 360 degrees simulate what we can visually grasp, not
simultaneously but through choices and intentions by the player. He is free to look at
any part of the image (as he is free to look at any part of the reality) (Aumont 2009, 163).
Some producers of video games seek to evolve their work in order to emphasize the
full potential of gaming visuals, primarily in how they are able to simulate realistic
effects in digital aesthetics. In this context, aesthetics must be understood as a reflec-
tion of experiences, since the players are invited to increasingly enjoy virtual environ-
ments that are capable of stimulating an insatiable sense of contemplation. This idea
of contemplation in video games resembles the idea of contemplation of reality. The
player will have to make choices and act accordingly, in order to select what he wants
to contemplate on the screen. Despite the dynamics associated with the interactivity of
video games images, they can still encourage a mental connection exercise that allows
players to get into the image while playing and thus contemplate, scrutinize, get into
(Barthes 2008, 1101) the digital matrix and explore its imagery labyrinth much like a
photographer explores reality (DuChemin 2009, 27). All photographs of the world form
a maze (Barthes 2008, 83), and each individual within it explore a path defined by per-
sonal readings and interpretations. In video games the same occurs with each player.
As he gets into the images, he becomes more immersed in the maze that holds them.

Immersion does not privilege images more than before; rather, it simply takes
images to another level. It is important to remember that immersion is only
possible if the immersant agrees to participate. (Burnett 2004, 77)

Sandbox video games such as Grand Theft Auto IV or Fallout 3 have mazes concep-
tually open to the capture of an endless set of images. The player immerses himself in these
mazes while controlling the character within the virtual world, and contemplates what
lies before his eyes. All elements related to the composition of visual images define the
depth of the maze, and force the player to stay and explore it, facing a mental and con-
tinuous negotiation process. The interest becomes the image as a dimension, as a latent
history, and the players, unlike photographers, record these images while immersed in
the virtual situation.

For a photographer gazing through a viewfinder, reality is mediated by the cam-
era. Some describe a distancing sensation, one in which the photographer is dis-
engaged from a situation. (Albor 2010, see also Flusser 1998, 74)

The imagery mazes of most video games support a greater transparency. An observation
without a camera in a virtual world may, in many cases, comprise several advantages:
there is no restriction for the captured image, no fear of approaching possible dangers,
or any kind of theatricality by the virtual characters in manufacturing poses or behav-
ior change before the lens (Barthes 2008, 189). Such advantages considerably extend
the authenticity in virtual worlds that, although from different perspectives, has been
exploited both by players and by various photographers and artists. The player actively
captures screenshots while involved in the events of the virtual worlds, while photogra-
phers and artists prefer to act as observers. The player tries to illustrate his experience
while photographers and artists seek to disclose the different experiences (especially
multiplayer) that occur in virtual worlds. The interesting aspect is to observe that, al-
though different, the prospects created by both are always the result of their presence
in these mazes. Whether the subject is active or passive, the captured screenshots show
a similar artistic potential, since the vision of the photographer is not to see but to be
there (Barthes 2008, 58). As one can build a reflection from reality, in video games
one can also build a reflection from the virtual. This look is the gleaning of images of the
virtual in which one is and of one's experience, allowing the construction and emergence
of a visual corpus holding meaning and consequently open to critique.

4.
The photographic act as gleaning

This idea is explored by Agnès Varda in her 2000 documentary titled Les Glaneurs et la
Glaneuse, which notes that to glean is to gather the debris left after the harvest, an ancient custom
that is still active today, although in other contexts. Proceeding from Jean-François
Millet's painting Les Glaneuses, Varda builds a reflection on the persistence of gleaners
in contemporary society; those who live off others' debris, who collect debris in our
satiated society. The author's critique is also a self-critical view of her insatiable desire to
show images of a reality that exists but that nobody wants to see or take part in. The author
draws an important analogy as she acknowledges being the main gleaner of her docu-
mentary. Her role is to glean the images of the observed reality:

In gleaning of images, impressions, emotions, there are no laws. Figuratively,
gleaning is a mental activity. Glean facts, acts and information. For me, as a
person with poor memory, the things I gather are the ones that summarize my
travels. (Varda 2000)

Fig. 1. Agnès Varda in Les Glaneurs et la Glaneuse (2000)

The same holds true for photographers. The photographic gesture is a gesture that
involves gleaning. The photographer is, like Agnès Varda was during the documentary,
a gleaner of images from reality, who seeks to establish new circumstances according
to the available technical possibilities (Flusser 1998, 51). Currently, perhaps because
there is a saturation of pictures of reality and therefore greater difficulty in capturing

new images, many photographers and artists seek to create new circumstances in vir-
tual environments. In their projects, the virtual is approached as an environment to
be explored for the creative and evolutionary breadth it has demonstrated over the years.
An environment where people search, read, write, learn, meet, talk and play. In short,
an environment where they spend much of their daily time. In the 2006 exhibition
Photographs from the New World in New York, the English photographer James Deavin
presented a series of images captured within the virtual world of Second Life.

Second Life is wrongly named. Rather than a pale imitation of first or real life,
Second Life is best understood as a new extension of the human senses, and a
tool used in different ways by different people for different things. (…) Second Life
programmers believe that most users don't yet understand the full potential of
the environment in which they are currently gaming, chatting, shagging and so
forth. (…) This will change over time, one way to understand these photographs is
as a piece of Second Life history, markers of a time when people were still viewing
the new world through the eyes of the old. (Deavin 2006)

Eva and Franco Mattes, known as 0100101110101101.ORG, also presented some projects,
such as Portraits2, with several series of images taken in the virtual world of Second Life.
Their work seeks to represent and explore the relationship between identity and public
presentation in virtual worlds regarding the endless possibilities to create and fantasize.
They seek to document the existence of a (virtual) society in order to understand their
evolution (Bauret 2010, 58–60).

2. 0100101110101101.ORG, Portraits, available at: http://0100101110101101.org/home/portraits/index.html, last accessed on March 18th, 2013.

Fig. 2. Portraits, by Eva and Franco Mattes, also known as 0100101110101101.ORG.

Marco Cadioli, known as Marco Manray, is a photographer of virtual worlds. In his
website3 he publishes projects on the images he captures in the virtual world that he
discovers and explores, both on the Web and in many video games. Cadioli builds on the
theoretical foundations of photography to broaden the discussion on what he considers
to be an emerging form of artistic expression.

3. Marco Cadioli: Projects, available at: www.marcocadioli.com, last accessed on March 18th, 2013.

I travel across the net like a Japanese tourist in Europe. I jump from a place to
another. I travel across the net like a reporter to tell everything about a place
made of information. I take shots at the net.4

4. Marco Cadioli Internet Landscape, www.marcocadioli.com/internet-landscape, last accessed on March 18th, 2013.
5. Marco Cadioli Internet Landscape: reportage from the net, ARENAE, available at: www.marcocadioli.com/internet-landscape/arenae/index.html, last accessed on March 18th, 2013.

In 2005 he published ARENAE 5, a black and white report on various war scenar-
ios, summarized in a series of images captured on video games like Counter-Strike,
Wolfenstein: Enemy Territory and Quake III Arena. Even today, photographers participate

in the virtual space. The photographer seeks to discover never seen visions and wants to
discover them on the inside of the camera (Flusser 1998, 52). Likewise, Cadioli sought to
uncover insights within the virtual worlds of video games. In ARENAE, Marco Cadioli,
like Robert Capa, sought mostly action, the dynamics of the event, the conflict in virtu-
al scenarios fueled by players in online experiences. Unlike Robert Capa, Marco Cadioli
had the advantage of having a security that only virtual worlds allow by providing the
psychological experiences of conflict and danger while excluding their physical realizations.
In short, a game is a safe way to experience reality (Crawford 1997, 14). As Cadioli points
out, the images captured in video games are photographs of war, they dramatically re-
semble pictures of a real war, as well as photographs of actual war resemble video games.

When vision is spoken of in photographic terms, it is not spoken of merely as the things you see but how you see them. Photography is a deeply subjective craft, and the camera, wielded well, tells the stories you want it to. (…) You are central to your photography, and the camera is merely the tool of interpretation, not the other way around. (DuChemin 2009, 11)

Fig. 3. ARENAE, screenshots from Counter-Strike, by Marco Manray Cadioli.

Before pressing the print screen key, Marco Cadioli had to plan what he wanted to capture. To some extent, what he captured became as important as what he excluded (DuChemin 2009, 14), because it involved a selection and therefore an intention. As a photographer and not a player, Cadioli immersed himself in the mazes of video games to glean information, experiences and actions in the form of images. He built a meaningful corpus, open to multiple readings and interpretations, a corpus defined as a documentary record showing the events that occurred in these virtual worlds.

The work of these artists and photographers has been important in demonstrating that the practice of photographic acts within virtual environments, although simulated and technically limited, is possible. Video games have evolved to extend these photographic acts to increasingly accessible discoveries, also appreciated and respected by the players. Just as Agnès Varda collects things that summarize her travels, players collect images that summarize their gaming experiences. Many like to exhibit these images in virtual galleries on the Web. Some of these galleries are created for free at sites like Flickr, where players can store and share their screenshots. Building upon Flickr's slogan ('Share your photos. Explore the world.'), the players, besides showing the virtual worlds of video games, try to show how they personally see them. They invite visitors to explore the scenarios, characters, actions and events they experienced. All these galleries result from each player's insights. However, the photographic potential of video games is subjective, as is photography itself (Barthes 2008, 36–7), depending on the gameplay that each player experiences.

Faced with a reality, two photographers do not see the same thing or react the
same way, because the act involves their own photographic experience, sensitiv-
ity and culture. (Bauret 2010, 47)

For this reason there is a growing number of galleries created on the Web with substantial sets of screenshots captured by several players during their experiences. The website and gallery DeadEndThrills.com, created by Duncan Harris, is a very good example of this. Players, photographers and artists seek to convey artistic and expressive values in the digital images that they capture. They essentially show what they have gleaned from these virtual worlds.

5.
The photographic act as a gameplay core mechanic

Photography is being increasingly explored in several ways. Sports video games, such as the FIFA series, let players watch replays of various moments, celebrations or even the expressions of players. Others, like WipeOut HD, feature a photo shoot mode, allowing players to capture screenshots of the races they have undertaken. But, most importantly, in many video games photography has emerged as a gameplay mechanic. Dead Rising and Afrika are examples that explore the process of capturing screenshots with the aid of virtual cameras. The characters have cameras at their disposal, which allow players to control a set of techniques (such as zoom, scale and depth of field) to enhance the results of various visual compositions. In these titles, whenever one selects the camera, the perspective changes from third-person to first-person and the player begins to see the virtual world through the viewfinder. The photographer is not committed to changing the world, but to forcing the camera to reveal its potential (Flusser 1998, 43), and in video games the player is likewise not committed to modifying the virtual world, but to forcing the virtual camera to reveal its potential. The diversity of images depends on the diversity of each photographer's intentions. Although simulated, the act of taking pictures as a gameplay mechanic is in itself a sign of the photographic potential that video games possess.

Fig. 4. Frank West, the protagonist and photojournalist in Dead Rising.

In Dead Rising the player controls Frank West, a freelance photojournalist who, with his camera, documents an invasion of zombies in a shopping center. Despite being free to photograph whatever he wants, the player must be aware that all pictures are evaluated according to a scoring system that considers the captured elements, situations and actions. The goal is to cover the entire event and to report, through images, a story that is told through the progression of the game.

In Afrika, the character that the player controls is a professional photographer who aims to record various moments of the animal world, in particular in virtual scenarios of the African continent.

The quality of a photograph in Afrika depends entirely on how the game's camera operates. Depending on shutter speed, lens type, and positioning of the Sixaxis (which controls the orientation of the camera as though it were the camera itself), an animal in motion may be blurry, off center, or seemingly still. The game world is perceived from within via the camera, not just from outside via the screen. In-game cameras immerse players in a unique way. (…) Afrika adds depth by rewarding players money based upon the specific goals of a mission, as well as angle, target, distance to the subject, and technique (likely a combination of depth of field, exposure, and camera shake). (Albor 2010)

Fig. 5. The photographic act as a gameplay core mechanic in Afrika.

The monetary aspect of the game is of special importance, as the player needs it to purchase new equipment and improve the quality of the captured images. Afrika is the video game that incorporates photography most thoroughly. In sum, all the above mentioned titles demonstrate that, albeit in simulated form, it is possible to perform photographic acts in various virtual environments.

6.
Conclusion

The last ten years have been extremely important in reinforcing the closeness between photography and video games. As we demonstrated, there is an undeniable photographic and artistic potential that has recently gained greater recognition. This is confirmed by the projects of various photographers and artists like James Deavin, Eva and Franco Mattes or Marco Cadioli, and by the numerous galleries created by players on the Web. Video games such as Dead Rising and Afrika have explicit core mechanics that include the process of capturing screenshots, enabling and motivating players to capture and share their own experiences.

Even though we have seen a significant improvement in video game graphics, and consequently in their photographic potential, technological evolution and new generations of gaming consoles will certainly bring novelties to the gaming world. It is, however, essential to understand that this artistic potential will only be noticed and explored within the limits of photographic praxis. In essence, more important than the evolution of technology, video games or graphics is the gamers' and artists' ability to recognize and explore photography as an artistic expression within video games. Therefore, future work within this area should focus on the impact of video game evolution on the perception that gamers and artists have of its artistic potential.

References

Albor, Jorge. Photo Opportunities in Video Games, Moving Pixels, PopMatters, available at: <http://www.popmatters.com/pm/post/131562-photo-opportunities>, 2010, last accessed on January 18th, 2013.
Aumont, Jacques. A Imagem. Lisboa: Edições Texto & Grafia, 2009.
Barthes, Roland. A Câmara Clara. Lisboa: Edições 70, 2008.
Bauret, Gabriel. A Fotografia: História, Estilos, Tendências, Aplicações. Lisboa: Edições 70, 2010.
Burnett, Ron. How Images Think. Cambridge, Massachusetts: The MIT Press, 2004.
Crawford, Chris. The Art of Computer Game Design. Vancouver: Washington State University, 1997 [originally published in 1982].
Deavin, James. Artist statements: James Deavin | Photographs From the New World, published in the statement section of the site James Deavin | Photographs From the New World @ Jen Bekman Projects, available at: <http://www.jenbekman.com/shows/james-deavin-photographs-from-the-new-world>, 2006, last accessed on January 16th, 2013.
duChemin, David. Within The Frame: The Journey of Photographic Vision. Berkeley: New Riders, 2009.
Flusser, Vilém. Ensaio Sobre a Fotografia: Para uma Filosofia da Técnica. Lisboa: Relógio D'Água, 1998.
McCarthy et al. The Complete Guide to Game Development, Art & Design. Cambridge: The Ilex Press, 2005.
Morris, Dave and Hartas, Leo. The Art of Game Worlds. Cambridge: The Ilex Press, 2004.
Varda, Agnès. Les Glaneurs et la Glaneuse [movie], France, 82 min, 2000.
Weibel, Peter. El Mundo Como Interfaz, Elementos, n. 40, pp. 23–33, 2000.

The Design of Horacle: Inducing Serendipity
on the Web

Ricardo Melo
ricardo@ricardomelo.net
Porto, Portugal

Miguel Carvalhais
mcarvalhais@fba.up.pt
ID+, Faculdade de Belas Artes da Universidade do Porto, Portugal

Keywords: Information retrieval, information science, online services, social network services, user interfaces, visualization.

Abstract: Is Serendipity designable? Are we able to induce it or do we end up destroying it in the attempt? Horacle, a prototype hypothesis of a serendipitous system, is an exploration of digital serendipity accomplished through the facilitation of access to new and uncommon content, presented in a way that allows for the occurrence of processes that can be associated with serendipitous discovery. It is our objective, through this system and the analysis of the concept, to help recover the limitlessness of the Web by breaking through content bubbles and to assist the creation and discovery of insight through access to meaningful information.

1.
Introduction

The seemingly infinite amount of content that is accessible on the Web has created the necessity for tools that help to discover relevant and meaningful information. Tools such as search and recommendation engines or social networks attempt to aid the user's discovery of and access to content, and are constantly evolving. This is done through personalization (Montgomery and Smith 2009): learning increasingly more about users' patterns and habits in an attempt to deliver ever more accurate results that relate to the users' interests and tastes.
The personalization of these tools may, however, end up limiting the possibilities of the user, becoming a restrictive enclosure, an echo chamber of perpetuating tastes and content. This is what Eli Pariser named a 'Filter Bubble' (Pariser 2011): a filter that restricts and limits the diversity of content users have access to and their capacity to discover new, uncommon and unexpected information. In other words: a decrease in the potential for serendipity.
It is with this premise that we have created the Horacle prototype: a system that, through the analysis of how serendipity may occur on the web and of its inherent characteristics, may help to induce serendipitous discoveries by allowing access to new and diverse information, in a permissive context.

1.1.
Understanding Serendipity
Horace Walpole coined the term Serendipity in 1754 (cit), but the process it describes is one that is common throughout the history of human invention, from Archimedes's famous anecdote to the fortuitous discovery of penicillin by Fleming. It can be described as the accidental discovery of meaningful information, made possible by the sagacity of the observer. This combination of accident and sagacity is key to any attempt to induce serendipity.
Studies of serendipity can be found in various areas of research, but here we will focus on those regarding user interaction and information seeking, such as that of Elaine Toms (2000), who observed how users approached a digital newspaper in the hope of finding serendipitous patterns or methods to trigger serendipity. Users were asked to find an answer to a set of questions or to read or browse the newspaper for 20 minutes. Toms then observed that when the interaction was not guided by an objective, user decisions seemed less definitive and less predictable; however, there is no mention of any serendipitous discoveries.
A small study conducted by André et al. (2009), in an attempt to gather some new insight on the frequency of serendipitous encounters, asked a selection of individuals who considered themselves serendipitous to review their search history and report any clicked results not directly related to a task that led to any specific discovery. Of the eight participants, only two reported encountering something unexpected and none of them gathered any particular new insight.
This is, of course, an example of the elusive nature of serendipity. Most efforts attempting to observe it in a controlled fashion have been for naught. Only methods that could record the natural occurrence of serendipitous discoveries have had a degree of success, such as those of Foster and Ford (2003), who asked users to record serendipitous experiences in a mobile diary, with positive results.
In an explicit attempt to induce serendipity, Max, a software engine developed by José Campos and António Dias de Figueiredo (2001), used information retrieval techniques and heuristic search in order to discover information that is useful, yet not sought for. To do this, Max is informed of websites that are of the user's interest and then submits queries to a search engine, together with randomly chosen words, e-mailing the results to the user. In a two-month evaluation, 100 messages were sent, of which 7 were considered of interest. Its 7% success rate, while seemingly low, is an encouraging number considering the fleeting nature of serendipitous experiences.
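A rough Python sketch of this kind of query construction is given below; the interest terms, the pool of unrelated words and the search() stub are illustrative assumptions of this sketch, not part of Campos and Figueiredo's actual implementation, which e-mailed its results to the user.

import random

# Illustrative inputs: terms harvested from sites the user marked as interesting,
# plus a small pool of unrelated words used to nudge the queries off-topic.
interest_terms = ["generative art", "data visualisation", "sound synthesis"]
drift_words = ["harbour", "mycelium", "ledger", "petrichor", "antenna"]

def build_queries(n=5):
    """Combine an interest term with a randomly chosen word to form n queries."""
    return [f"{random.choice(interest_terms)} {random.choice(drift_words)}"
            for _ in range(n)]

def search(query):
    """Stub standing in for a real search-engine call (an assumption of this sketch)."""
    return [f"result for: {query}"]

if __name__ == "__main__":
    for query in build_queries():
        for hit in search(query):
            print(hit)  # Max instead e-mailed such results to the user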

1.2.
Inherent characteristics of Serendipity

In an attempt to discover exactly what can be acted upon when attempting to induce
serendipity, we identified four broad characteristics that are intrinsic to the process.
Nature (accidental)
For something to be considered a serendipitous experience, it has to happen in a random and unexpected way. This has been one of the defining characteristics since the creation of the term and is key to the whole process. It is also what we may call an actionable characteristic, meaning that it can be acted on in attempts to design for serendipity.
Context
The context of the user at the time it happens: there are particular physical and mental circumstances that are common to serendipitous discoveries. This is also an actionable characteristic of serendipity, as we can identify and reproduce the contexts or processes that are associated with serendipitous discoveries.
Mind
The third characteristic is the capacity to recognize the discovery and its inherent value, what Walpole originally described as sagacity. As this is specific to, and dependent on, the one experiencing serendipity, it is not an actionable characteristic.
Value
If an event isn't in some way valuable to the user, then it's not serendipity. While value is subjective to the user, we can attempt to increase the odds of a valuable outcome occurring by increasing the relevant content that is made available to the user. The value itself largely depends on the experiencing user, so it is not actionable either.

2.
Theoretical Framework

Not a mere coincidence


Serendipity may be mistaken for coincidence, and it can, indeed, occur due to it; however, it does not depend on the (im)probability of an event happening for it to exist, if we resort to Margaret Boden's definition of coincidence as 'a co-occurrence of events having independent causal histories, where one or more of the events is improbable and their (even less probable) co-occurrence leads directly or indirectly to some other, significant, event' (Boden 2004, 235). It was not only probable but inevitable that the water level would rise in Archimedes's bath; as such, we cannot describe what happened as the result of a coincidence, but of the capacity of the observer to understand that seemingly unrelated event as an apropos, serendipitous one.
Randomness as a creativity tool
Both serendipity and coincidence, however, inherently have a certain degree of randomness. As we have seen, randomness is a prerequisite for serendipity, as per its accidental nature. This assumes randomness in the event itself: unsought and uncontrolled. Randomness is a tool well documented in creative practices throughout history: a method used to overcome creative barriers or to provoke the unexpected, such as Mozart's Musikalisches Würfelspiel in C, K. 516f (written in 1787 and published in 1793), Iannis Xenakis's development of his stochastic music or the cut-up techniques employed by the dadaist Tristan Tzara.
The value of Idle time
One particular aspect associated with serendipitous experiences is the recurring act of changing context. This could be referred to as a necessity to wander or, as in many examples, to simply go for a walk. These common recurring activities, such as gardening, washing dishes or taking a bath, when associated with a creative breakthrough, describe a period of incubation, in which active research is halted and the researcher focuses on a completely different activity, normally mechanical in nature. One interesting example of this is the physicist Hermann von Helmholtz, as reported by Graham Wallas (1926), who said that ideas came to him unexpectedly and without effort and that, rather than occurring at his working table (…), they came particularly readily during the slow ascent of wooded hills on a sunny day.
This concept was explored by Csíkszentmihályi and Sawyer (1995), who interviewed nine individuals, 60 years or older and actively involved in creative work. All of them mentioned the importance of a certain kind of idle time, crucial to creative insights. Some of their interviewees actually scheduled a period of solitary idle time in order to be creative, following a period of hard work.
Serendipitous browsing
Search has dominated our interactions with information seeking on the web. We no longer surf the Web, but rather ferry across it, towards our goal and without detours. To surf the web denoted an underlying exploratory state: to surf is not to dictate our will upon the ocean, but to ride it, to let it take us in its currents, with minimal control over direction, going, wave-like, from website to website.
While we are now much more precise when finding information (click-through rates plummet after the first page of Google: http://searchenginewatch.com/article/2049695/Top-Google-Result-Gets-36.4-of-Clicks-Study), there are still services that promote this wandering state. The most prominent are social networks such as Facebook or Twitter, which facilitate an aimless wandering through their content, with easily visualized images and videos. Another example is StumbleUpon, a discovery engine that combines machine learning with human opinions, allowing its users to stumble upon web pages that relate to their previously indicated interests. The user is unaware of what page is going to be shown, although they can fine-tune the possible results by thumbing each page up or down.
As per Elaine Toms's (2000) distinct methods of approaching an online newspaper, this type of wandering browsing opposes the goal-driven, conscious browsing one might engage in when searching for a particular item. This distinction between purposive and non-purposive browsing reflects the findings of Oscar De Bruijn and Robert Spence (2008) on serendipitous browsing.

De Bruijn and Spence define serendipitous browsing as browsing which occurs without a particular goal in mind, which may happen in two ways: Opportunistic Browsing, where the user intentionally looks for content but without a clear notion of what, in a state of 'seeing what's there', and Involuntary Browsing, goal-less as well but unintentional, when the user's gaze moves naturally through a series of fixations, naturally focusing on a specific information item that might lead to 'a specific, fortuitous insight or the answer to a longstanding query' (De Bruijn and Spence 2008).
This serendipitous browsing resulting in a breakthrough denotes a kind of ideation arising from a question in a state of incubation, akin to the breakthroughs described above regarding the value of idle time. This is in a way reminiscent of the psychoanalytical technique of Free Association, developed by Sigmund Freud. In this technique, patients are encouraged to verbalize their thoughts and feelings, without restriction or fear of embarrassment. This was done in the hope of helping to surface repressed thoughts, making the patient aware of them and, then, able to act upon them.
By considering these various concepts (randomness, idle time and serendipitous browsing), we begin to form a pattern of necessities that create a permeable state for serendipity to occur. It is by attempting to reproduce a process that happens during an opportunistic or involuntary browsing, occurring during a period of idle time, and by confronting the user with new, uncommon and unexpected information, that one might encounter a new item that, in turn, could lead to a breakthrough or insight. If we can achieve this we have, indeed, induced serendipity.

3.
Designing Horacle

It is our intention to develop Horacle as an ever-evolving hypothesis of our study. It reflects our concerns about the increased personalization of the web and how it limits our access to new information, as well as the web's purpose as a method of creativity and discovery. Being developed concurrently with this ongoing research, it is a reflection of our thoughts and discoveries on the matter and, as such, is constantly evolving; an evolution that will continue as new insights on the matter occur.

3.1.
Traits for serendipity
Our analysis of the available literature on Serendipity, as well as an observation of a series of online systems that, intentionally or unintentionally, help serendipitous discoveries (Melo and Carvalhais 2012), has allowed us to identify a series of common traits that are recurrent in serendipitous systems. It is the implementation of these traits that directs the course of the design of a serendipitous system.
Purposelessness
Purposelessness describes an interaction that is deprived of objective, as per De Bruijn and Spence's serendipitous browsing. The system should allow for a casual wandering through content, without a defined goal, thus providing a context that is receptive to the creation of unexpected relationships between data. By allowing a wander-like browsing and exploration of content, we encourage the mind and gaze of the user to drift freely, following a whim. This could lead to the discovery of something unexpected or allow the user to disengage from active thought on a problem and enter a state of idea incubation. The system, in this case, would serve as the change of context referred to by Csíkszentmihályi and Sawyer (1995), and could allow for the uncovering of connections by forming patterns between sub-conscious processes and the confrontation of these with the visualization of new content.
Immediateness
In order to maintain a state of wandering, purposeless browsing, the system should require minimal interaction on the part of the user. If a user is required to actively interact with content, this engages the mind and shifts it from a state of observing content to one of active interaction.
Diversity
Increasing the diversity of the information available can increase the probability of a discovery or of a connection between pieces of information being made. It is also through access to a rich variety of content that we can hope to break through the filter bubble and give users access to information that can help to broaden their horizons and truly surprise them.
Curiosity and Playfulness
The user needs to be enticed to use a system in order to achieve the state of engagement necessary for a purposeless, unconscious and serendipitous browsing to occur. And since playfulness is recurrently associated with curiosity, creativity and ideation, by applying these principles we encourage the mind to enter a state that is conducive to discovery.
Randomness
We have previously established the accidental nature of serendipity; it is one of its defining characteristics. As such, we believe that by introducing a certain degree of randomness into an interactive system, we can increase the probability of unexpected and fortuitous events. The advantages of introducing randomness have been documented by Leong, Vetere and Howard (2008) in their analysis of the shuffle functionality of music players, noting a more relaxed experience of music by users when freed from the burden of choice.
Designing decisions
The design of Horacle was guided by the attempt to implement the five different traits for serendipity, with the clear intention of providing access to diverse and possibly relevant information that could be accessed in an overview state: during idle time, in a state of contemplation or wandering, in a goal-less, non-purposive way, all in a playful interface that would entice its users into a continued experience, allowing access to content with minimal direct action by the user. For this, the system should present content fully, when possible, with the capacity to allow for a specific focus on a particular item.
As such, and after experimentation with other layouts, we decided upon a fluid, orbital-like layout that represents three different types of content, as they relate to the user: (1) content that the user has marked as relevant; (2) content that is recommended to the user according to their demonstrated interests; and (3) random information from the Web. The first two categories would be representations of the user's tastes and interests, while the third would introduce the level of randomness needed for serendipity.

With these three levels of content, we are able to provide the necessary context for the
possibility of interesting juxtapositions of information to occur, in an attempt to create
unforeseen relationships between them.

Fig. 1. Horacle wireframe with equal distribution of content.

These levels spread from the center, or nucleus, in corresponding degrees of direct relationship to the user: closer to the center we find the saved content and farther out the random content, with recommended content in the middle. These levels are also visually distinct from each other through color coding.
Initially, the system divides the content, and its respective levels, into equal amounts; however, the user is able to control this by choosing to increase one particular category (and correspondingly decreasing the other two). This allows the user to choose between viewing an equal amount of each, more of one and less of the other two, or a view totally dedicated to one of the three variables.

Fig. 2. Horacle controller: 1/3, 2/3 and 3/3s, as well as the unstructured mode.

This is done through a controller found in the nucleus of the system, which also incorporates a shuffle mode that removes the visual indications of the types of content as well as randomizing the proportion of each. This shuffle mode would be a useful method for removing the user's preconceived notions towards the content.
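A minimal Python sketch of how such a controller could mix the three content levels is given below; the content pools, weights and feed size are assumptions made for illustration and do not reproduce the prototype's actual code.

import random

# Illustrative content pools; in Horacle these would be fed by the user's saved
# items, a recommendation source and a random-content source respectively.
pools = {
    "saved": ["saved item A", "saved item B"],
    "recommended": ["recommended item A", "recommended item B"],
    "random": ["random item A", "random item B"],
}

def mix_content(weights=None, total=12, shuffle=False):
    """Build a feed of roughly `total` items drawn from the three content levels.

    `weights` maps each level to its share of the feed; in shuffle mode the
    shares are randomized, mirroring the controller's shuffle behaviour.
    """
    if shuffle or weights is None:
        raw = {level: random.random() for level in pools}
        norm = sum(raw.values())
        weights = {level: value / norm for level, value in raw.items()}
    feed = []
    for level, share in weights.items():
        count = round(total * share)  # rounding makes the feed size approximate
        feed.extend(random.choice(pools[level]) for _ in range(count))
    random.shuffle(feed)  # interleave the levels before display
    return feed

# Equal distribution (the system's initial state), then a shuffled mix.
print(mix_content({"saved": 1 / 3, "recommended": 1 / 3, "random": 1 / 3}))
print(mix_content(shuffle=True))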

3.2.
Conclusion and Future Work
The concept of serendipity and its implications for the web have been the subject of gradually increasing concern as the creators of online services and platforms realize its value and its implications for information seeking and access to content. Google's executive chairman Eric Schmidt, at the 2010 TechCrunch Disrupt conference, said that the company hoped to one day tell people things they may want to know as they are walking down the street, without their having to type in any search queries (Krotoski 2011). Schmidt called this a 'Serendipity Engine'.
The capacity of the web to provide true serendipitous experiences has also been the subject of diverging opinions (Darlin 2009; Johnson 2011), but regardless of the current limitations of the web regarding serendipity, our focus is to understand how the process of serendipity occurs and how this can inform our design decisions, in order to create better and deeper tools.
Through the review of existing literature on the subject, and particularly on its implications for the web and for digital interactions, we have been able to define a series of identifiable processes and traits that guided our choices for the design of Horacle (and of other possible serendipitous systems).
Horacle is a work-in-progress in continuing development. Its development mirrors our evolving understanding of designing towards serendipity and, as such, its characteristics are in permanent mutation. In this current hypothesis we have attempted to accommodate our five defined traits for serendipity, allowing for the constraints of the medium and of its implementation.
In future work, we will continue the development of Horacle as a working hypothesis of a serendipitous system to a fully functional state, as well as conduct initial user testing in order to evaluate its true capacity for discovery. We will also continue the examination of the serendipitous process and its implications for the web, information discoverability and creativity.

References

André, Paul and Schraefel, M. C. Computing and chance: designing for (un)serendipity. The Biochemist E-Volution, 2009.
Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 1990. Second Ed.
London: Routledge, 2004.
Bruijn, Oscar De and Robert Spence. A new framework for theory-based interaction design applied to serendipitous information retrieval. 2008.
Campos, José and António Dias de Figueiredo. Searching the unsearchable: Inducing serendipitous insights. In R. Weber and C. G. von Wangenheim (Eds.), Case-based reasoning: Workshop program at ICCBR-2001.
Csikszentmihalyi, Mihaly and Sawyer, K. Creative insight: The social dimension of a solitary moment. In R. J. Sternberg and J. E. Davidson (Eds.), The nature of insight, 329–363. Cambridge, MA: MIT Press. 1995.
Darlin, Damon. Serendipity, Ping: The Digital Age Is Stamping Out Serendipity, Lost in the Digital Deluge. NYTimes.com, 2009. (http://www.nytimes.com/2009/08/02/business/02ping.html)

Foster, Allen and Nigel Ford. Serendipity and information seeking: an empirical study.
Journal of Documentation, 2003.
Johnson, Steven. Anatomy of an Idea, 2011. (stevenberlinjohnson.com/2011/12/anatomy-
of-an-idea.html)
Krotoski, Aleks. Digital serendipity: be careful what you don't wish for. The Observer, The Guardian, 2011. (www.guardian.co.uk/technology/2011/aug/21/google-serendipity-profiling-aleks-krotoski)
Leong, Tuck, Vetere, Frank and Howard, Steve. Abdicating choice: the rewards of letting go. Journal of Digital Creativity, 19(4), 233–243, 2008.
Melo, Ricardo and Carvalhais, Miguel. Designing For Serendipity: Systems and Methods for Serendipitous Discoveries on the Web. Proceedings ARTECH 2012, 329–332. 2012.
Montgomery, Alan L. and Michael D. Smith. Prospects for Personalization on the Internet. Direct Marketing Educational Foundation, 2009.
Pariser, Eli. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Books, 2011. Kindle edition.
Toms, Elaine G. Serendipitous Information Retrieval. In Proceedings of the First DELOS
Network of Excellence Workshop on Information Seeking, Searching and Querying
in Digital Libraries, Zurich, Switzerland: European Research Consortium for
Informatics and Mathematics. 2000.
Wallas, Graham. The Art of Thought, New York: Harcourt, Brace, and World, 1926.

Sudthuringer-Wald-Institut: Knowledge Sharing
for the End of the World

Jason M. Reizner
info@reizner.org
Faculty of Computer Science & Languages, Anhalt University of Applied Sciences,
Köthen (Anhalt), Germany

Keywords: Doomsday, Caves, East Germany, Digital Archives, Cellular Computing, Portable
Interfaces, CouchDB, Apocalyptic Knowledge Sharing.

Abstract: Sudthuringer-Wald-Institut is an independent, distributed research organization founded in a cave 200m deep below the Southern Thuringian Forest in the former East Germany. Physically positioned as a default site of refuge from the possibly inevitable collapse of the pervasive technological and social infrastructures that scaffold contemporary existence, the conceptual agenda of the Institute is framed by the present luxury of a world where discourse around mitigating unpleasant contingencies is still unhindered by the profound stress of needing to survive them. Embracing the ethos of 'hope for the best, expect the worst', the work of the Institute locates the creative potential of technocratic doomsday fetishism within the service of a pragmatic functionalism.

At present, while a resident presence in the cave remains unnecessary, the Institute's member researchers and practitioners throughout Europe, North America and the world collaborate, contribute and share ongoing research through an open, distributed digital architecture, consisting of both an internet-based Archive Platform and a growing number of personal Autonomous Node Devices. Scientific and creative output is maintained online, as well as in local archive nodes, and replicated to all other members of the Institute asynchronously, enabling an organic, cellular propagation of multiple independent archive instances.

1.
The World & The Cave

If we can't reformulate digital ideals before our appointment with destiny, we will have failed to bring about a better world. Instead we will usher in a dark age in which everything human is devalued. (Lanier, 82)

Let's briefly assume you got up tomorrow and the internet all of a sudden wasn't there. For whatever reason, instantaneous worldwide data exchange as we've come to expect over the last two decades has simply ceased to function, and, as a possibly related side effect, the global supply chain linking an unending stream of modestly-priced offshore-sourced personal technology devices has disappeared into history. Appreciating the irony that the must-have marketing ethos of this season's 'last device you'll ever buy' has just become the haunting consumerist epitaph for the last device you'll now ever own, the tendency to reach for this device and share this revelation on your social network of choice is interrupted, as alas, there is no network. The device's immediate use value having abruptly vanished, the natural question now becomes what is its exchange value: what else can be done with this sudden technological relic? It is within this hypothetical doomsday that we begin to examine the role of interaction designers in a world where the prevailing maxims of pervasive commodity computing and ubiquitous network connectivity cease to be relevant.
The uncomfortable tendency to link the vanguard of human interaction with the
industrial necessity to produce new and ever-obsolescing form factors and product cat-
egories has become counter-productive. In the supposed interest of fostering ever deeper
and more meaningful connections with others and the world around us, the cybernetic
fetish of the new has grown from academic science fiction into a pandemic cultural
trope, where actual human interaction is gradually superseded by woefully approximate
network-mediated simulacra that grow ever more nebulous with each product-cycle.

Late 20th-century machines have made thoroughly ambiguous the difference


between natural and artificial, mind and body, self-developing and externally
designed, and many other distinctions that used to apply to organisms and ma-
chines. Our machines are disturbingly lively, and we ourselves frighteningly
inert. (Haraway, 120)

In the nearly three decades since A Cyborg Manifesto, and the birth of what Negroponte
affectionately termed the Digital Revolution, these machines continue to grow orders of
magnitude livelier, through constantly improving infrastructures for production and
networked connectivity, and our interactions with them have become infinitely more
complex. We have grown so disturbingly accustomed to this fait accompli that we now
generally fail to even notice it:

Its literal form, the technology, is already beginning to be taken for granted, and
its connotation will become tomorrow's commercial and cultural compost for
new ideas. Like air and drinking water, being digital will be noticed only by its
absence, not its presence. (Negroponte 1998)

This questionable near-sightedness, of course, has not been helped by the flippant
normalization promoted by those cheerleading the notion that new media is not new
any more (Manovich, 70). As the once unimaginably difficult has become desensitizingly
ordinary with the realization of pervasive computing in its contemporary form, it would
seem that everything new is already old again. While this is an admirable testament to
the prescient capitalist longevity of Moore's Law, it underscores the fundamental break-
down between consistently exponential technological evolution and true innovation:

I long to be shocked and made obsolete by new generations of digital culture, but
instead I am being tortured by repetition and boredom. (Lanier, 121)

In a digital world still smitten with the quaint, if somewhat narrow-minded ideal
that anything is possible, we are so enamored of our present and near-term anticipated
capabilities that we have become relatively ignorant of the very real limitations inherent
to the broader framework that enables them. Our devices become cyclically obsolete, yet
our unwavering devotion to the system continually producing and underpinning them
remains in stasis. The presently accepted maxim holds that so long as there remains an
unending, always-connected stream of newer and faster, innovation will be sure to follow.
Unfortunately, in practice the results are somewhat more disappointing:

Let's suppose that back in the 1980s I had said, 'In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!' It would have sounded utterly pathetic. (Lanier, 121–2)

The ease and comfort afforded by the methodical certainty of the new has made for a
passive culture of complacency. When we choose to be ignorant of systemic limitations,
or to deny their existence entirely, innovation is arguably stifled.
There are, in fact, some very tangible limitations to the system. This is evidenced by
the lingering, romantic mythologies of a simpler, more altruistic fledgling public internet.
Although the widely espoused societal belief has been that the network is an infallibly
durable, continuously available and quasi-tolerant bastion of social freedom, these fan-
tasies are now being reconciled against the same industrial and political structures that
facilitate the enormous undertaking that is the system's ongoing operation.

The original Arpanet was designed to withstand the obliteration of a country and
remain useful because of its basic architecture. In other words, it was designed
to withstand an A-bomb. Over time, the backbones have been taken over by large
corporations and streamlined (to maximize profit) in such a way that the back-
bones themselves are now vulnerable. For the past decade, it has been doubtful
that the Net could withstand that A-bomb. (Dvorak, 2009)

Ironically, the spread of ubiquitous connectivity and virtually continuous uptime has come at the expense of the intrinsic architectural qualities that brought about the internet's very existence: the system's availability now directly correlates to its fragility. As has been demonstrated in recent, smaller-scale geopolitical struggles over the past several years, internet traffic can and will be unilaterally suspended at the first sign of unrest. Despite the internet's engineering pedigree as immune to the unpleasantries of mutually assured destruction, the present reality is that it is susceptible to the whims of a prevailing regime at the touch of a button.
In a culture where the concept of disconnection from the network at large is assumed
to be nothing more than an aggravating, albeit temporary nuisance, visualizing the com-
plete and unmitigated collapse of the internet remains a critical and theoretical taboo
outside of the context of a truly dire doomsday scenario. Any discussion of the need for
systems that function even partially independently of the centralized internet requires
invoking imagery of total annihilation punctuated by nuclear winter, although in reality,
the wheels could come off with substantially less spectacle.
As such, Sudthuringer-Wald-Institut is predicated on the End of the World as a
heavy-handed starting point for the sorely needed discourse about what our limitations
are, and what can be possible outside of the established canon of pervasive commodity
computing and ubiquitous network connectivity. Borrowing from a tradition spanning
from the dawn of civilization to the last vestiges of Cold War paranoia, the Institute
thrives on a preternatural human condition: when the going gets tough, the tough go
underground.

Fig. 1. Südthüringer-Wald-Institut Field Expedition Team, June 2012.

2.
The Institute

At present, while a resident presence in the cave remains unnecessary, the Institute's member researchers and practitioners throughout Europe, North America and the world collaborate, contribute and share ongoing research through an open, distributed digital architecture, consisting of both an internet-based Archive Platform and a growing number of personal Autonomous Node Devices. Scientific and creative output is maintained online, as well as in local archive nodes, and replicated to all other members of the Institute asynchronously, enabling an organic, cellular propagation of multiple independent archive instances.
Each node functions as a local collections manager and server, semi-public connec-
tivity access point and interconnect for ad-hoc wireless mesh networking between nodes,
providing all essential services necessary to function independently, even without the
presence of an external internet. Should the need to retreat underground arise, single or
multiple instances of the entire digital holdings of the Institute could easily be brought
beneath, establishing a core technical foundation enabling the research activities of the
Institute to continue as normal.
Structured around a tribal model that establishes symbiotic relationships between
specialists within a micro-community, the design of the Institute embraces the current
privilege of global collaboration afforded by our fortunate status as network users, without
falling victim to the perverse mentality of the hive mind. Centered on a small core group
of collaborators, the Institute is a purpose-built environment not specifically intended
for broad public consumption:

If you grind any information structure up too finely, you can lose the connections
of the parts to their local contexts as experienced by the humans who originated
them, rendering the structure itself meaningless. (Lanier, 138)

Maintaining an intimate social dynamic intentionally isolated from the relative anonymity of the open internet promotes the implicit operational trust necessary for knowledge exchange to flourish, especially in environments where physical presence must fill the void left by a sudden lack of digital interconnection. Within this close-knit transdisciplinary collective, each member can browse and freely copy items from every other member's archive, metering his or her own level of personal engagement while promoting a site for building and sharing dynamic, continually developing libraries of content.

User interaction is guided by a metaphor that blends elements of the Soviet tradition
of Samizdat with the distributed computing paradigm of eventual consistency. The ac-
tive, self-mediating information exchange within the archive serves as a basis for both
present-day discourse and future retrospective narrative:

(…) Tracing the various forms of labor that support that life, we should find that Samizdat constitutes an outstanding modern example of textual system, not only because it originates outside a capitalist economy, but because these texts highlight with special force the text's epistemic ambiguity. What was deemed truthful or valuable? How was this determined? The credit of any Samizdat had to be established for each text, and, further, at each phase of the material life of that text. (Komaromi, 4)

As the archive grows and evolves through use, its epistemological relevance advances.
In stark contrast to other subterranean archives such as Chauvet, Barbarastollen and the
Svalbard Global Seed Vault, which are charged with the task of ensuring the continued
existence of a static, historic encapsulation into posterity, Sudthuringer-Wald-Institut is
actively engaged in the speculation of an uncertain future.

Enlightened designers leave open the possibility of either metaphysical specialness in humans or in the potential for unforeseen creative processes. (Lanier, 52)

3.
Architectural Overview

The infrastructure of the Sudthuringer-Wald-Institut is designed as a modular, decentralized architecture that can function both inside and outside environments exhibiting internet connectivity, enabling consistent application functionality and user experience across the widest cross-section of client devices possible. Comprised of both a software layer, the Archive Platform, and a complementary hardware layer, the Autonomous Node Device (AND), the system ensures the work of the Institute can carry on even in the face of significant topological failure.
In the presence of internet connectivity, member researchers and practitioners can access the Archive Platform through a centralized web-based Archive instance, and, by extension, each Autonomous Node Device can communicate with this system and directly synchronize content. When internet connectivity is not available, each AND ensures users' continued local access to the Archive Platform through a choice of near-range connectivity options, including LAN and ad-hoc wireless. These connections, as well as support for wireless mesh networking, are also employed to establish connections between individual ANDs and allow direct synchronization and replication between devices when possible.
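The replication behaviour implied by this topology can be sketched roughly as follows, using CouchDB's _replicate endpoint; the host addresses, port and database name are placeholder assumptions rather than the Institute's actual configuration.

import requests  # third-party HTTP client used here to talk to CouchDB's REST API

CENTRAL = "http://archive.example.org:5984"  # hypothetical central Archive instance
PEERS = ["http://10.0.0.12:5984", "http://10.0.0.17:5984"]  # ANDs on the local mesh
DB = "swi_archive"  # placeholder database name

def reachable(base_url, timeout=3):
    """Return True if a CouchDB instance answers at base_url."""
    try:
        return requests.get(base_url, timeout=timeout).ok
    except requests.RequestException:
        return False

def sync_targets():
    """Prefer the central archive; fall back to whichever peer nodes respond."""
    if reachable(CENTRAL):
        return [CENTRAL]
    return [peer for peer in PEERS if reachable(peer)]

def replicate(local="http://localhost:5984"):
    """Ask the local CouchDB to pull from and push to each available target."""
    for target in sync_targets():
        for source, dest in ((f"{target}/{DB}", DB), (DB, f"{target}/{DB}")):
            requests.post(f"{local}/_replicate",
                          json={"source": source, "target": dest}, timeout=30)

if __name__ == "__main__":
    replicate()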

Fig. 2. Architectural Overview: system topology.

4.
Archive Platform

The Archive Platform is a browser-based distributed application built on a foundation of


open software technologies that allow it to function seamlessly across both the unified
online server environment and multiple Autonomous Node Devices, while providing a
consistently rich user experience on all supported client devices, from smartphones to
conventional PCs.
At the heart of the application stack is Apache CouchDB, a document-based database system authored in Erlang, a fault-tolerant programming language and runtime environment originally developed for mission-critical telecom equipment. CouchDB is an ideal fit for this project due to its HTTP-based query interface and its robust replication, synchronization and version control features. Functioning as both a database and web server, CouchDB has the unique capability of deploying complete web applications from a single server instance combining application logic and storage layers. Additionally, these applications can be easily replicated across multiple server instances and instantly deployed. By design, the system works 'with the grain of CouchDB [and] promotes simplicity in ... applications and helps ... naturally build scalable, distributed systems' (Anderson et al., 11).
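As an illustration of this HTTP-based interface, storing and retrieving one archive item could look something like the following sketch; the address, database name, document ID and fields are hypothetical examples, not the Institute's schema.

import requests

COUCH = "http://localhost:5984"  # local CouchDB instance on an AND (assumed address)
DB = "swi_archive"               # placeholder database name

# Create the database if it does not exist yet (CouchDB answers 412 if it does).
requests.put(f"{COUCH}/{DB}")

# Store an archive item as a JSON document; every field here is illustrative.
item = {
    "type": "field_note",
    "title": "Cave Site humidity readings",
    "author": "expedition_team",
    "created": "2012-06-23",
}
requests.put(f"{COUCH}/{DB}/field-note-001", json=item)

# Read it back over plain HTTP, the same interface the Archive Platform uses.
doc = requests.get(f"{COUCH}/{DB}/field-note-001").json()
print(doc["title"], doc["_rev"])  # CouchDB adds _id and _rev for version control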
Archive and synchronization management is made possible by the CouchDB applica-
tion framework developed by the Little Library Project. Originally conceived for sharing
content libraries across the cloud and personal devices, the customized Little Library
framework is assembled and maintained with the CouchApp toolchain, streamlining
application prototyping, modification and deployment.
Additionally, the framework delegates presentation-layer tasks, including User Interface rendering and event management, to jQuery Mobile. This JavaScript library provides consistent touch-compatible UI elements and behaviors across all current mobile device platforms and desktop browsers.

Fig. 3a. Archive Platform: application stack. Fig. 3b. Archive Platform in operation.

5.
Autonomous Node Device

Fig. 4a. Autonomous Node Device: hardware overview. Fig. 4b. AND-FX3a in the field.

Designed as a modular, self-sufficient system, the Autonomous Node Device (AND) provides all the fundamental hardware support necessary to operate the Archive Platform independent of the unified online server environment. Anticipating a world of production and distribution scarcities, this architecture functions more as a recipe than a rigid reference design, conceived to be assembled from and maintained with any number of commonplace commodity components. Foregoing esoteric manufacturing procedures, an individual Autonomous Node Device can be easily constructed by someone with basic familiarity with consumer electronics, out of parts and devices likely to be strewn around the wreckage of the average office, studio or living room.
The AND specifications outline the presence and minimum functional compatibility of each module without mandating a vendor- or platform-specific part. Abstracting the individual function sets of each component establishes a rudimentary set of requirements: the power supply module must furnish sufficient electrical power to operate the entire system, the server module must be capable of running an instance of CouchDB, and the network connectivity module must provide some form of basic, local TCP/IP connectivity, either wireless or wired. This approach ensures that the system can utilize the broadest range of hardware combinations possible, from handsets and embedded systems to full-fledged server-class PC hardware.
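One way to express such a recipe in machine-readable form is as a small build manifest checked against these abstract requirements; the module names and figures below are illustrative assumptions, not an official AND specification.

# Hypothetical build manifest for one Autonomous Node Device.
node_manifest = {
    "power_supply": {"output_watts": 30},
    "server": {"runs_couchdb": True, "ram_mb": 1024},
    "network": {"tcp_ip": True, "interfaces": ["802.11g", "ethernet"]},
}

def meets_minimum_spec(manifest, system_draw_watts=20):
    """Check the three abstract requirements: power budget, CouchDB capability, TCP/IP."""
    return all([
        manifest["power_supply"]["output_watts"] >= system_draw_watts,
        manifest["server"]["runs_couchdb"],
        manifest["network"]["tcp_ip"] and len(manifest["network"]["interfaces"]) > 0,
    ])

print(meets_minimum_spec(node_manifest))  # True for this example build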
Early AND prototypes were targeted to fit within a shoebox-sized enclosure; however, this form factor presented several issues during the course of development. Originally using repurposed ARM-based hardware running Android, first- and second-generation prototypes simply were not fast enough to reliably support multiple Archive Platform users, and encountered numerous thermal and power management problems. The current third-generation prototype version, AND-FX3A, is roughly the size of a consumer bread machine, and comfortably houses an off-the-shelf lithium-polymer power supply, a quad-core application server module and an 802.11g wireless network connectivity module. Production versions derived from the AND-FX3A will begin to be distributed to member researchers and practitioners later in 2013.

Fig. 5. Researcher Heino Weißflog evaluates AND-FX3A.

6.
A Journey Beneath

On 23 June 2012, members of the Field Expedition Team journeyed to the Sudthuringer-Wald-Institut Cave Site as part of the Inaugural Evaluation Expedition. The team was tasked with gathering advance first-hand geographic, environmental and social data impossible to obtain from the comfort of the lab, and the occasion also marked the initial underground field testing of one of the first functional Autonomous Node Device prototypes, AND-FX3A.
Several hundred meters below the forest, the team procured crucial baseline data necessary to the future in situ establishment of the Institute's activity following the End of Days, and prognosticated on the relative stability of the location for the next 50 to 70,000 years.
Furthermore, rigorous testing of the AND-FX3A verified the device's suitability for subterranean installation and operation across a spectrum of usage scenarios and client platforms. Although disconnected from external connectivity to the world above, the Archive Platform remained continually accessible throughout the Cave Site as expected, providing access to the entire holdings of the Institute Archive, as well as serving as a real-time repository for field data.
Despite the overwhelming success of the Inaugural Evaluation Expedition, a litany of other unanswered questions remains. As such, preparations by the Field Expedition Team for future research missions are already underway.

Technical Acknowledgments: Sudthuringer-Wald-Institut graciously exists on a hardware and software architecture based on a number of open technologies:
The Little Library Project: Archive management and synchronization platform.
http://github.com/rwadholm/The-Little-Library
Apache CouchDB: Document-based, RESTful database infrastructure.
http://couchdb.apache.org/
CouchApp: Javascript application framework for CouchDB.
http://couchapp.org/

jQuery Mobile: Cross-platform, touch-compatible UI framework.
http://jquerymobile.com/
IrisCouch: Cloud-based CouchDB hosting.
http://www.iriscouch.com/
Mobile Futon: Portable CouchDB installation and administration for Android
http://github.com/daleharvey/Android-MobileFuton
cyanogenmod: Expanded, unencumbered Android hardware support.
http://www.cyanogenmod.com/
OpenWRT: Open Linux-based firmware for wireless hardware.
https://openwrt.org/
OLSR: Wireless mesh networking support.
http://www.olsr.org/

References

Anderson, J. Chris, et al. CouchDB: The Definitive Guide. Sebastopol, California, USA: O'Reilly Media, Inc. 2010.
Brown, Martin. Building CouchApps: Create web applications stored in an Apache
CouchDB database. Accessed 2012.02.02. http://www.ibm.com/developerworks/
opensource/tutorials/os-couchapp/os-couchapp-pdf.pdf
Dvorak, John C. Net Lament: Fragility, Censorship, and Ruination. PCMagazine
(online), 30 June 2009. Accessed 2012.01.25. http://www.pcmag.com/
article2/0,2817,2349511,00.asp
Haraway, Donna. A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century. The International Handbook of Virtual Learning Environments, 117–158. Ed. J. Weiss et al. Springer. 2006.
Jung, Carl Gustav. Four Archetypes: Mother, Rebirth, Spirit, Trickster. Abingdon, UK:
Routledge. 2006.
Komaromi, Ann. Samizdat as Material Text: Methodological Implications. University of Toronto. 2006.
Lanier, Jaron. You Are Not a Gadget: A Manifesto. New York: Knopf. 2010.
Lavin, Peter. The Compleat [sic] CouchDB in 10 3/4 Pages. Accessed 2012.02.02.
http://www.objectorientedphp.com/articles/couchdb.html
Manovich, Lev. New Media From Borges to HTML. The New Media Reader, 13–25. Ed. Noah Wardrip-Fruin and Nick Montfort. Cambridge, Massachusetts, USA: MIT Press, 2003.
Mayr, Marcus. CouchApp Tutorial Documentation, Release 0.3.1. Accessed 2012.04.20.
http://web.student.tuwien.ac.at/~e0542042/enotes_tut/latex/CouchAppTutorial.pdf
Negroponte, Nicholas. Beyond Digital. Wired, 06.12 (1998). Accessed 2012.01.25.
http://www.wired.com/wired/archive/6.12/negroponte.html

Making Online Face-to-Face Interaction Easier
for Older People with Constructive Design
Research

Marianne Markowski
marianne@teletalker.org
Middlesex University, London, UK

Keywords: Online Social Interaction, Older People, Constructive Design Research, Research
Through Design.

Abstract: This paper reports early findings of employing constructive design research in
order to make online social interaction easier for older people. In the western world the
majority of computer illiterate people are older people. After investigating which forms
of online social interaction present the most obvious benefits for communication, it was
decided to focus on making online face-to-face communication more accessible and
easier for older people. For this the Teletalker, an installation with two online video kiosks connecting two places audio-visually and where a simple hand sensor operates the
sound, was built. Field research was conducted with the Teletalker connecting the com-
munal room of Age UK Barnet, London with London's Middlesex University's entrance hall.
Constructive design research allowed making the idea tangible in order to collect feedback,
to assess impact on its environment and to generate a discourse on the preferred state.

1. Introduction

The world has an ageing population. In 2010 there were around 120 million people over 65 years old in Europe, which was 16.2% of the European population. In the year 2075 an estimated 26% of the European population will be over 65 years old (United Nations 2011). For this research, older is defined as 65 years plus, since this is how the European Commission defines older people in general (European Commission 2012).
From around the age of 30, a person experiences increasing physical decline (Stuart-Hamilton 2006; Fisk et al. 2009; Sharit et al. 2008). For example, one in three people in their 80s experience mild cognitive impairment (Lawton Henry 2007). Given this physical and potentially mental decline, and being of an age when peers, friends and family die, it is even more important for older people to maintain social contact for their psychological well-being (Lester et al. 2011; Blaun et al. 2012).
The first part of the paper describes the results of reviewing the relevant literature
and subsequently constructive design research as a method is introduced. This is followed
by a detailed description of the Teletalker installation, together with an account of why
this design was selected to be built. Early results of the first round of field research are
reported, followed by the proposed next steps for this research.

2. Online connection for social connection

There is controversy in the research literature about whether Internet use increases or
decreases social connection between people and about its psychological benefits (Sum,
Mathews, Hughes, & Campbell, 2008).
Online communication might be particularly appealing to those individuals who per-
ceive themselves to be low in interpersonal competence and therefore prefer written or
mediated forms of interaction (Sum et al. 2008; Young n.d.; Kang 2007). One could argue
that online social interaction could have the effect of reducing offline social interaction1. Data from the Oxford Internet Survey shows that online social interaction does not seem to replace other forms of interaction with family or friends, such as interaction through visits, phone conversations and written communication, but complements it. Interaction through the Internet increased contact between friends and family who live further away. For a quarter of respondents it also increased contact with friends and family who live nearby (Dutton et al. 2009).

1. Personal communication with J. Culling, account manager at Foviance, London, UK, in November 2010, who said "I blame google that I talk less with my mum." He gave the example that previously he would have rung his mum to ask a question about cooking, for instance; now he simply googles it or poses a question on a discussion forum.

2.1. Online usage by older people
The number of older people who are computer literate is growing (Carpenter & Buday 2007). Approximately 30% of the age group 65-75 years are using the Internet on a regular basis in the UK, but only a quarter of people over 75 years of age have ever used the Internet (Lane Fox 2010; Williams 2010).
The table (see figure 1) by the Office for National Statistics (ONS) illustrates that 90% of all the people that go online send and receive emails and that this figure is the same across all age groups (Williams 2010). In contrast, 75% of all 16-24 year old users go online to post messages to chat sites, social networking sites and blogs, but only 8% of users aged 65 and older do the same.

Fig. 1. ONS table of Internet activities by age group (Williams 2010, p. 13).

Comparing the percentage of users of social media activities across the age groups, it becomes clear that the younger the person, the greater the use of social media. When we look at Uploading self-created content to any website to be shared and Telephoning or making video calls (via webcam), the difference in percentage between the age groups is less pronounced. The difference between percentages in the various age groups is even smaller for video-telephony. This could possibly be because video calling connects the communication flow between grandparents, parents and children.

2.2. The barriers to going online
The most frequently quoted reasons for not being an Internet user are cost, access to the equipment, or lack of interest and skills (Lane Fox 2010; Carpenter & Buday 2007). Other reasons that could be more age-specific might be related to attitudes towards computers: older people report fear of technology (Harwood, 2007) and a sense of its unpredictability (P. Turner, Turner, & Van De Walle, 2007). Turner collected data on the experiences voiced by older people who tried to learn how to use a computer. They commented on the disconcerting unpredictability of certain features and on their frustration at their own inability to remember the necessary sequence of steps (Turner, 2007, p. 290). Observations at an Age UK computer class confirmed suspicious attitudes towards computers, where participants called the computer a necessary evil or the all-seeing machine that creates neurotic young people.
Barrantes found that the use of the mouse, and in particular double-clicking, was a major stumbling block; but despite the existence of other input devices, older people wanted to use the mouse, so that they felt included rather than excluded by having to use something designed differently (Barrantes 2009). Other researchers who worked with older people
who needed assistive technologies such as a walker or hearing aid also noted the issue
of feeling stigmatised (Mullick 2001; McCreadie & Tinker 2005).
Melenhorst et al. studied older adults' motivation for technological adaptation by running 18 focus groups in the US and the Netherlands discussing the use of email and traditional communication methods. The results showed that perceived benefits are the primary incentive for older people's willingness to learn and engage with computer technology (Melenhorst et al., 2006). Or to put it another way: an older person would not
take up computer use and go online, even if they are given a computer and lessons free of charge, if they do not perceive benefits in using a computer. The older person would prefer to spend their time on something they can already do and enjoy, rather than having to learn something new when their lifetime is limited (sending an email versus writing a letter, for example).
Looking forward 30 years into the future, there may be no need to introduce the ben-
efits of online technologies to older people since half of the people now in their 60s are
familiar with the concept of a computer and going online, which means that the majority of older people will be computer literate and online (Pollard 2009; Carpenter & Buday
2007). However, as it stands now, it is important to keep older generations connected
with the technological advancements for their psychological well-being and self-esteem
(Lester et al. 2011; Blaun et al. 2012). Even if presenting the benefits of online Face-to-
face communication does not necessarily entice older people to learn the technology by
themselves, they will at least know what is possible and can tell their family or friends
about the experience. This way they might feel connected to what is going on in society
and not feel left out.

3. Constructive design research (CDR) as research method

Employing CDR, the Teletalker was built as a tangible artefact to elicit feedback and to further discussion on the role and form of online technologies for older people and their benefits. The Teletalker is an installation of two kiosks connecting two public places using Skype, appearing to work as an online window by constantly displaying the other location. The volume (which is off by default) is controlled by a simple hand sensor, which has been selected with the older user in mind. The Teletalker will be placed in carefully selected locations to which a large number of older people have access, in order to observe usage and reactions. The resulting discussions, further development of the artefact and academic discourse will form part of the knowledge generation.

There are numerous other examples of CDR2, such as iFloor, the Presence project and Maypole. The common denominator of these projects is that a product, system, space or medium takes centre place and becomes the key means of constructing knowledge (Koskinen et al. 2011). A constructive design researcher follows steps similar to those used in Action Research: iteratively planning, acting (i.e. producing a prototype, concept or scenario), observing and reflecting, whilst drawing on interdisciplinary knowledge (Koskinen et al. 2011; Basaballe & Halskov 2012).

2. Constructive design research has previously been labelled Research through Design (S. Bardzell et al., 2012).
Examples of CDR derive from a collaboration of various disciplines such as archi-
tecture, design, computer science and anthropology to name a few. CDR is particularly
helpful when research is dealing with a wicked problem (Buchanan 1995). For design
problems that are ill-defined or wicked (as opposed to puzzles which can be solved with
one correct solution) analysis can be exhaustive and a correct solution cannot be guar-
anteed. When dealing with a wicked problem a solutions-focused strategy is preferable
over a problem-focused one (Cross 2007).
If theory is developed from CDR, it is predominantly in the early steps of development, i.e. in the formation of nascent theory (J. Zimmerman et al. 2010). There is one strand of
CDR, which is labelled critical design in contrast to affirmative design. Critical design's role is to challenge pre-existing conceptions and norms that are usually designed into products, systems and spaces (Dunne & Raby 2001), as opposed to affirmative design, which operates within existing design expectations.
With the Teletalker research it is intended to elicit, with the help of the artefact, a discussion about the preferred state. The preferred state is the goal the researcher is trying to achieve with the design (personal communication with J. Zimmerman on 11/12/2011). In this case, it is the discussion and subsequent change in thinking by (older) people about online technologies, i.e. that online face-to-face communication can be made easy, as well as a change in expectations about forms of online technologies and how this can inform further projects and designs.
Older and younger people of the general public, colleagues and academics were able
to physically experience the Teletalker and talk about it either with each other or with
the researcher. In addition, the Teletalker research was presented at conferences, where
other academics and practitioners were invited to discuss it.
The design of the Teletalker does not only consist of the physical artefact, but also of the choice of placements and the communications around it. In fact, CDR demands more than producing a product: it also requires reflecting on and reviewing the artefact's impact on its environment at the same time.

3.1. Critique of constructive design research (CDR)
CDR has not yet been fully formalised with regard to how to capture design development and decision points, and how to assess the artefact and its impact. There have been calls to make the research approach more formalised (Basaballe & Halskov 2012; J. Zimmerman et al. 2010), but also views on keeping the research approach in general terms, since the situational project or research context is always different. For example, Gaver calls for a less structured approach that concentrates only on the main characteristics of CDR, such as the starting point, documentation of the design process, the artefact and its consequences. Gaver, in particular, advocates the use of an annotated portfolio to portray and document the design process (Gaver 2012).

4. The emergence of the Teletalker as a design response

When looking at the question of how to design online social interaction for older people, firstly relevant literature was reviewed and then user-centred design methods such as storytelling workshops (Schuler & Namioka 1993) were employed to identify the design requirements. The Teletalker research as such was initiated after collecting design requirements for a web solution, when it became obvious that a web solution would not have addressed the majority of older people effectively. It appeared that it would be more
useful to design a physical system that allowed older people to experience online technol-
ogy and its possible benefits directly without having to learn about computer technology.
The Teletalker is placed in a public space intentionally, so that older people are invited to
come to it, giving them a reason to leave their house. The Teletalker can be experienced
in groups, which also nurtures interaction (Vom Lehn & Heath 2002).

4.1. Why was the Teletalker selected over other possible ideas?
The Teletalker idea was selected over other possible design ideas such as designing a
website since:
- It was decided to concentrate on online face-to-face communication since it appeared to be the closest to offline face-to-face communication, where immediate feedback during communication is given. (Friendly) face-to-face communication can be seen as instantly rewarding in comparison to written online communication3.
- Findings from interviews with older people emphasised that having a reason to get out of the house, such as going shopping, was part of older people's social interaction. Therefore it is important for the research to place the Teletalker in public places where people can visit.
- The visual transmission also allows the user to experience the atmosphere of the other place as well as non-verbal communication between people.
- The design of the Teletalker is supposed to evoke curiosity to try it out (Romero et al. 2010). This is expected to generate interaction (Vom Lehn & Heath 2002) and discussion at each location, through the Teletalker and around the Teletalker. The design of the Teletalker might be a ticket to talk in itself4.
- The Teletalker view is constantly on for immediate use5, and therefore no computer literacy skills such as logging on, using a mouse or switching applications are required. The simple mechanism (a light-sensitive hand sensor) to switch the sound on / off (= hand on / hand off) has been chosen with older people's mobility and strength in mind.
- The Teletalker is a tool for connectivity between people of any age, but takes the older person and technological novice as a design requirement. Designing for older people exclusively could either result in specialised accessibility technology or fall into the stigmatisation trap, where it might be a useful service / tool / technology, but not accepted by older people since it communicates the message that one is old (McCreadie & Tinker 2005).
- The Teletalker concept asserts acute simplicity in order not to distract from the central aim of interacting socially with each other.

3. Social presence theory ranks the communication medium by the degree to which it conveys the physical presence of the communicating participants (Biocca, Harms, & Burgoon, 2003; Connell, Mendelsohn, & Robins, 2001; Walther, 1992). Social presence would be seen as low when people interact in computer-mediated communication (CMC) since there is a lack of non-verbal cues.
4. Sokoler and Svensson emphasise how ambiguity should be embraced when designing non-stigmatizing technology for social interaction for older adults. They found that everyday activities such as gardening can provide a ticket to talk with unacquainted older people (Sokoler & Svensson 2007).
5. According to the socio-emotional selectivity theory, older adults live more in the present and prefer to do things they get immediate pleasure out of (Carstensen et al. 1999).

4.2. The making of the Teletalker
Due to time constraints and constraints on resources the original designs had to be adapted. However, having researched the designs of televisions from the 1930s to 1950s, the concept of the Teletalker as a piece of furniture similar to the 1936 Baird T5 was followed (as shown in figure 2).

Fig. 2. 1936 Baird T5 picture, accessed on 14th April 2012. Shown courtesy of the TVhistory website.

Two 27-inch iMacs, which had cameras and speakers built in, were used for the kiosks. The Teletalker housing was created with Medium Density Fibreboard (MDF) and painted bitter chocolate brown to match the colour of the Baird T5. The hand sensor6 consisted of a hole in the shelf, in which the resistor was placed (see figure 3).

6. An Arduino board with a light-sensitive resistor was used to create the hand sensor.

Fig. 3. The Teletalker during field research in the quadrangle of Middlesex University. This photo shows
the hole in the body of the Teletalker at the height of 105cm and the light shining out of the hole. The user
needs to place their hand into the hole, covering the light-sensitive resistor, in order to activate the volume.
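
To make the interaction described above concrete, here is a small illustrative sketch of how the hand-on / hand-off behaviour could be wired on the kiosk computer. It is not the project's actual code: it assumes the Arduino simply prints "ON" or "OFF" over USB serial when the light-dependent resistor is covered or uncovered, that the pyserial library is available, and that the iMac's output volume is toggled with AppleScript via osascript; the port name is a guess.

    import subprocess
    import serial  # pyserial, assumed to be installed on the kiosk iMac

    PORT = "/dev/tty.usbmodem1411"   # hypothetical serial port of the Arduino
    arduino = serial.Serial(PORT, 9600, timeout=1)

    def set_muted(muted):
        # Mute or unmute the Mac's output volume via AppleScript, so the
        # Skype window itself never needs to be touched by the user.
        state = "true" if muted else "false"
        subprocess.call(["osascript", "-e", "set volume output muted " + state])

    set_muted(True)  # the installation starts silent, as a "window"
    while True:
        reading = arduino.readline().decode(errors="ignore").strip()
        if reading == "ON":      # hand covers the sensor: sound on
            set_muted(False)
        elif reading == "OFF":   # hand removed: back to silent viewing
            set_muted(True)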

5. Field research June 2012

From 12th to 15th June 2012, field research with the Teletalker prototypes was conducted. One Teletalker kiosk was placed in the quadrangle of Middlesex University, London
(see figure 5).

Fig. 4. A group of older people using the Teletalker at Middlesex University
speaking to a person at Age UK Barnet.

The second Teletalker was placed in the communal room of Age UK Barnet. The majority of the Age UK Barnet day centre clients are between 70 and 90 years old, have some form of locomotion restriction and are not computer literate. Between 35 and 40 clients visit the Age UK Barnet day centre daily; some visit repeatedly during the week. Data was collected through observation and interaction with people through the Teletalker, through individual interviews with people who tried it out, as well as with staff from the day centre. The Teletalker did not record the video transmission. With people's consent some video was filmed of people interacting through the Teletalker.
In total 27 conversations through the Teletalker were noted down. The majority took place between members of the researcher's team and day centre visitors. Eight conversations took place between students and day centre visitors.

Fig. 5: An edited video clip showing the use of the Teletalker (http://www.youtube.com/watch?v=Ucoy6pm3wyI).

5.1. Early results of the field research
Analysis of the data is still in progress, but here are early results.
- As expected, the Teletalker generated interaction and communication between younger and older people, as well as between the people at each location.
- The Teletalker seemed to have worked well as a window, giving each side a feeling of what is happening at the other location.
- The Teletalker introduced older people without computer literacy skills to online face-to-face communication.
- Tuesday's and Thursday's groups at Age UK seemed to receive the Teletalker positively. Several day centre visitors went up to the Teletalker, tried it out and spoke to students and Middlesex staff. Older people suggested practical applications for the Teletalker, such as serving as an information point in a major shop or for travel information.
- Wednesday's group at Age UK felt that their privacy was invaded. In particular, one person felt upset about not having been asked whether this research could take place near her seat. (Note: the Age UK day centre management has given reassurance that they will take extra care to inform everybody about future research in the day centre.)
- It was observed that younger students were more curious to try out the Teletalker by themselves. At the university's location, A-level students from the college across the road were coming in to see the cool machine, which fellow students had told them about.
- The hand sensor was very easy to use, although older people still needed guidance as to where to place their hand exactly. Once this was understood, older people did not have a problem using it.

5.2. Immediate lessons learnt from the first round of research
Signage is needed
It wasn't obvious without any signs what the Teletalker was, why it was there, or what a person needed to do to experience it.
Physical placement
The physical placement of the Teletalker was crucial in order for people to come up to it or to stop when walking past. When it was placed directly next to the main exit, lots of students stopped to have a look, but they did not stop when it was placed under the staircase. At the day centre the Teletalker was placed in the communal room, which worked well to give people at the Middlesex location an idea of what older people do in a day centre, such as playing cards.
A person always present at one location
The Teletalker was more effective when there was always a person present at one Teletalker. Ideally, the Teletalker was supposed to initiate random conversations between people walking past. However, in hindsight it was unlikely that two random people would approach the Teletalker at the same time and then start talking.
Technical issues
Technical issues did get in the way of enjoying the experience of the Teletalker. The Wi-Fi connection was not very stable at times, which meant the Teletalker disconnected several times. The sound and picture quality was not always adequate (most likely due to a limited bandwidth connection). In one instance Skype lost its volume functionality.
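
One modest way to make such failures visible during a field session is a small connection watchdog running alongside the call. The sketch below is only an illustration of the idea, not part of the reported setup: the host that is pinged and the check interval are assumptions, and re-establishing the Skype call is left as a placeholder since it depends on how the call is driven on the kiosk.

    import subprocess
    import time

    REMOTE = "8.8.8.8"   # hypothetical reachability target (e.g. the other site)
    INTERVAL = 30        # seconds between checks
    failures = 0

    while True:
        # A single ping; exit code 0 means the link looks healthy.
        ok = subprocess.call(["ping", "-c", "1", REMOTE],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL) == 0
        if ok:
            failures = 0
        else:
            failures += 1
            print(time.strftime("%H:%M:%S"), "connectivity check failed", failures)
            if failures >= 3:
                # Placeholder: alert a team member or restart the call here.
                print("link appears to be down; manual intervention needed")
                failures = 0
        time.sleep(INTERVAL)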

5.3. Preparing for the next round(s) of field research and discourse
Currently, modifications to the Teletalker have been made such as adding extra speakers
and improving the programming, so that the audio connection between the two places
is more immediate. The next location for the second round of field research has been
chosen. On 18th December 2012 the Teletalker connected the communal rooms of two Age UK day centres, which effectively meant connecting older people with older peo-
ple. However, due to technical problems the Teletalker volume was not working properly.
This round of field research will be repeated and the results will be compared with the
previous results, where students and older people were connected. Subsequent planned
field research, such as connecting two care homes, will add to the findings and provide
a more complete picture of how successful the Teletalker was in introducing older people
to online face-to-face interaction and what benefits it may bring. In discussion with the care home manager the Teletalker will be adapted to cater for the residents' requirements and become a Telewalker. This means that the Teletalker will be placed on a trolley and include a bell to ring for people's attention at the other location. However, the main outcome
of the constructive design research is not to propose the Teletalker or Telewalker as a
commercially viable product, but to generate discourse around the role technology can
play to connect older people and which physical forms it may take. This will be achieved
by holding a small symposium in July 2013 where representatives working with older peo-
ple, researchers focussing on older people, designers and some older people themselves
will take part. The Teletalker and Telewalker will be there to give participants a tangible
experience, results from the field research will be reported and participants are invited
to contribute to the imagined future uses and forms of the Teletalker. This symposium
will be filmed and results of the discussion will be reported.

6. Conclusion

This paper presented results of reviewing relevant literature with regard to older people and
their use of online technologies in the UK. It argued why it is useful to have older people
connected with online technologies. Further, it introduced the method with which the
Teletalker research is being conducted. Constructive Design Research (CDR) is particular-
ly helpful when dealing with wicked problems and where a solutions-focussed design
strategy is more applicable since analysis can be exhaustive and there might be several
possible design solutions. By building a physical artifact, research goals can be externalised and people can be provided with a tangible experience to give feedback on.
The Teletalker design response has been selected based on knowledge gained through
literature and from direct data collection. The main idea is to present a window where
online face-to-face interaction can be carried out in a very simple form (such as waving),
and so that the use of the technology becomes instantly rewarding.
The making of the Teletalker was described and early findings of the field research
reported. Analysis of the full results is still in progress, but preparations for further rounds of field research are being made. With a future round of field research the Teletalker will be transformed into a Telewalker to address the target audience's needs. This is a major transition of the Teletalker from a general research tool (which could be placed anywhere the researcher decides in order to connect older people) to a specific research
tool (connecting two care homes). This transition highlights the difference between re-
search for design (Frankel & Racine 2010) and constructive design research in as much
as the Teletalker has been built to externalise the researcher's knowledge rather than
being based on a real application need. In order to achieve a meaningful discourse in the
research community about the role and form of online social interaction technologies a
symposium will be held. In the symposium with selected stakeholders, such as represen-
tatives of organisations working with older people, the artifacts will be presented, field
research findings reported and a discussion generated on the future forms and appli-
cations of the Teletalker. It needs to be emphasised that not only the physical artefact, in
this case the Teletalker, is part of CDR, but also the data collection, the choice of placement,
direct and indirect feedback from the people who tried it and from the research commu-
nity. Generalisable knowledge can be reported on once the Teletalker / Telewalker has
been placed in the field at least three times, if not more, and when the researcher
has been able to reflect on the experiences including the use of CDR.
With this paper other researchers are invited to comment on the Teletalker research
in order to stimulate the discourse on the role of online social interaction technology for
older people and which physical forms it may take in the future.

Bibliography

Bardzell, S. et al. Critical Design and Critical Theory: The Challenge of Designing for Provocation. In DIS. Newcastle, UK, pp. 288-297. 2012.
Barrantes, S.S. Human-Computer Interaction with Older People: From Factors to Social Actors. Universitat Pompeu Fabra. 2009.
Basaballe, D.A. & Halskov, K. Dynamics of Research through Design. In DIS. Newcastle, UK, pp. 58-67. 2012.
Blaun, H., Saranto, K. & Rissanen, S. Impact of computer training courses on reduction of loneliness of older people in Finland and Slovenia. Computers in Human Behavior, 28(4), pp. 1202-1212. 2012.
Buchanan, R. Wicked Problems in Design Thinking. In V. Margolin, ed. The Idea of Design. 1995.
Caplan, S.E. Preference for Online Social Interaction: A Theory of Problematic Internet Use and Psychosocial Well-Being. Communication Research, 30(6), pp. 625-648. 2003.
Carpenter, B.D. & Buday, S. Computer use among older adults in a naturally occurring retirement community. Computers in Human Behavior, 23(6), pp. 3012-3024. 2007.
Carstensen, L.L., Isaacowitz, D.M. & Charles, S.T. Taking time seriously. A theory of socioemotional selectivity. The American Psychologist, 54(3), pp. 165-181. 1999.
Cross, N. Designerly Ways of Knowing (Board of International Research in Design), Birkhäuser GmbH. 2007.
Dutton, W.H., Helsper, E.J. & Gerber, M.M. Oxford Internet Surveys | OxIS - Oxford Internet Surveys. Oxford Internet Survey. Available at: http://microsites.oii.ox.ac.uk/oxis/ [Accessed February 26, 2012]. 2009.
European Commission. The 2012 Ageing Report, Economic and budgetary projections for 27 EU Member States (2010-2060). 2012.
Fisk, A.D. et al. Designing for Older Adults (Human Factors and Aging Series), CRC Press. 2009.
Gaver, W.W. What should we expect from research through design? In CHI'12. Austin, Texas, pp. 937-946. 2012.
Harwood, J. Understanding Communication and Aging: Developing Knowledge and Awareness [Paperback], Sage Publications, Inc; 1 edition. 2007.
Hoshi, T. et al. Touchable holography. ACM SIGGRAPH 2009 Emerging Technologies on - SIGGRAPH '09, pp. 1-1. 2009.
Kang, S. Disembodiment in online social interaction: impact of online chat on social support and psychosocial well-being. Cyberpsychology & Behavior: the impact of the Internet, multimedia and virtual reality on behavior and society, 10(3), pp. 47-57. 2007.
Koskinen, I. et al. Design Research Through Practice: From the Lab, Field, and Showroom, Morgan Kaufmann. 2011.
Lane Fox, M. Manifesto for a Networked Nation, London. 2010.
Lawton Henry, S. Just Ask: Integrating Accessibility Throughout Design, Lulu.com. 2007.
Vom Lehn, D. & Heath, C. Misconstruing interaction. In Proceedings of Interactive Learning in Museums of Art and Design. 2002.
Lester, H. et al. An exploration of the value and mechanisms of befriending for older adults in England. Ageing and Society, 32(02), pp. 307-328. 2011.
McCreadie, C. & Tinker, A. The acceptability of assistive technology to older people. Ageing and Society, 25(1), pp. 91-110. 2005.
Mullick, A. Universal Bathrooms. In W. Preiser & E. Ostroff, eds. Universal Design Handbook. McGraw-Hill, p. 42.8. 2001.
Pollard, M. Internet Access - Households and Individuals. 2009.
Romero, N. et al. Playful persuasion to support older adults' social and physical activities. Interacting with Computers, 22(6), pp. 485-495. 2010.
Schuler, D. & Namioka, A. Participatory Design: Principles and Practices, Lawrence Erlbaum Associates. 1993.
Sharit, J., Czaja, S.J. & Pirolli, P. Investigating the Roles of Knowledge and Cognitive Abilities in Older Adult Information Seeking on the Web, 15(1). 2008.
Sokoler, T. & Svensson, M.S. Embracing ambiguity in the design of non-stigmatizing digital technology for social interaction among senior citizens. Behaviour & Information Technology, 26(4), pp. 297-307. 2007.
Stuart-Hamilton, I. The Psychology of Ageing: An Introduction [Paperback], Jessica Kingsley Publishers; 4th Revised edition. 2006.
Sum, S. et al. Internet use and loneliness in older adults. Cyberpsychology & Behavior: the impact of the Internet, multimedia and virtual reality on behavior and society, 11(2), pp. 208-211. 2008.
Turner, P., Turner, S. & Van De Walle, G. How older people account for their experiences with interactive technology. Behaviour & Information Technology, 26(4), pp. 287-296. 2007.
Turner, Phil. Affordance as context. Interacting with Computers, 17(6), pp. 787-800. 2005.
United Nations. United Nations. Available at: http://esa.un.org/wpp/unpp/p2k0data.asp [Accessed November 15, 2011]. 2011.
Williams, M. Internet Access 2010. 2010.
Young, K. netaddiction.com. Available at: http://www.netaddiction.com/ [Accessed January 5, 2012].
Zimmerman, J., Stolterman, E. & Forlizzi, Jodi. An Analysis and Critique of Research through Design: towards a formalization of a research approach. In Aarhus, Denmark: DIS 2010, pp. 310-319. 2010.

Innovation, Collaboration, Education: Histories
and Perspectives on Living Labs

Gabriella Arrigoni
gabriella.arrigoni@ncl.ac.uk
Digital Media at Culture Lab, Newcastle University, Newcastle upon Tyne, UK

Keywords: Curating, Living Lab, Education, Open Source, Collaboration, Innovation.

Abstract: This paper suggests a genealogy of Living Laboratories (LL) by comparing similar-
ities in their development with media labs and experimental art schools. These histories
all share an interest in concepts of innovation, collaboration, interdisciplinarity, and in
the subversion of traditional forms of governance and knowledge production. Originally
conceived as a research environment in the field of computer science, and subsequently
applied as a curatorial strategy for exhibiting and evaluating interactive art, the idea of
the LL can be expanded and enriched with new potential. Looking at the models of media
lab and the educational turn in contemporary art can not only add a chapter in media
histories, but can also indicate a possible trajectory for LL towards the establishment of
temporary communities engaged in forms of knowledge exchange. By ascribing new
responsibilities to the public and addressing issues relevant to them, this can bring new
perspectives on audience development and offer a context more suitable for the presen-
tation of digital media projects.

1. Introduction

There is an increasing inclination in the art world towards a transition from spectator-
ship to active participation. Minimalism, happenings, public art, community specific art,
interactivity, discursive practices, all contributed to a tendency which experienced an in-
credible acceleration with the rise of the Web 2.0 and its possibilities in terms of sharing,
crowdsourcing and networking. The dream of a democratisation of art merged with the
development of new curatorial strategies and the creation of platforms for online collaborative curating or for facilitating the collective production of artworks (Paul 2006). The idea of a user-centred approach is rooted in business studies, particularly around the concept of the lead user developed by Eric von Hippel: according to his theories, innovation is largely generated by end-users rather than manufacturers (1986), whose role is mainly to respond to and implement new needs identified in the marketplace. Subsequently, disciplines such
as computer science, psychology and interaction design were informed by the principle
of an open, distributed innovation1, with the setup of dynamic environments to test user experience in a collaborative dimension closer to everyday life and

engage all stakeholders such as end-users, researchers, industrialists, policy makers, and so on at the earlier stage of the innovation process in order to experiment breakthrough concepts and potential value for both the society (citizens) and users that will lead to breakthrough innovations. (Pallot 2006)

1. This approach informs, for instance, ideas of Cooperative Design (Greenbaum and Kyng 1991) and Emotional Design (Norman 2004).
One of these platforms for innovation and experimentation took the name of LL and
inspired a redefinition of exhibiting strategies for interactive art. Beta_Space, launched
in 2004 at the Powerhouse Museum, Sydney, is an exhibiting space where interactive
artworks are showed at different stages, from early prototype to product, and where the
audience is involved in the evaluation process (Muller and Edmonds 2006). What is cru-
cial at Beta_Space is that the audience is

expected to provide feedback to assist the research happening in the same space.
This action, this participation becomes the median by which the work is mea-
sured. (ibid.)

LLs provide a framework to address the ongoing question of how artistic practice is
reshaped to suit the adoption of digital technology and scientific procedures. However,
this paper takes as a starting point the contention that the application of the LL as a cu-
ratorial strategy contains a strong political potential which has not been fully explored
yet. Pallot considers the potential of LLs in terms of citizen-government partnership and
2. F
 or a detailed survey of LLs in mentions a series of examples2 at the level of local authorities where it has been applied
the public sector see
as a model for regional development to facilitate the citizens understanding of various
www.openlivinglabs.eu
issues in their environment and test possible solutions (2006). Given the value of LLs as
a means of participatory co-planning, territorial self-governance and citizen ownership,
even though still at an experimental stage, a richer perspective can be envisioned also for
contemporary art. Therefore its application should not be limited to the evaluation of in-
teractive art but extended to a wider area of interest. One of the most problematic aspects
in translating the user-centred model in innovation into artistic practice is the difficult
coupling of user and audience. For this reason, LLs will be put in relation to creative plat-
forms such as experimental art schools and media labs, especially in association with
their contribution to the free culture and open source movement. We will show how this
can provide a fertile model for future applications and will allow us to put an emphasis
on learning as a vector for creativity, social interaction and collaboration.

2. Media lab histories, free culture and innovation

By suggesting a collective-action model in innovation, von Hippel's theories had an impact on the free/open source software (FLOSS) and Free Culture (Lessig 2004) movements. He
directly addressed the question of open source software as a mixed private-collaborative
strategy (von Hippel and von Krogh 2003). In Democratizing Innovation he explained how
technology enabled users to initiate communities of innovators and why it is profitable
to share intellectual commons freely (von Hippel 2005). Furthermore, open source soft-
ware is an easier platform for the customisation, reinterpretation and adjustments which are typical of creative production than tools protected by intellectual property (National Research Council 2003, 4). Innovation is in fact one of the key arguments adopted in Lessig's advocacy of a free distribution of cultural content (2004, 184). Criticisms of this position, however, come not only from copyright advocates, but also from those
concerned with the dangers of free and anonymous labor: in his book You Are Not A Gadget
(2010), Jaron Lanier warns that free culture may lead to the exploitation, rather than the
empowerment, of small producers. What is also relevant is FLOSS's major role in promoting free and broad access to knowledge and enhancing peer-led models for production and education.3 This collaborative approach has proved essential for the growth of media labs across Europe. In a recent article commissioned by the Arts Council, Charlotte Frost stresses their contribution to Open Source culture and also provides a basic definition of media labs, described as

spaces - mostly physical but sometimes also virtual - for sharing technological resources like computers, software and even perhaps highly expensive 3D printers; offering training; and supporting the types of collaborative research that do not easily reside elsewhere (2012).

3. The debate about free access to academic publishing is particularly relevant here to show how research can be affected by a limited availability of publications.

This definition is helpful to understand certain continuities between media labs and
LLs: the idea of artist as innovator or lead user (not just applying existing technologies
to creative purposes, but developing media and applications in close collaboration with
scientists and technologists) is an essential premise with which to speculate on the role of
the audience itself as innovator. However, it is interesting to notice how Frost's definition
does not envisage a program explicitly open to the general public. In media labs there is
no audience: all participants are users and tend to form communities clustered around
specific projects, rather than opening doors to occasional visitors. Frost (ibid) outlines
a succinct account of media labs in the UK from the Nineties onwards culminating in
their recent incarnation of the hacklab. However, the history of the productive synthesis
of practices, resources and methodologies between science, art and technology is a more
complex and long-lived one. Michael Century (1999) provides a compelling insight into this
matter adopting the definition of studio-lab, which significantly emphasises the merging
of artistic and scientific research spaces. Century's report describes the gradually inten-
sified communication between the scientific and humanistic sectors leading to hybrid
institutions where media technologies are designed and developed in co-evolution with
their creative application (ibid). Century traces the roots of this development back to the early 20th century avant-gardes and especially the Bauhaus, characterised by

a strongly applied socio-technical project to shape the quality of mass reproduced designs with all the imaginative resources of the contemporary creative spectrum (ibid).

Subsequently, Century identifies the following three phases in the historical evolu-
tion towards the studio-lab. 1) Art centres created during the 1960s and 1970s to support
the artistic experimentation of emerging technologies. For instance: E.A.T. (Experiments
in Art and Technology), IRCAM (Institut de Recherche et Coordination en Acoustique et
Musique) and the Centre for Advanced Visual Studies at MIT. 2) Media centres interested
in research but also in engaging the public with festivals and exhibitions, appeared in the
1980s and 1990s (ZKM and NTT InterCommunication Centre). 3) Studio-labs created in the
1990s and based on strong partnerships with the industry or higher education. Examples
are the MIT Media Laboratory, Xerox Parc PAIR artist in residence program, and the Banff
Centre. This history demonstrates how the relationship between engineers and artists
goes far beyond that of provider and consumer of technology, to become a flexible and
thoughtful collaboration in which the roles of software designer and user are not rigidly
distinguished (National Research Council 2002, 3). Studio-labs have been informed by
hacker culture and its preference for the open source ethos, and have a strong tendency
towards teamwork and interdisciplinarity. Not only does innovation become embedded
in cooperative practices, but it precisely aims to address social needs (Frost 2012). What
appears crucially reinforced in the last generation of media labs is the effort to engage a
larger community outside their peer circle, and especially marginalised groups, not with
an exhibiting program but with an open door approach, involving all participants in the
maintenance of the space and its resources, offering opportunities for inclusion and
learning-by-making, community-oriented projects, internet access, tuition on software
packages and professional training for unemployed people. Learning tends to happen in
informal ways, often through direct application to creative production: once a media lab
participant has learnt how to do something, they should pass this knowledge on. (Frost
2012). To illustrate this emphasis on social empowerment Frost provides the example
of the Zero Dollar Laptop project (a collaboration between Access Space and Furtherfield
2009): a series of workshops to teach homeless people how to build and maintain a laptop
created using recycled, donated hardware and open source software. This preference for
recycled technology is not just a money-saving solution, but a way to disseminate the
potential of creativity in re-using things and the importance of accessibility. Frost goes on to stress the importance of media labs in addressing the special needs of digital art,
which often does not find an ideal context in traditional gallery spaces. The difficulties in
exhibiting digital art have been widely debated (Dietz 2003, Paul 2008, Graham and Cook
2010) and lie, in part, in its process-oriented nature. Paul identifies a number of issues in-
herent to the display of digital art, including the requirement of a certain familiarity with
the interface, an extended viewing period, a strong dependency on the context and partic-
ipatory and non-linear qualities. She also tries to outline what an ideal setting would be:

New media art seems to call for a distributed, living information space that is
open to artistic interference - a space for exchange, collaborative creation, and
presentation that is transparent and flexible (Paul 2006, 85).

Media labs offer the artists a platform to work, test, develop a process but do not re-
quire them to show a final product. This also made the role of media labs complementary
to that of the gallery, sometimes resulting in fruitful collaborations between the world
of contemporary art and that of digital media4. If we take the blurring of boundaries between production and exhibiting site as a defining feature of the LL, we see how strong its continuity with the media labs is. However, media labs' partnerships are not limited to art organisations, as they are frequently affiliated with, supported or hosted by educational institutions or universities. To sum up, what LLs can draw from the experience of media labs could be in the first instance a more concrete idea of their public. LLs need to address and nurture communities around specific projects. Community is defined here as any temporary collectivity built around a shared site of co-creation and common interests.

4. Frost gives the example of Folly's collaboration with the Harris Museum and Art Gallery in a project involving the exhibition and acquisition of digital artworks. This is happening despite a certain historic antagonism between new media and mainstream contemporary art, a question recently tackled by Claire Bishop in an article in Artforum (2012).
Media labs also suggest a range of structural solutions: partnerships with the University
and art organisations, networks of labs, online and offline presence, are all viable possibil-
ities for the LL to pursue. Finally, rather than limiting the involvement of the public in the
evaluation process, workshops and training activities introduce participants to the use of
tools which can trigger further creative production and dissemination, and that suggests
a shifting aesthetic paradigm. Open-ended pieces, subject to further modifications, would
be preferred to static artifacts. For instance, the possibilities offered by code (live coding,
web scrapers, data visualization, rapid prototyping) tend to engender further re-writings
and enable production by others, turning these creative languages into living organisms.

3. Experimental Art Schools

The emphasis that the Bauhaus put on the potential of creativity to encourage social
change explains its influential role in shaping the imagination around the idea of the
art school. It was mentioned earlier how Century considered the Bauhaus as a source of
inspiration for the development of studio-labs. The institution founded by Walter Gropius
is also claimed as a model for a number of experimental art schools that contributed to what became popular from the mid-Nineties under the name of the educational turn in contemporary art (O'Neill and Wilson 2010). This definition has worked as an umbrel-
la term to classify a series of heterogeneous experiences associated with the adoption
of formats and methodologies typical of educational infrastructure (seminars, classes,
courses, research trips, workshops, lectures) within curatorial or artistic practice. This
turned the exhibiting space into a site for discourse, but also expanded curatorial practice
to alternative sites, outside the traditional gallery. The School of Missing Studies (n.d.),
for instance has a specific focus on architecture and urban studies, and its most famous
project was the Lost Highway Expedition in 2006, located literally on the road:

A multitude of individuals, groups and institutions will form a massive intelligent swarm that would move roughly along the unfinished Highway of
Brotherhood and Unity in the former Yugoslavia. The road was made in [the]
Sixties in the massive voluntary campaign of the peoples of all nationalities that
constituted Yugoslavia. The expedition is meant to generate new projects, new
art works, new networks, new architecture and new politics based on experience
and knowledge found along the highway.

Expanded academia, artist as researcher, seminar as exhibition: the interpretation of the educational turn vacillates between two poles. On one side it could be considered as a further declination of the wider trend of art as encounter (Dave Beech 2010, 48), that
refers to a repertoire including relational and dialogical practices curtailing the role of
the public as viewer and turning it into a user. On the other side, it can be cast in a more
specific light as a reaction against the educational institution, which, with the introduc-
tion of the Bologna Accords of 1999, has been criticised for standardising and corporatising
the entire Higher Education system within the European Union. More recently, the Arts
Against Cuts movement reinvigorated similar antagonisms in the UK. This criticism is
also addressed at the hierarchies traditionally informing the passing of a pre-determined
set of knowledge on to coming generations. Experimental schools were conceived as a
way to undermine an idea of pedagogy as discipline and encourage instead an education-
al practice driven by emancipatory and liberative forces (Freire 1972; Rancière 1991). The
association between knowledge and power is a well-established one that acquired new
complexity with the rise of the so-called knowledge economy. The question of immaterial
labour (Lazzarato 1996) is having a deep and multifaceted impact on the art world which
would take too long to analyse here. We can however say that the financialisation of in-
tellectual practices nurtured a desire for opportunities of knowledge production outside
the logic of profit. A case in point is the Copenhagen Free University. The house of its
founders Henriette Heise and Jakob Jakobsen became a public space in which one could
research archival material, take part in debates, present artworks or screen films. The
following excerpt from the project website suggests how crucial the idea of performing
education in a living environment is:

Seeing how education and research were being subsumed into an industry
structured by a corporate way of thinking, we intended to bring the idea of the
university back to life. By life, we mean the messy life people live within the
contradictions of capitalism. We wanted to reconnect knowledge production, learning and skill sharing to the everyday within a self-organised institutional framework of a free university. (Heise and Jakobsen 2007)

5. Invited to curate Manifesta 6, Vidokle envisioned it as an art school in Nicosia, Cyprus. The project failed due to the political contrasts between the Greek and Turkish population but it was successively realised in Berlin under the name of Unitednationsplaza (www.unitednationsplaza.org/).

Further motivations for artists and curators to explore the dimension of learning are to be found in what we could define as the biennial fatigue. As Anton Vidokle5 points out, the exhibition might not necessarily be the most effective way to deliver an art
aiming to engage and transform society, rather than simply present itself as a symbol-
ic gesture. Large scale international exhibitions have become a trite reiteration of the
same standardised formula, very often showing the same pieces by the same artists
(2010). Additionally, Vidokle's fundamental belief6 that art schools do not primarily teach but create the precondition for creative work (Vidokle 2006) raises questions about the self-reliance of contexts.

6. After Walter Gropius' famous claim that art cannot be taught (1919).

Jan Verwoert warns about the risk of thinking that creating a
self-reliance of contexts. Jan Verwoert warns about the risk of thinking that creating a
platform is a self-sufficient strategy, without much concern for the content, reduced to a
semi-disposable filling for the format (Verwoert 2010, 26). The idea of adopting education
as a medium implies troublesome questions. How to balance the needs of learners with
aesthetical requisites? How to avoid forms of exploitation (towards the students) for the
sake of art? Piero Golia, co-founder in 2005 with Eric Wesley of The Mountain School of
Art, operated out of a bar in Los Angeles, radicalises this point:

I dont think a school is part of an art practice, I think thats where the confu-
sion is. I think some people misunderstood and wanted to play education as a
medium because they noticed it was successful for others. But education is not
a media, its education. Its just for the students and not for educators/artists
personal research. (Golia 2010)

We can consider under this rather functional perspective also The University of
Openness, founded by Saul Albert as an experiment in the self-provision of a collabora-
tive research infrastructure (Albert n.d.). This is a case in point to trace back to our dis-
course on media labs, free culture and open source and to demonstrate how the idea of
collaborative learning is productively intertwined with the creative applications of media
technologies. Or, to slightly rephrase it, this clarifies the importance of digital and net-
working technologies in facilitating alternative and independent forms of education. The
University of Openness was devoted to researchers interested in the possibilities offered
by Unix to art production. It was structured in weekly sessions at Limehouse Town Hall
but the community grew significantly when resources were made available and shared
through those platforms emerging as the favoured sites for collaborative work for geeks
and media practitioners: wikis, mailing lists, blogs, IRC. Despite such a heterogeneous
collage of experiences, some commonalities among experimental art schools prove useful
in understanding where LLs can go. The idea of learning as a structure for inclusion and
access is combined with a rethinking of the dialectic between exhibiting space and sites
for dialogical practices. By removing the gap between production and discussion, and
encouraging questioning rather than aiming at the achievement of an expertise, these
models of education empower the community by transferring responsibility to all partic-
ipants of carrying out the project and filling the platform with content. LLs can be envi-
sioned as self-organising systems where the transmission and production of knowledge
are intertwined and not dramatically separated as in traditional schools. Even though
we can only consider labs in a complementary role in the broad educational system, they
are indicators of deep transformations in the way we tend to organise knowledge. The
relationship between humanistic and scientific areas of research, developed in relation to
digital culture, is in fact a symptom of the inadequacy of the traditional discipline-based
educational practices, and calls for a rethinking of the system towards a project-based
approach. This obviously demands a great amount of time and commitment, but it pays
back with a sense of shared ownership towards the outcomes of the project itself. This
is also made possible by subverting the traditional separation between artist, curator and
audience: a certain degree of criticism towards the institution, its hierarchies and power
structures is ascribable to most of the experiences we took into account. The emergence
of new curatorial strategies, new institutional configurations and new models of rep-
resentation comes together with a new conception of art and its public. Curator Simon
Sheikh talks about a fundamentally fragmented public sphere and investigates how to
construct participatory models of spectatorship as opposed to modernist generalised ones.
The erosion of nation states and the process of globalisation played an important role in
this shift, since the public realm can no longer be associated with a location, but rather
with networks, groups or subgroups (Sheikh 2004). A plurality of more or less special-
ised publics means not only that the traditional divide between cultural providers and
cultural receivers is less and less substantial, but also that curators should stop treating
the audience as endowed with an equal, neutral background. Rather, everyone can bring
their own specific knowledge and share it with the participants in a given project. This
has important consequences in terms of the sustainability of the LL, suggesting forms
of gift economy and exchange where large financial resources would otherwise have been indispensable.
Additionally, the performed character of most experimental art schools indicates a
drive towards liveness, conceived as both the re-creation of a context mimicking every-
day life situations and concerns, and the live dimension of the presented projects, expe-
rienced in their own making. An interesting perspective for LL would be to set up a situ-
ation that works on the double level of real life and symbol, assembly and performance,
specific setting and archetype. From a curatorial perspective, liveness also establishes
a new autonomy for art practice, by avoiding the usual displacement of the artwork in
the space and time of the exhibition (and letting it inhabit, instead, the space and time
of its own creation).

4.Conclusions

This study addressed a range of issues involving media labs, experimental educational
practices and the FLOSS movement. The latter contributed to the delivery of forms of
self-education and to the digitalisation of educational resources into open-source packag-
es available to everyone (Roush 2011). One of the key arguments to support FLOSS is that of
innovation (the free circulation of cultural content is not an impairing force in the mar-
ket but rather a propulsive one). We have discussed the relationship between innovation
and user-centred approaches first in business research, then in computer science and
finally as applied to curatorial and artistic practice. We have also emphasised the role of
digital technologies in facilitating a democratisation of innovation by enabling more and
more people to access resources and skills to creatively reuse those already in circulation.
This culture of sharing and collaborative co-creation is typical of media labs. By tracing a
history of the different incarnations of media lab we identified relevant commonalities
with the still open-ended concept of LL and key features of its possible future trajectories:
a) there is no such thing as a general audience, but rather temporary project-oriented
communities (with a potential in terms of sustainability); b) partnerships with research
or art organisations can contribute at different levels (including financial support, partic-
ipation in large research projects, outreach); c) the program is focused on workshops and
other activities encouraging an exchange of knowledge and skills that can trigger further
creative production, able to enter into an active life beyond its initial implementation (for
instance coding). Experimental art schools are also imbricated in the FLOSS movement as
models for collaboration and self-regulation (Roush 2011). They developed as a response
to a series of crises: of the audience, the public, the exhibition, the educational institu-
tion (and against the monetarisation of knowledge typical of the new economies). The
attempt to reintegrate the putative inclusive role of education is enhanced by the effort
to disrupt a set of hierarchies and power relationships traditionally associated with a top
down transmission of knowledge where expertise is intended as authority. LLs emerge
from this discussion as possible sites for the transfer of responsibilities from the usual
cultural gatekeepers to the public. This leads us to consider creative practice as a space
where people can think about how to fit in society and raises questions for possible fu-
ture research around the role of the LL as an environment in which to experiment with
new forms of governance and production. If involvement in creative projects can be an
emancipatory force, supported by the feeling of giving a contribution to the collectivity,
how can it be put in relation with ideas of DIY and gift economies, equality, autonomy
and self-governance? How can we bypass the spasmodic utopian flavor of community
ethos which might be applicable, after all, only on the small scale? The risk embedded in this approach lies precisely in making the public interest a guiding principle. The point will be to understand where the shift between merely gathering people together around some digitally-enabled bricolage and actually engaging them takes place. In the
context of LLs, liveness invokes responsibility and choice, but also performance and rep-
resentation: an effort towards the synthesis of the contingency of a specific situation and
the staging of the symbolic.

Acknowledgments. This paper is part of a research project on the subject of Living Labs
supervised by Dr. Brigitta Zics and Professor Mike Stubbs. I would also like to acknowledge
the support of the Arts and Humanities Research Council.

References

Century, Michael. Pathways to Innovation in Digital Culture. Montreal: Centre for Research on Canadian Cultural Industries and Institutions/Next Century Consultants, 1999.
Dietz, Steve. Interfacing the Digital. Paper presented at Museum and the Web 2003.
Frost, Charlotte. Media Lab Culture in the UK. Last modified August 28, 2012.
http://www.furtherfield.org/features/articles/media-lab-culture-uk.
Furtherfield. Zero Dollar Laptop. Last modified December 29, 2009.
http://www.furtherfield.org/zerodollarlaptop/?page_id=2.
Greenbaum, Joan and Kyng, Morten, eds. Design at Work: Cooperative Design of
Computer Systems. Hillsdale: Lawrence Erlbaum Associates, 1991.
Lanier, Jaron. You Are Not a Gadget: a Manifesto. New York: Knopf Press, 2010.
Lazzarato, Maurizio. Immaterial Labor, in Radical Thought in Italy: A Potential
Politics. Edited by Paolo Virno and Michael Hardt, Minneapolis: University of
Minnesota Press, 1996.
Lessig, Lawrence. Free Culture. New York: Penguin, 2004.
Muller, Lizzie and Edmonds, Ernest. Living Laboratories: Making and Curating
Interactive Art. Paper presented at Siggraph, Boston, MA, July 30 - August 3, 2006.
National Research Council. Beyond Productivity: Information, Technology, Innovation,
and Creativity. Washington, DC: The National Academies Press, 2003.
Norman, Donald. Emotional Design: Why We Love (or Hate) Everyday Things. New York:
Basic Books, 2005.
Paul, Christiane. The Myth of Immateriality: Presenting and Preserving New Media,
in Media Art Histories, edited by Oliver Grau, Cambridge, MA: The MIT Press, 2006.
. Flexible Contexts, Democratic Filtering and Computer-Aided Curating, in
Curating Immateriality: The Work of the Curator in the Age of Network Systems. New
York: Autonomedia, 2006.
. New Media in the White Cube and Beyond: Curatorial Models for Digital Art.
Berkeley: University of California Press, 2008.
Pallot, Marc. What is a Living Lab? Last modified on 15 May 2006 http://www.ami-
communities.eu/drupal/node/28.
von Hippel, Eric. Lead Users: A Source of Novel Product Concepts. Management Science no. 7: 791-805, July 1986.
. Democratizing Innovation. Cambridge, MA: The MIT Press, 2005.
von Hippel, Eric and von Krogh, Georg. Open Source Software and the Private-
Collective Innovation Model. Organization Science no. 2, March/April 2003. doi:10.1287/orsc.14.2.209.14992.

On the Notion of Code Convergence in Vilém Flusser's Work

Rainer Guldin
guldinr@usi.ch
Flusser-Studies, Università della Svizzera italiana

Keywords: Technical Images, Photography, Computer, Re/Translation, Einbildungskraft, Gesamtkunstwerk.

Abstract: In the course of the 1970s and 1980s Vilém Flusser formulated the theoretical
vision of a general convergence of different diverging aspects of modern society. According
to him, this was made possible thanks to the latest technological developments: the in-
vention of technical images, through photography and film, as well as the creation of
new calculated digital images emerging from computer monitors. This notion of a final
fusion is based on Flusser's own daily translation and retranslation practice and the
theoretical vision he associated with this.

Ah love! Could thou and I with fate conspire,
to grasp this sorry scheme of things entire,
would not we shatter it to bits - and then,
remold it nearer to the heart's desire?
Edward Fitzgerald, The Rubáiyát of Omar Khayyám

In my talk I would like to focus on the notion of code convergence in Vilém Flusser's
work. Even if he used the terms medium and mediation throughout his oeuvre he never
developed a media theory proper, probably also to distance himself from the likes of
Marshall McLuhan. Instead of media, Flusser speaks of discursive and dialogical communication structures - theaters, pyramids, trees, amphitheaters, circles and nets - and
of codes, images, texts and technical images. His vision of a final fusion in the digitally
calculated technical sounding images, as he developed it in Into the Universe of Technical
Images first published in German in 1985 can, therefore, strictly speaking, not be described
in terms of multimediality only: a significant theoretical difference that would have to
be explored further.
The idea of a final fusion, a synthesis of the different codes, the senses associated
with them and the body parts that go with this is slowly developed in a series of texts in
the course of the 1980s. Already in Mutation in Human Relations? however, written be-
tween 1977 and 1978, Flusser develops a loose narrative moving from one communication
structure to another and envisaging a sort of final convergence which he calls synchroni-
zation. Each step is motivated by a structural weakness which the following structure is
supposed to do away with, creating, however, a new problem calling for further changes.
This dialectics of mediation is also at work in the code progression described in the works
of the 1980s. The move from theater to pyramid to tree to amphitheatre, furthermore, ad-
umbrates the later passage from image to text to technical image described in Towards
a Philosophy of Photography.
A first version of the notion of final synthesis can be found in Towards a Philosophy of
Photography first published in German in 1983, but subsequently translated into English
and republished in 1984. In this text Flusser develops a history of media based on a series
of processes of translation and retranslation. Flusser defines three interconnected codes
each defining a specific universeimages, texts and technical imagesand develops a
history of media evolution based on a series of processes of translation and retranslation.
In a Lexicon of basic concepts at the end of the book, translating is defined as a move
from code to code, a jump from one universe into another. (Flusser, 1984:61) The first
step in this evolutionary process based on an alternation of images and texts consists in
the creation of significant surfaces whose function is to make the world imaginable by
abstracting it. These surfaces were meant to be mediations between man and world, but
tended to hide the world by slowly absorbing and substituting it. The world becomes im-
age-like (...). This reversal of the function of images may be called idolatry (...). (Flusser,
1984:7) To counteract this tendency, texts were invented. Their aim was to break up the
hallucinatory relationship of man to image and to criticize imagination by recalling its
original intention.

Some men (...) attempted to destroy the screen in order to open the way to the
world again. Their method was to tear the image elements out from the surface
and to align them. They invented linear writing. In doing so, they transcoded the
circular time of magic into the linear time of history. (Flusser, 1984:7)

History, thus, can be defined as the progressive translation of ideas into concepts
(Flusser, 1984:60), of images into texts.
The dialectics of mediation at work in the passage from the first to the second step
of evolution, however, leads to a second impasse.

The purpose of writing is to mediate between man and his images, to explain
them. In doing so, texts interpose themselves between man and image: they
hide the world from man instead of making it transparent for him. (...) Texts
grow unimaginable, and man lives as a function of his texts. A textolatry occurs,
which is just as hallucinatory as idolatry. (Flusser, 1984:9)

The same way the prehistoric phase of images was overtaken by a historical phase
of texts, posthistory takes over from history and by inventing technical images attempts
to make texts imaginable again. By doing this, posthistory bends the progressive linear
development of translation from images into texts back to its origins and beyond. Flusser
describes it as a re-translation of concepts into ideas (Flusser, 1984: 61), that is, of texts
into technical images. Technical images differ from traditional images in that the two are
the results of dissimilar processes of translation. Traditional images have real situations
as their source; technical images, on the other hand, start out from texts, which in turn
have been written in order to break up images through translation.
Flusser's history of media evolution as translation and retranslation has its origin in his vision of translation which he developed in the 1960s. Flusser's writing practice
consisted in translating each text into another language rather than just rewriting it in
the same language. This text was in turn translated into another language. Flusser used
four different languages altogether: German, Portuguese, English and French. These
processes of multiple successive translations were generally ended by retranslating the
last version into the language of the first text, thus turning a straight line into a circle. This
final text, a palimpsest of sorts, in a way, contained all other previous texts the same way
that the technical image contains texts containing images. The following description of
a translation process holds true also for the code progression described above. When we
translate an English text into a French one, or an image into a text, one code feeds on
the other: the French text, the meta-code, or the target language, swallows the English
one, the object-code, or the source text.

In the case of retranslation the original relationship of the two codes is reversed:
the object-code becomes now a meta-code. In other words: after the French code
has swallowed part of the (...) English one, he is in turn swallowed by the English code, (...) so to speak with the English in his belly. (Flusser, 1996:343)

Technical images are transcodings of texts that have ingested images. This is the first
aspect of Flussers idea of code convergence. But there is more to it.
In Into the Universe of Technical Images - first published in German in 1985 as Ins Universum der technischen Bilder - Flusser amplifies his early concept of code convergence by adding numbers and sounds. In the chapter Chamber Music, Flusser uses "to compose" and "to compute" as synonyms, bringing the world of music, mathematics and technical images together.

The world of music is a composed universe. (...) We don't need to wait for electronic music to recognize this quality about music. The universe of music is as
calculated and computed as that of technical images. (Flusser 2011:164)

Contrary to music, the universe of technical images is a two-dimensional universe of surfaces, but like the musical universe and contrary to that of traditional images

It is a pure universe, free of any semantic dimension. Technical images are pure
art in the same sense that music alone once was. (...) Since the beginning of computing, technical images have rushed spontaneously to sound, and from sound spontaneously to images, binding them. (Flusser 2011: 164-5)

Flusser does not explain the reason for this reciprocal tendency of images and sounds
to fuse into one, but defines this inclination as a characteristic of both pretechnical imag-
es and pretechnical music. The technical image is the first instance of music becoming
an image and an image becoming music. (Flusser 2011: 165)
This synthesizing fusion, however, is not to be understood as a simple juxtaposition
of the visual and the acoustic. What Flusser intends is a complete reciprocal penetration
and fusion of the two codes creating something radically new, unheard-of and unseen so
far. This is made possible by computing which breaks down sound and sight into small
bits and reassembles them again into a new coherent form.
An example that aptly sums up Flusser's position - but unfortunately without the acoustic dimension - can be found in the work of Nancy Burson, to whom Flusser dedicated a short essay published in 1987. Flusser starts out with one of his favorite quotations, two verses from the Rubaijat of the Persian poet Omar-i-Chajjam: We shatter it to bits, and then remold it nearer to the heart's desire. (Flusser 1998:146) Expressed in less
poetic terms, continues Flusser, we calculate the world in order to compute it. (Flusser
1998:146) [my translation RG] Flusser uses the English word bits in a double sense: in
the general sense of bits and pieces and in the more restricted sense of binary digit, the
basic units of information theory. We shatter the world to bits in order to recreate it ac-
cording to our own wishes. We project new composite realities. Nancy Burson does the
same. She creates chimeras through photography. Her chimeras, however, are not like
the traditional ones from Greek mythology: a lion with the head of a goat arising from
its back and a tail ending in a snakes head. Her pictures are not assembled like a collage,
through simple juxtaposition. The mythical chimera was composed from different het-
erogeneous elements. If Bellerophon, instead of fighting it, so again Flusser, had kicked it up its backside, the lion's head would have tumbled to the right and the snake tail to the left. This would not be possible with Burson's chimeras. Her portraits of politicians - combining Hitler, Stalin and Mussolini into a single face - and her ironical composite female beauties - a cocktail mixed out of Audrey Hepburn, Bette Davis, Grace Kelly, Sophia Loren and Marilyn Monroe - are based on computer programs that work according to a specific algorithm.
self-contained independent phenomena. (Flusser 1998:146) [my translation RG]
Neither the concept of the audio-visual nor the existence of electronic intermixers
that translate images into sounds or sounds into images, corresponds to the new level of
integration that has become possible with the invention of calculated technical images.

In a sounding image, the image does not mix with music; rather both are raised to a new level (...) Contemporary approaches to making music pictorial and pictures musical have had a long preparation. They can be seen, for example, in so-called abstract painting and in the scores of newer musical compositions. (...) so-called computer art is moving toward sounding images and visible sound. (Flusser 2011:165-6)

As Flusser points out, this trend can be detected in all synthetic images even those
that present themselves as scientific or political documents rather than art. (Flusser
2011:166) This anticipates the third and last aspect of the notion of convergence I am dis-
cussing here. I will come to it shortly.
The technical images finally manage to get rid of their earlier representative charac-
ter of images and to become pure art, the same way music always was: immaterial and
without an object to refer to.

But only synthesized images are really conceived musically and made musical
with visualizing power. It will be pointless to try to distinguish between music
and so-called visual arts because everyone will be a composer, will make images.
The universe of technical images can be seen as a universe of musical vision. (...)
Once they have both become electronic, visual and acoustic technologies will no
longer be separable. (Flusser 2011:165)

Unfortunately the English translation does not quite reproduce the idea Flusser
is trying to express here. For conceive and vision Flusser uses einbilden and
Einbildungskraft, linking thus the word image, Bild to the new technical possibili-
ty of computation, Einbildung, and calling this new form of technical imagination
Einbildungskraft in order to separate it from earlier forms of imagination. In the Ger-
man original, stressing the two-way thrust of his argumentation, moving from image to
sound and back, he writes: erst bei synthetischen Bildern wird tatsächlich musikalisch eingebildet und mit Einbildungskraft musiziert. Another play on words takes place with the use of synthesis, synthetic, synthesize and synthesized, in German Synthese, synthetisch, synthetisieren and synthetisiert, linking the early vision of a final synthesis
through multiple translation to the new vision of synthetic sounding technical images.
Flusser ends his description with a reference to German Romanticism, however, with
a rationalist twist. The new general convergence is not about mysticism, but the collective
projection of a world that is completely man-made and therefore concrete: an utterly
fictitious world in which to live with complete self-consciousness.

I think this new aspect can be grasped at its tip in the dreamlike quality of
the emerging image world. It is a dream world in which the dreamers seem
exceptionally alert, however, for to press the buttons that produce pictures, the
dreamer needs to calculate and compute clear and distinct concepts. It is a dream
world, then, that does not lie below waking consciousness but above it, conscious
and consciously constructed, a hyperconscious dream world. It will therefore be
pointless to try to interpret dreams: they will mean nothing beyond themselves,
and they will be tangible - a world of pure art, of play for its own sake. Ludus imaginis (...) as ludus tonalis (...) and the emerging consciousness of the power
to imagine as that of homo ludens. (Flusser 2011:166)

The same way that music does not refer to any specific object, technical images are
concrete dreams that do not refer to any reality but to themselves. I would now like to
conclude with the third aspect of the notion of convergence.
In The Photograph as Post-Industrial Object: An Essay on the Ontological Standing of
Photographs published in Leonardo in 1986 Flusser sums up his idea of an encompassing
cultural convergence directly stemming from technological evolution: the meeting and
fusion of the natural sciences and the humanities, art and science, imagination and
precision. In the following passage Flusser, furthermore, links this evolution to the work
of Leonardo da Vinci and the notion of Gesamtkunstwerk as it appears in the music of
Richard Wagner.

Ever since the fifteenth century occidental civilization has suffered from the
divorce into two cultures: science and its techniques - the true and the good for something - on the one hand; the arts - beauty - on the other. This is a
pernicious distinction. Every scientific proposition and every technical gadget
has an aesthetic quality, just as every work of art has an epistemological and
political quality. More significantly, there is no basic distinction between sci-
entific and artistic research: both are fictions in the quest of truth (scientific
hypotheses being fictions). Electromagnetized images do away with this divorce
because they are the result of science and are at the service of the imagination.
They are what Leonardo da Vinci used to call fantasia essata. A synthetic image
of a fractal equation is both a work of art and a model for knowledge. Thus the
new photo not only does away with the traditional classification of the various
arts (it is painting, music, literature, dance and theatre all rolled into one), but it
also does away with the distinction between the two cultures (it is both art and
science). It renders possible a total art Wagner never dreamt of. (Flusser 1986:331)

To sum it up: the global encompassing convergence Flusser is envisaging is a synthesis of several diverging aspects. Not only mathematics and music merge, but also the West and the East; art and science - that were separated in the Renaissance - are joined again; science, art and politics - that were divided in the course of a more and more positivistic and factual 19th century - finally join hands again; the senses and the codes
come together, the eye, the ear and the fingertips, the visual, the acoustic and the tactile
creating a multilingual, multi-mediatic and multi-discursive Gesamtkunstwerk. All bor-
ders disappear, all simple dualisms are abolished: the border between dream and reality,
the separation between the artist and his audience, as well as that between art and life.

Bibliography

Flusser, Vilém. Towards a Philosophy of Photography. Göttingen: European Photography, 1984.
. The Photograph as Post-Industrial Object: An Essay on the Ontological Standing of Photographs. In Leonardo, 19, 4: 329-332. 1986.
. Nancy Burson: Chimären. In V. Flusser, Standpunkte. Texte zur Fotografie. Göttingen: European Photography: 146-148. 1998.
. Into the Universe of Technical Images. Minneapolis and London: University of Minnesota Press, 2011.

Short Papers
Transients: a Transit Visualization

David Bouchard
david.bouchard@ryerson.ca
Ryerson University, Toronto, Canada

Keywords: Data Visualization, Generative Art, Transit.

Abstract: Transients is a series of generative animations inspired by the notions of flow, ephemerality and transitory states. The underlying structure of these animations is a
database created using GPS data from the Toronto public transit system. The data, avail-
able on the web through the Toronto Open Data portal, includes the location, routes
and stops of every bus and streetcar in the system, as well as the arrival times of trains
within underground subway stations. Custom software created by the artist establishes
an aesthetic framework for the data to unfold within, balancing artistic and algorithmic
decisions alongside existing patterns within the data.

1.Context

This work was curated by Sharon Switzer and was developed as a specially commissioned,
site-specific installation for Pattison Onestop and Art for Commuters, in the context of
Scotiabank Nuit Blanche, an all-night contemporary art festival. Transients was exhib-
ited on over 300 information screens operated by Pattison on subway platforms across
the Toronto transit system. The work was presented without interruption, replacing the
news and advertisements otherwise typically shown on these screens. The animations
generated for this work had a total runtime of 12 hours, in order to coincide with the
duration of the event.

Fig. 1. The work running on an information screen.

2. Artist Statement

The motivation behind Transients is to look at the mundane, everyday nature of transit
activity within the city, and present this information from a different perspective than
what is typically experienced by commuters. Through animations generated by custom
software, motion patterns are slowly revealed using colorful ribbons unfolding accord-
ing to the paths taken by vehicles. The work provides an opportunity for the audience to
become aware of the behavior of the network, as well as to reflect on how they become
a part of this larger system by riding transit.
The software alters the scale of the representation over time, going from a birds eye
view to extreme close-ups on individual routes, shifting the focus between the network
as a whole (Fig 1) and the seemingly meandering motion of a single vehicle (Fig 2).

Fig. 1. A video excerpt showing motion patterns across the network (http://vimeo.com/57697210).

By definition, the term transient refers to the commuters; the people in motion, the
temporary guests, who are the primary audience for this work. However, the title also
implies the notion of ephemerality. Like an improvised performance, the motion patterns
of hundreds of vehicles across the city generate an intricate composition. The movement
is not rehearsed, yet it follows a specific structure dictated by the routes and timetables.
This dance of the trajectories exists in the moment, and as such can only be perceived
when captured and represented by a system such as Transients.

Fig. 2. An example of a close-up, shifting the focus on the motion of individual vehicles
(http://www.vimeo.com/57697214).

This work is positioned as a form of artistic data visualization (Viégas/Wattenberg, 2007). While it is based on actual data, its aim is not to analyze or represent, but rather to
evoke a particular emotion using the underlying data as a driving force. The map meta-
phor is used as a starting point, but is transformed (particularly at the extreme close-up
scale) to the point of not always being recognizable as such.
Another major preoccupation behind this work is an exploration of generative meth-
ods within the creative process. Generative is sometimes a contested term, but broadly
speaking can be defined as following rule-based or mathematical structures, operating
in real-time and created with an emphasis on critical concerns for the process of produc-
tion. (Cox 2002) In the case of Transients, the rules are not purely mathematical in nature,
yet the data, which informs the work, is unpredictable and subject to infinite random
variations introduced by the real world.
The approach used in creating the work (a real-time software program, as opposed
to a static rendering of the data) ensures that the work has some degree of autonomy,
introduced by variations within the external data as well as occasional elements of ran-
domness in the rules established to interpret the data. As such, it is a reflection on the
notions of artistic control and authorship. (Galanter 2003) The software establishes an
aesthetic framework for the data to unfold within; but ultimately the outcome represents
the careful balance of thoughtful algorithmic decisions alongside existing patterns within
the data itself over which the artist has no control.

3. Process

The initial step for the realization of this work was to collect and process the GPS vehicle
data offered by the citys public feed. An automated system was put in place to query and
collect the information for individual vehicles over time. The data was then compared
against known route topology to filter errors and outliers. The software also performed in-
terpolation between GPS updates in order to generate fluid animations and motion paths.
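As an illustration of this interpolation step, the sketch below estimates a vehicle's position between two timestamped GPS fixes. It is written in Python as a reading of the description above, not as the project's actual code; the field names and sample coordinates are assumptions.

```python
# Illustrative sketch: interpolating a vehicle position between two GPS updates
# to obtain a fluid motion path. Field names and values are assumptions.
from dataclasses import dataclass

@dataclass
class Fix:
    t: float    # timestamp in seconds
    lat: float
    lon: float

def interpolate(a: Fix, b: Fix, t: float):
    """Estimated (lat, lon) at time t, with a.t <= t <= b.t."""
    if b.t == a.t:
        return (a.lat, a.lon)
    u = (t - a.t) / (b.t - a.t)              # normalised progress between the fixes
    return (a.lat + u * (b.lat - a.lat), a.lon + u * (b.lon - a.lon))

# Two fixes twenty seconds apart, resampled every five seconds for animation.
a, b = Fix(0.0, 43.6532, -79.3832), Fix(20.0, 43.6570, -79.3790)
print([interpolate(a, b, t) for t in range(0, 21, 5)])
```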
A real-time visualization engine was then developed to explore, verify, and understand
how the data behaved over time. The engine included basic features such as zoom, pan,
time controls and vehicle lock-on (the latter eventually became one of the main mech-
anisms of the final piece).

Fig. 3. The transit map, viewed within the development engine (http://vimeo.com/57697208).

After experimenting with the engine, variables were selected to vary within the final
work: route number, time of day, speed factor, map scale and route color. The map itself
was removed and replaced by a randomly generated triangle mesh. Initially invisible,
triangles are revealed and tinted when touched by the path of a vehicle. The result is a
series of intertwined ribbon-like shapes, which unfold according to the vehicles' motion. The camera selects a route to follow at random, and occasionally jumps from one route to another when the paths of two vehicles intersect.
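A minimal sketch of how such a reveal can be computed, assuming the mesh is stored as vertex triples and the vehicle path is sampled as points; this is an interpretation of the description above, not the artist's engine.

```python
# Illustrative sketch: revealing and tinting mesh triangles touched by a
# vehicle's path, using a sign-based point-in-triangle test.
def point_in_triangle(p, a, b, c):
    """True if point p lies inside (or on the edge of) triangle a, b, c."""
    def sign(p1, p2, p3):
        return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def reveal(triangles, path_point, tint):
    """Make visible, and tint, every hidden triangle touched by the path point."""
    for tri in triangles:
        if not tri["visible"] and point_in_triangle(path_point, *tri["vertices"]):
            tri["visible"] = True
            tri["color"] = tint

mesh = [{"vertices": ((0, 0), (10, 0), (0, 10)), "visible": False, "color": None}]
reveal(mesh, (2, 2), (255, 120, 40))
print(mesh[0])    # the triangle is now visible and tinted
```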
The structure and impression of the transit map remains, albeit in a more abstract
form. In order to provide context to the audience, the names of subway stations were left
in their original locations as landmarks, hinting at the actual geography represented by
the ribbons. The route number currently being tracked by the engine, as well as the time of day being represented, were also included as overlays.
While the engine was capable of generating the visuals in real-time, constraints of
the display platform for the exhibition required animations to be pre-rendered and at
most 5 minutes long. As such, a script was used to create a playlist of unique short clips,
each seeded with randomized starting conditions.
Additional video excerpts from the work are available online on the projects website
at http://www.onestopallnight.com/12.

Acknowledgements: This work was made possible in part by funding from the Toronto
Arts Council.

References

Cox, Geoff. Generator: the dialectics of orderly disorder, Creativity & Cognition, pp. 45-49, 2002.
Galanter, Philip. What is generative art? Complexity theory as a context for art theory.
6th Generative Art Conference, 2003.
Viégas, Fernanda and Wattenberg, Martin. Artistic Data Visualization: Beyond Visual Analytics, Online Communities and Social Computing, pp. 182-191. Springer Berlin
Heidelberg, 2007.

Exploring Open Hardware in the Image Field

Luís Eustáquio
e@takio.net
Universidade do Porto, Portugal

Miguel Carvalhais
miguel@carvalhais.org
ID+, Faculdade de Belas Artes, Universidade do Porto, Portugal

Ricardo Lafuente
ricardo@sollec.org
Universidade do Porto, Portugal

Keywords: Electronics, Hardware, Image, Open Source, Physical Computation, Tools.

Abstract: The project documented in this article, developed under the Image Design master degree program at the University of Porto, aims to explore the production and trans-
formation of imagery through the use of open platforms for electronic prototyping and
physical computing. This field for exploration encompasses the construction, hacking
and deconstruction of electronic, analog and digital devices, both as a means for creative
research and a quest for alternatives to work processes established as de facto standards.
Practical development is focused on modifying, designing and building devices to generate
and manipulate imagery with analog and digital components. This study is framed by
the relevance of open source technologies, shared creativity and produsage models, as
well as the promotion of hardware literacy.

1. Introduction

Images are increasingly contaminated by technology, in aspects well beyond a merely functional role (Bolter and Grusin 2000, 45-50). How a certain image reaches us, how in-
timate is the channel through which we view it, can be as determining to its perception
as the visual matter itself. However, technological literacy remains focused mainly on
promoting software packages and training end users. While plural in their use, devices
are increasingly averse to being modified or repurposed by users, be it through physical
properties or legal restrictions. In view of this setting, we seek to retrieve technological
matter as part of an open creative process, as opposed to a set of defaults. A pliable tool
instead of a workplace.
In liberating oneself from predefinitions found in most media-capable devices, strate-
gies such as hardware deconstruction, repurposing and hacking can provide stimulating
paths in a search for alternatives to established workflows, framed by the relevance of
computational technologies, open source standards, shared models for creative produc-
tivity and the promotion of hardware literacy.
In this frame of mind, we set out on a practical exploration of open hardware and elec-
tronic prototyping platforms, ultimately geared towards developing operational devices
for the production and manipulation of images and sound. Developments and results are
freely available as a contribution to further work in this field and retribution to those that
have generously contributed with their knowledge and experience. As this project required
a good amount of learning about electricity, electronics, prototyping, building, testing
and debugging, it offered an opportunity to assess both its feasibility for the average lay-
man and its applicability to learning programs focused on visual communication. This
learning process also seeks to point out the benefits of libre and open-source resources,
particularly their uncompromising flexibility and adequacy to shared creativity models.
Finally, the critical reading of experiments, processes and results is an opportunity to
reflect on convergence points between images and their technology.
This convergence has deep historical roots, such as Thaddeus Cahill's Telharmonium, which gathers a set of features that make it relevant to this day. Patented in 1898,[1] predating both the Theremin and the Ondes Martenot, it is the first widely known instrument to synthesize polyphonic sounds from electricity, breaking the record-playback loop of contemporary inventions like Edison's Phonograph and rooting the idea of device-generated media. Incidentally, it also preluded streaming, as it was Cahill's intention to broadcast music to public spaces and private homes via telephone wires, on a subscription basis. Sadly, the massive infrastructure required by this invention was the main cause of its early demise.

[1] Patent document available at the United States Patent and Trademark Office website (http://patimg1.uspto.gov/.piw?docid=00580035).
In what Marshall McLuhan called an era of illumination (2008, 353), more recent
technologies like video, personal computing and digital photography were rapidly em-
braced by a thriving consumers market and a notably disruptive artistic community.
The growing ubiquity of technology-based media marked a turning point in art and
design practice, urging a more widespread thought on media and our connections to
(and through) it. The Experiments in Art and Technology, started by Robert Rauschenberg
and Billy Klüver in 1966, remain especially relevant to this topic, as they so memorably
achieved the goal of developing an effective collaborative relationship between artists and
engineers (Klüver et al. 1980). These experiments reverberate far and wide, from intersections with Nam June Paik (Wardrip-Fruin 2003, 227) to works such as Bruce Nauman's Live-Taped Video Corridor (Shanken 2009, 31) or even Roy Ascott's admonition on how dazzling effects achieved through skillfully crafted technology can replace the creation of meaning (2008, 358). More recent works, such as Hektor[2] by Jürg Lehni and Uli Franke, or Zimoun's reduced technological structures,[3] denote how researching technology for its expressive potential has kept a continued interest. This is also evidenced by well-known academic laboratories dedicated to this area of research yielding influential results, such as the Processing IDE.[4] Here we narrow our focus on the cultural influence of makers and users in technological developments, as well as the technological origins of that influence (Lister et al. 2009, 320), for if some devices or technologies cater to a perceived need or want, others are ultimately shaped by unforeseen usage.

[2] Documented at http://hektor.ch/
[3] Documented at http://www.zimoun.net/
[4] http://processing.org/

2. In the lab

A good number of electronics prototyping platforms are now widely available, allow-
ing one to assemble devices useful to this study with reasonable speed and economy.
Arduino,[5] now a staple in the maker's tool chest, was selected for its strict conformity to industry standards and open hardware definitions. Its extensive documentation and massive popularity also provide a fertile ground for exchanging knowledge and practical applications.

[5] http://arduino.cc
For a practical exploration of translations between sound and image, a working base
permeable to different formulations is needed. To serve this purpose, two complemen-
tary devices were planned: one to produce images from captured sound, the other to
reverse this flow by generating sound from captured images. This configuration allows
the devices to operate together and independently, accepting both mutual and external
stimulus. The sound component externalizes part of the machine translation process and
increases susceptibility to interference.
In a most simple description, the audio input stage uses amplified electret micro-
phones, while the output is performed by salvaged speakers. Video capture uses inex-
pensive micro cameras and image output is fed to small LCD screens. These choices
were biased toward the development of portable devices, easier to carry and use in any
location, with low production costs. Composite analog video was used, as it is less taxing
on limited microprocessors and more widely compatible with equipment salvaged from
obsolescence. Also, the use of only black and white furthers the economy of processing
resources, reinforces an aesthetic penchant for deprecated media and provides a more
focused canvas, one less prone to diversion maneuvers. Programming on Arduino micro-
controllers brings this flow together, managing analog to digital to analog conversions
and affording computational control over response, variability and operational autonomy.
A project of this nature requires a small laboratory with a few specialized tools and
basic knowledge of how to use them, such as a multimeter or a soldering iron. Also, while
many tutorials and instructional documents are readily available online, reference liter-
ature in electronics is strongly advised. Obtaining such resources and knowledge is quite
painless and inexpensive, especially when aided by a community of enthusiasts and
open laboratories, as was the case in this endeavor. Organizing development in stages,
defining tasks and intermediate goals, proved critical to progress through incremental
gratification.
The first device generates imagery based on data collected from a microphone. Sound is routed through an operational amplifier, in order to achieve adequate current values for the Arduino, where an RMS[6] algorithm is applied to the sampled data. This averaging method enables fluctuations to more closely resemble a human perception of the sound environment, favoring a more obvious correlation between cause and effect in sound and image relations. Images are generated through the TVout library[7] and output to a 3.5 inch LCD screen, usually sold as a monitor for aftermarket car reversing cameras. Designed to operate on the 12 V standard automobile power, the screen was modified to work on 5 V by performing a bypass on a voltage regulator. This enabled the entire device to be powered from USB or a 9 V battery, thus allowing its assembly on a small reused plastic box. Once a stable build was achieved, with a fully functional bridge between sound input and image output, experimentation turned to the programming of various graphic visualizations of the captured audio data sets. While not an initial requirement, this process occurs in as close to real time as the technology in use allows, with negligible[8] delay. The initial purpose of testing and verifying sound-to-image correlations was progressively skewed towards exploring possibilities afforded by the images' aesthetic properties and the device's physical features. All programs resort to strict black and white on a grid of 128 by 96 pixels and each frame reflects, in some way, the averaged volume of the sampled sound.

[6] Root mean square, i.e. the square root of the mean of the squares of a set of values.
[7] Code library for the Arduino IDE by Myles Metzler, available on Google Code (http://code.google.com/p/arduino-tvout/).
[8] Close to the minimum of 0.1 milliseconds, the time required by the Atmel 328p microprocessor to perform a reading on an input.
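To make the averaging step concrete, the sketch below computes the RMS of a window of ADC readings and scales it to a bar height on the 128 by 96 grid. It is written in Python for readability and is only an illustration of the logic described above, not the device's Arduino firmware; the window size and scaling constants are assumptions.

```python
# Illustrative sketch: RMS of a window of 10-bit ADC readings, mapped to a
# column height on the 128 x 96 pixel grid used by the device.
import math

GRID_H = 96          # vertical resolution of the output image
MIDPOINT = 512       # centre of a 10-bit ADC range (0-1023)

def rms(samples):
    """Root mean square of the window, measured around the ADC midpoint."""
    if not samples:
        return 0.0
    return math.sqrt(sum((s - MIDPOINT) ** 2 for s in samples) / len(samples))

def bar_height(samples, max_amplitude=512):
    """Averaged volume scaled to a height in pixels (0 to GRID_H)."""
    level = min(rms(samples), max_amplitude) / max_amplitude
    return int(level * GRID_H)

# A fake window of 64 readings oscillating around the midpoint.
window = [MIDPOINT + int(200 * math.sin(i / 3.0)) for i in range(64)]
print(bar_height(window))    # a height somewhere between 0 and 96
```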

Fig. 1. Prototyping and building device 1.

Fig. 2. Three examples of images produced by device 1.

In the second device, captured images are used to generate sound. The core of this device pairs an Arduino with a Video Experimenter Shield,[9] where a LM1881 integrated circuit generates 1 bit images from video frames supplied by a miniature surveillance camera. A potentiometer attached to this circuit allows the luminosity threshold to be calibrated according to the surrounding environment. A simple 8 Ohm shielded speaker with standard protection resistors, salvaged from a broken television set, completes the device. On the Arduino side, the largest continuous bright area in each frame is detected, with a minimum of 4 by 4 pixels defined for reduced response to noise and faster scanning by skipping pixels in the image analysis stage. The center coordinates of this area's bounding rectangle are then used as a basis for tone generation. Using pulse width modulation on an Arduino output pin, monophonic tones between 8 and 1024 Hz are fed to the speaker, corresponding to the 128 pixel horizontal dimension of the captured images. When the bounding rectangle fills the screen's width, the vertical coordinate is used instead. Finally, if the bounding rectangle remains centered, spanning the entire screen, the tone's frequency slopes down to 8 Hz, at which point the sound is muted for as long as the captured image remains unchanged.

[9] Arduino shield designed by Michael Krumpus and distributed by Nootropic Design (http://nootropicdesign.com/).
as the captured image remains unchanged.

Fig. 3. Building device 2.

3. In the wild

Having reached a stable version of both devices with essential programming in place,
a round of tests with a diverse group of ten subjects in different locations was carried
out, where the devices behavior and material properties were submitted to varying
approaches and interpretations. The following is a brief account of these experiences,
focused on the effects of audiovisual products, of the artifacts physical configuration and
of their computational affordances.
The first device is more widely perceived as indicative of its purpose, leading subjects
to prefer visualizations that add to what is seen as functional. Its shape, size and layout
also immediately offer clues as to how it may operate. In particular, the appearance of
a rudimentary digital camera induces a corresponding approach and expectation. The
scale of the artifact favors an introspective experience, in which the subjects interpret
the devices response as taking part in the dialogue they lead. Ambient sounds are usu-
ally the first trigger in outdoor settings. When indoors, speaking, tapping the device and
snapping fingers are the most common first interactions, comparing the results of delib-
erate actions with those caused by the surroundings. Most subjects direct their actions at
the screen in an engaged dialogue that surpasses the screens natural magnetism, much
as if it were able to accept input. This happens even after knowing the microphones loca-
tion on the device. The production of sound during the process of interaction is subjected
to the visual dynamics afforded by the visualization programs, the most popular being
those that offer longer resistance to predictability.
The second device imposes upon the user a more exploratory approach, as it instills a
sense of doubt and uncertainty, more evident in subjects less acquainted with experimen-
tal devices and technology in general. Curiously enough, most subjects felt motivated by
this challenge and were keen to decipher the device. Neither its purpose nor the causality
of its operation are self-explanatory, and the inclusion of a screen for monitoring the im-
age being captured proved very helpful to this understanding. Once the screen is activated,
the image to sound correlation is more evident and the device becomes an instrument,
allowing a more analytical experience. The expressive potential triggered by this muta-
tion sometimes borders on the performative, with subjects moving spontaneously and
reading surfaces with the device. The limited tone range encourages a search for patterns
and rhythms, as subjects try to master the machines behavior. In many instances the
generated sound becomes somewhat separate from the device itself as it is more closely
linked to what the camera sees, thus turning the device into a prosthetic mediator, un-
noticed until interest is exhausted.

Fig. 4. Devices under exploratory usage testing.

Mutual interference between devices was the last stage of each experience, in which
subjects restarted the process of improvising activities, exploring features and evaluating
response to expectations (Ribas 2011, 226). Where previously the visual components were
the primary focus for the majority of subjects, sound production became the main point of
interest when using devices together. With few exceptions, subjects mostly held the first
device as a trigger for the second, as if sliding a bow on a string, exploring the potential
of the first devices visualizations as output to the second devices input. Naturally, the
opposite would take place simultaneously, but that part of the process was overcome by
the inversion of the first devices usage: now the subjects pointed the screen away from
them, it was no longer an intimate collocutor but a playful proxy.

4.Considerations

These brief observations summarize the expressive potential observed in improvised experimentations, so long as devices were able to provide a path from cluelessness to instrumental mastery, a balance of predictability and surprise, and a graceful incorporation of glitches.[10] It became clear that physical properties afford the artifacts expressive qualities even before their use, adding layers of complexity to the interpretation of their experiences and results, while raising additional questions as to what might change with each possible reconfiguration. Computational properties are particularly relevant to this analysis, as devices with procedural behavior clearly benefit more deeply engaging experiences, thus enabling an active role in social contexts. This possibility of mediating or even generating dialogue through interaction, involving one's surroundings, reinforces the possible impact effected by this mediation, harkening back to what Ivan Illich designated as convivial tools (2001).

[10] Intentionally or by serendipity, as discussed by Miguel Carvalhais (2010) regarding Peter Kubelka's short film Arnulf Rainer (1960).
Current computational technologies lend themselves quite aptly to experimentation
and sharing activities. As makers and designers working with media technology, partici-
patory action in accordance with open source standards adds a sense of accountability, by
reclaiming and rethinking ones role in shaping the tools one uses and defining the na-
ture of their benefits. It is important that this intervention be guided by long-term benign
goals, as it inevitably contributes to reshaping the technological and cultural fabric of our
time in history. In this spirit, most of the materials and components used were recycled
or repurposed, and full documentation is available on a public wiki in http://mdi.takio.
net, under a Creative Commons Share-Alike license, without commercial restrictions.
As this project hopes to demonstrate, open hardware is, both in its spirit and current
state of development, a primed playground for what Janet Murray described as a sand-
box for the development of computational systems and procedures through experimental
exploration (2011, 339). Not just for end users of well-intentioned black boxes, but for an
emerging breed of produsage[11] agents. The expressive potential of the devices built and used over the course of this research is not apparently crippled by lack of processing power, as was observed when they were experimented with by test subjects. Rather, their often unexpected configuration details and physical properties added to the perceived richness and complexity of interaction experiences. As curiosity was piqued by the unconventional nature and hand-made appearance of the devices, bridges were found to the development of a deeper hardware literacy, as many subjects felt they too could acquire the skills needed for similar projects, taking one step further from consumers to creators, actively engaged in generating value beyond wealth (Bauwens 2006). In retrospect, it is gratifying to observe the results achieved by using humble means and obsolete technologies, in a time where product life cycles end long before significant technological leaps.

[11] As described by Axel Bruns in Produsage: Towards a Broader Framework for User-Led Content Creation (2007).
fying to observe the results achieved by using humble means and obsolete technologies,
in a time where product life cycles end long before significant technological leaps.
The devices here described are by no means considered final, and further variations
are under consideration, especially regarding their programming, physical layout, scale
and connectivity. Also of interest is the research of computational and procedural abilities
in the most rudimentary possible build, for the accessibility and educational potential
of such a device.
It is our humble hope that this project and its documentation may contribute to a
deeper collective hardware literacy and a more distributed control over the tools we use
to define our world and ourselves.

References

Ascott, Roy. Telematic Embrace: Visionary Theories of Art, Technology, and


Consciousness. 1st ed. University of California Press, 2008.
Bauwens, Michel. The Political Economy of Peer Production. Post-autistic economics
review 37 (2006): 3344.
Bolter, Jay David, and Richard Grusin. Remediation: understanding new media.
Cambridge: The MIT Press, 2000.
Carvalhais, Miguel. Towards a Model for Artificial Aesthetics. (PhD diss.,
University of Porto, 2010).
Illich, Ivan. Tools for Conviviality. London: Marion Boyars, 2001.
Klüver, Billy, and Robert Rauschenberg. The Purpose of Experiments in Art and
Technology. Vol. 1. New York: E.A.T. News, 1967.
Lister, Martin et al. New Media: A Critical Introduction. 2nd ed. Abingdon, Oxon, UK:
Routledge, 2009.
McLuhan, Marshall. Compreender os Meios de Comunicação. Trad. José Miguel Silva. Lisboa: Relógio D'Água, 2008.
Murray, Janet H. Inventing the medium: principles of interaction design as a cultural
practice. Cambridge, MA: The MIT press, 2012.
Ribas, Lusa. The Nature of Sound-image Relations in Digital Interactive Systems.
(PhD diss., University of Porto, 2011).
Shanken, Edward A. Art and electronic media. London: Phaidon Press, 2009.
Wardrip-Fruin, Noah. The New Media Reader. Cambridge, MA: The MIT press, 2003.

Nevermore: Pretext Machine

Bruno Figueiredo
bfigueiredo@arquitectura.uminho.pt
Universidade do Minho, Guimares, Portugal

Susana Lourenço Marques
smarques@fba.up.pt
Universidade do Porto, Portugal

Keywords: Computation, Graphical Algorithm, Data Visualization, Generative Design, Drawing, Poetry, X.

Abstract: Our proposal is to make a graphic representation of the sound syntax and the
syllabic structure enclosed in The Raven, published in 1845 by Edgar Allan Poe, using a
script that works simultaneously as reading-text-machine, a drawing-machine and a
synthesis-text-machine.

The script translates the poem structure into an abstract grid, generating a drawing.
The geometric definition of the poem is then constrained by the characters and their
correspondent location to the sound code: the word nevermore and the textual reverbera-
tion it produces. A synthesis of the poem is achieved by a recursive selection of syllables,
resulting in a graphical and textual configuration towards a rewritten final stanza.
The process is repeated with the Portuguese translation of the poem, made by
Fernando Pessoa in 1924. Although it follows the same initial structure and algorithms
the change of idiom introduces different geometries and sound reverberations.

249
1. Introduction

Originally published in February 1845 by Edgar Allan Poe, The Raven, widely translated and illustrated since then, was analysed in Philosophy of Composition (1846) to demonstrate Poe's writing method - a recurrent rhythmic combinatory procedure to emphasize the mathematic and mechanical structure of the text.
Our proposal is to make a graphic representation of the sound syntax and the syl-
labic structure enclosed in The Raven, using a script that works simultaneously as read-
ing-text-machine, a drawing-machine and a synthesis-text-machine.
The poem is conceived as a pretext to use the sounds contained in the single word of
the refrain. The script defines a set of combinatory rules that move and elapse the text according to its basic sound code: the word nevermore and the textual reverberation it produces. As Poe puts it, considerations inevitably led me to the long o as the most sonorous vowel in connection with r as the most producible consonant (Poe, 1846).

2. Script

Made with the precision and rigid consequence of a mathematical problem (Poe, 1846), it
is possible to describe and deduce an algorithm and represent it in a graphic grid. This
grid gives visibility to the transformations that occur along the script iterations:
a) Reading text machine: the computational model we present starts by defining a grid of points that interprets the structure of the poem: stanzas (18), verses (108) and syllables (1512). Each point of the grid has a correspondence to a character, which has a value assigned by the script: a geometrical representation of variable length;
b) Drawing machine: in each iteration a repulsive reaction is applied to the geometries from the points of resonance ("nevermore", "or");
c) Synthesis text machine: a process of natural selection is undertaken at each iteration. The strongest syllable in an equivalent position between two stanzas remains; the other is obliterated. The elected syllables generate a final stanza. The result is a graphical and textual synthesis where the poem is rewritten (a minimal sketch of this selection step is given below).
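As a concrete illustration of the selection step in (c), the fragment below recursively reduces a set of stanzas to one rewritten stanza. It is only a sketch under assumed data structures (stanzas as lists of syllables, a numeric strength measure); it is not the authors' Grasshopper/Python implementation.

```python
# Hypothetical sketch of the "synthesis text machine": names and the strength
# measure are illustrative assumptions, not the actual script.

def synthesize_stanza(stanzas, strength):
    """Recursively reduce a list of stanzas (lists of syllables at equivalent
    positions) to a single rewritten final stanza: at each round the stronger
    syllable of each pair survives, the other is obliterated."""
    while len(stanzas) > 1:
        survivors = []
        for a, b in zip(stanzas[0::2], stanzas[1::2]):
            survivors.append([x if strength(x) >= strength(y) else y
                              for x, y in zip(a, b)])
        if len(stanzas) % 2:          # an odd stanza passes through unchanged
            survivors.append(stanzas[-1])
        stanzas = survivors
    return stanzas[0]

# Toy example: "strength" is simply resonance with the sound code "or".
poem = [["nev", "er", "more"], ["dream", "ing", "door"], ["rap", "ping", "lore"]]
print(synthesize_stanza(poem, lambda s: 10 * s.count("or") + len(s)))
```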

The script, developed with the purpose of reading the English version of the poem, can also read other translations, in this case running the Portuguese translation made by Fernando Pessoa in 1924. Only two parameters had to be changed: the number of syllables per verse, from 18 to 22; and the points of resonance (the textual elements "ais" and "ro").

The script is made in Grasshopper 0.9 (a graphical algorithm editor integrated in the Rhino modelling software), complemented by some functions written in Python script. The script allows the user to read/draw/synthesize other poems by changing the parameter variables in the Grasshopper interface, such as the number of stanzas, verses and syllables, and by defining the strongest rhymes along the poem.
Future advances in this demo will include an interface developed in Processing, in order to give the user control of the parameter values.

Fig. 1–7. Pretext drawings generated by reading The Raven original version, iterations #1–7.

Fig. 8–14. Pretext drawings generated by reading The Raven Portuguese translation, iterations #1–7.

proph thought that thing night denser tling there course press straight ting still fiend shriek start ing

there tempt whose dreams quaint wheth falls burned wrought saint stock both floor fore wretch this nights shore black

plume shorn sance daunt thee with these feath stood sought aidenn thee ter ed shall scarce demons leave light stream

throws saint cient ping trance friends gloat flown shore bore clasp there dirges there shad floor that heart

still tured gaunt trance nights flown name plore shall press said noth from floor shall more

more more that

Fig. 15. New stanza generated on the pretext drawing # 7 iteration, The Raven original version.

que mais cor gri dis noi mais gres frou quan quei chei ain dum zen dis tris par mar gu

ra tor trou dia tais fron qual quem nhan nhos quem mos trou guem vras trais guais brais brais nais

tens pren lhar mais mim que man nha tris gre guem por mui nio que ses nha vros tra vi gre

do que nha luz lan guem lhe res tris gou nha som bra meus bras brais brais guais ais trais quem

ais trais num ber tar qual que nun meu mais quo fran tes som meus ses brais seus brais guais trais

dis cli nar cor nun nun mais mais

Fig. 16. New stanza generated on the pretext drawing # 7 iteration, The Raven Portuguese translation.

References

Funkhouser, Chris T. Prehistoric Digital Poetry: An Archaeology of Forms, 1959–1995. Tuscaloosa, AL: The University of Alabama Press, 2007.
Pessoa, Fernando. Pessoa Inédito. Org. Teresa Lopes. Lisboa: Livros Horizonte, 1993.
Poe, Edgar Allan. The Collected Tales and Poems of Edgar Allan Poe. New York: Modern Library, 1992.
Portela, Manuel. "O Corvo de Pessoa: Uma Filosofia da Tradução." In Revista da Faculdade de Ciências Humanas e Sociais 7. Porto: Edições Universidade Fernando Pessoa, 2010.

Profilography

Pablo Garcia
pgarcia@saic.edu
School of the Art Institute of Chicago, USA

Keywords: Drawing, History of Art, History of Technology, Albrecht Dürer, Eadweard Muybridge, Pre-Cinema, 3D Printing, Investment Casting, Digital Fabrication, Computational Art.

Abstract: This art project exploits digital modeling and fabrication techniques to reexamine historical images. Using a process I call Profilography (tracing and extruding a series of sequential contours or profiles), I transform serial or morphological images from art history into contemporary works of digital art. The goal of the project is to connect proto-digital art, analog in craft yet digital in conception, to the software and hardware of today. This both expands the reach of historical art into today's computational environment and creates a rich historical context for digital art.


1.
Introduction

What would artists from the distant past do with a computer? At a glance, it seems that it would be a strange device to their analog sensibilities. From today's vantage point, some artists could be considered digital, even though it would be generations before computational technology would transform the art world. Artists who used technology to automate processes, or who made images through logical, serial, and analytical thinking, predate computers, but their methodologies are quite familiar to digital artists today. One way to explore the digital nature of work from centuries ago is to apply digital technologies to historical works.
For this art project, I selected two artists to reconsider: Albrecht Dürer and Eadweard Muybridge. Centuries apart, each artist explored techniques that, using a computer, would be welcome in today's digital paradigm. In Four Books on Human Proportion (1528), Dürer presents an exhaustive morphological study of human form. Ostensibly a guide for artists drawing the human figure, from today's digital viewpoint the book reads like computer code applied to produce theme and variation from an initial figure. This parametric distortion is easy with a computer. Dürer worked by hand, meticulously using rulers and arithmetic to produce his treatise.
Muybridge was a 19th century photographer most famous for his serial imagery of animals in motion. By setting up twelve cameras at regular physical intervals, he captured time and movement in a series of stills. Compiling these images into a flipbook, zoetrope, or other animation device transforms the twelve still photographs into moving pictures. His published volume of work, Animal Locomotion (1887), was vital in sparking the invention of cinema, undertaken simultaneously by notable inventors like Thomas Edison in the US and the Lumière Brothers in France.
Using a process I call Profilography (tracing and extruding a series of sequential contours or profiles), I extract new data from the historical data sets provided by Dürer and Muybridge. Computation affords new geometric possibilities unavailable to artists of the past. Transforming Dürer's facial profiles or Muybridge's side views of running animals into contiguous extrusions yields physical morphing forms. Slicing through the form at any point produces new frames from the interstitial spaces between the originals. For Muybridge, it means an animation with a potentially infinite frame rate. For Dürer, Profilography expands his morphological study into thousands of new human forms.
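The geometric core of Profilography can be sketched in a few lines: a slice taken a fraction of the way between two sequential profiles is a point-wise blend of their contours. The snippet below is a schematic illustration only (it assumes contours resampled to the same number of points); the actual works are built with 3D modelling and fabrication tools.

```python
# Sketch of "slicing" a linear extrusion between two consecutive 2D profiles.
# Assumes both contours have the same number of (x, y) points.

def slice_between(profile_a, profile_b, t):
    """Return the interstitial contour at parameter t in [0, 1]."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(profile_a, profile_b)]

def interstitial_frames(profiles, frames_per_gap):
    """Generate interpolated frames between each pair of sequential profiles,
    e.g. densifying Muybridge's twelve stills to an arbitrary frame rate."""
    for a, b in zip(profiles, profiles[1:]):
        for i in range(frames_per_gap):
            yield slice_between(a, b, i / frames_per_gap)

# Tiny demo with two made-up four-point contours.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.2), (1.2, 0.5), (0.5, 1.2), (-0.2, 0.5)]
print(list(interstitial_frames([square, diamond], 2))[1])
```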

2.
Artworks

2.1.
Profilography (after Muybridge)
Eadweard Muybridge's 19th century photographic studies of animal locomotion marked the beginning of cinema. What began as a way to settle a wager (does a galloping horse ever fully leave the ground?) evolved into a proof of concept: sequential photographs of action can be assembled to realistically present life in motion. Or, as we know it today: a movie. To capture the action, Muybridge used twelve still cameras at regular intervals to capture one cycle of a horse's gallop. By cinema standards, this is quite sparse. There is a lot of data missing between each frame in comparison to the 30 frames per second of contemporary video.

Muybridge produced hundreds of studies of animals in motion. Plate 624, a running horse, is the basis for this project. Using Profilography, the twelve photographs
become a continuous profile. Slicing through the extrusion yields new frames, derived
from Muybridge but absent from his original sequence. Since the model is contiguous,
there are an infinite number of frames that can be generated from the original twelve.
After 3D printing the digital model, each print is prepared for investment (lost-wax)
casting. In traditional metal casting, a form must first be molded and cast in a series of
steps to produce a wax version. That wax version is then slowly covered in a ceramic shell.
The wax inside is burned away, leaving a void for the molten bronze. In this project, the
3D prints are directly cast in the ceramic shell and melt away when flash-burned. Since
the parts can be made on demand by a 3D printer, this process obviates the need for in-
tensive manual sculpting. The final piece is a 3D manifestation of 2D images originally
made to represent 4D (temporal) action, exploring artistic methods across time: millennia
of bronze casting, the 19th century (early cinema and photography), and the 21st century
(computers and 3D printing).

2.2.
Profilography (after Drer)
In Vier Bücher von Menschlicher Proportion (Four Books on Human Proportion) (1528), Albrecht Dürer exhaustively examines variations of human form: not as Vitruvius's depiction of ideal human measurements, but as a full range of proportional possibilities. This physiognomic treatise establishes the basic parameters for drawing the human face and figure, such as relationships between the eye, nose, mouth, and chin. Over dozens of pages, Dürer shows an incredible variety of male and female figures and facial profiles, drawn by hand but made with a precise mechanical approach to geometric variation.
The six facial profiles Dürer presents early in the treatise are the basis for this machine. Using Profilography, the six faces become a continuous facial profile. Slicing through the extrusion yields new faces, derived from Dürer but absent from his analog treatise. After making the form into a closed loop, I 3D printed the form and mounted it onto a motor-driven spindle. As the piece spins, a light casting a shadow along the profile edge animates the transforming faces. Dürer's early experiment in parametric transformations arrives at its 21st century digitally-produced conclusion.

Fig. 1. Profilograph (after Dürer). Dürer's six profiles in Four Books on Human Proportion (1528) connected through digital extrusion.

Fig. 2. Profilograph (after Dürer). The facial extrusions are wrapped into a closed loop and fabricated with laser-cut aluminum and 3D printed parts. The form is mounted to a motorized spindle.

Fig. 3. Profilograph (after Dürer). As the form spins, Dürer's original profiles morph between the six original faces.

Fig. 4. Profilograph (after Dürer). The installed machine includes a light casting a shadow of the profile edge. The shadow is a 2-dimensional morphing of Dürer's original faces.

Fig. 5. Profilograph (after Dürer). Video. URL: http://bit.ly/145wDNV.

Fig. 6. Profilograph (after Muybridge) Video. URL: http://bit.ly/Weumxw.

Fig. 7. Profilograph (after Muybridge). Muybridge's original photographic series is first compiled, then transformed through Profilography into a solid.

Fig. 8. Profilograph (after Muybridge). The digital model becomes 3D prints used directly in bronze
investment casting. The bronze parts are welded together into a single form.

Fig. 9. Profilograph (after Muybridge). Finished bronze sculpture.

References

Dürer, Albrecht. Vier Bücher von Menschlicher Proportion (Four Books on Human Proportion). (Reprint) Babenberg Verlag GmbH, 2005.
Dürer, Albrecht, and Strauss, Walter, ed. The Human Figure: The Complete Dresden Sketchbook. Dover Publications, 1972.
Hendricks, Gordon. Eadweard Muybridge, Father of the Motion Picture. Dover Publishers, 1975.
Hill, Paul. Eadweard Muybridge. Phaidon, 2001.
Kurth, Willy. The Complete Woodcuts of Albrecht Dürer. Courier Dover Publications, 1927.
Muybridge, Eadweard. Animal Locomotion. Dover Publishers, 1887.

Heimlichkeit des Berührens: Exploring the Correlation of Perception and Intimacy

Alexander Müller-Rakow
alexander.mueller@udk-berlin.de
University of the Arts Berlin, Germany

Oscar Palou Rib


palou.o@gmail.com
University of the Arts Berlin, Germany

Michael Pogorzhelskiy
misha.pt@gmail.com
Weissensee School of Art Berlin, Germany

Keywords: Sound Installation, Intimacy, Skin-Based Interfaces, Haptic Interfaces.



Abstract: Heimlichkeit des Berührens is a sound installation that invites visitors to experience the intimacy of touch. A space is split into four separate areas, each of which is accessible to visitors in a way that prevents them from seeing each other, whereas in a centered, invisible shared area touching is the only enabled form of communication. The exploration, the contact and the movements of touches are captured by a specifically designed sound instrument (Müller-Rakow 2012). Below we briefly present the concept behind the practical work and outline the setup and interaction methods of the installation.

1.
Introduction

Mostly it is the conscious model of human behavior that underlies the development and design of embodied interfaces to control (social) interaction. Yet the diversity of expressions in human social interaction includes all human senses and is always accompanied by unconscious actions.
In their introduction to the work Room#81, d'Alessandro et al. refer to the unconscious parts of communication, e.g. small gestures, as the foundation for emotional commitment (d'Alessandro et al. 2011). With Heimlichkeit des Berührens we first turn our and the visitors' attention to the sense of touch, due to the fact that the sense (and act) of touch, in Western philosophy, is the most intimate and exclusive perception in human interaction (Benthien 2002). On the other hand, we address the sense of hearing with an individually assigned role for manipulating the sound, in order to provoke and encourage the act of touch and, additionally, experimentation with the correlations of intimacy, touch and sound.
With this interactive exhibit we seek to bridge the gap between practice-based research in the field of interaction design for everyday communication technologies, and art with its potential for provocation, reflection and experimentation, in order to excite personal intimate exploration.
The technical setup of the installation was presented before (Müller-Rakow 2012) and tries to be in line with outstanding works that advanced the development of skin-based instruments and installations (e.g. by Waisvisz 2004; Jaimovich 2011; Brinkmann1). However, the concept of the installation marks a new approach in our research, bringing the exhibition context, the composition and its mapping into main focus.
1. http://www.daanbrinkmann.com (accessed 05-Jan-2013).

2.
Concept

The installation consists of four perpendicularly arranged walls that split a room into four areas. Visitors can only access the areas in a way that prevents them from seeing each other, whereas skin contact establishes the only form of communication between them, in a common space that connects the four areas. Visitors, becoming participants, make nonverbal contact with each other and begin a tactile communication that redraws the line of interpersonal intimacy and privacy. How may one touch an unknown and invisible person, to what extent does the interaction feel pleasant to oneself, and how does the soundscape react to the manipulations of the electronically enhanced bodies?
Occurring within the shared space, contact movements and gestures are captured by a specifically designed device that measures electrical resistance on the participants' skin. Their bodies act as a constituent of a specific electrical circuit. In doing so, each affiliated participant assumes a specific role in the joint (invisible) performance. The action of one participant touching another influences the sound synthesis, varying with the intensity, the duration of contact and the speed of movement.
The measured values are sent to a PC where the mapping and sound synthesis proceed using Max/MSP.
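The mapping itself lives in the Max/MSP patch; purely as an illustration of the idea, a sketch like the following shows how the three measured qualities of a touch could be turned into synthesis parameters. All names and scaling constants here are assumptions.

```python
# Illustrative mapping sketch only; the installation's mapping and synthesis
# are implemented in Max/MSP.

def map_touch(intensity, duration, speed, base_freq=110.0, max_gain=0.8):
    """intensity: normalised contact strength (0..1)
    duration:  seconds of continuous contact
    speed:     normalised movement speed (0..1)"""
    gain = intensity * max_gain                            # firmer contact, louder
    cutoff_hz = 200.0 + speed * 4000.0                     # faster movement, brighter
    freq_hz = base_freq * (1.0 + min(duration, 10.0) / 10) # slow glissando while held
    return {"gain": gain, "cutoff_hz": cutoff_hz, "freq_hz": freq_hz}

print(map_touch(intensity=0.6, duration=2.5, speed=0.3))
```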

Fig. 1. A shared but invisible space for collaborative skin-based sound control.


Fig. 2. Sketch of setup.

3.
Equipment

The installation requires the following equipment: 4 walls (3 x 2 meters each), 4 tripods, 4 loudspeakers and the central box, accessible from each of the 4 areas, where the touching takes place. We would like to ask the conference organizers to provide us with a separate, dark room, the tripods and the loudspeakers. Material, arrangement and alignment of the walls will be discussed individually.

4.Video Demonstration

Testing the mode of operation with a first prototype at XX: http://www.vimeo.


com/37367946.

References

d'Alessandro, Nicolas et al. ROOM#81: Agent-Based Instrument for Experiencing Architectural and Vocal Cues. Proceedings of the 2011 Conference on New Interfaces for Musical Expression (NIME 2011), Oslo, Norway, 2011.
Benthien, Claudia. Skin: On the Cultural Border Between Self and the World. Columbia University Press, 2002.
Jaimovich, Javier. Ground Me! An Interactive Sound Art Installation. Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), Sydney, Australia, 2010.
Müller-Rakow, Alexander and Fuchs, Jochen. The Human Skin as an Interface for Musical Expression. In Proceedings of the 12th International Conference on New Interfaces for Musical Expression (NIME 2012), Ann Arbor, Michigan, USA, 2012.
Waisvisz, Michel. Crackle History, www.crackle.org/CrackleBox.html, 2004 (accessed 05-Jan-2013).

Null By Morse: Performing Optical Communication
with Smart Phones

Tom Schofield
tomschofieldart@gmail.com
Digital Media, Culture Lab, School of Arts and Cultures, Newcastle University, Newcastle
upon Tyne, UK.

Keywords: Mobile Art, Morse, Optical Communication.

Abstract: Null By Morse is an installation artwork incorporating a military signaling lamp


and smart phones. A number of Morse messages are transmitted automatically by the
signal lamp. A custom app for iPhone and Android uses the phone camera to identify the
changing light levels of the lamp and the associated timings. The app decodes the Morse
and displays the message on the screen on top of the camera image. The messages are
taken from the 19th C development and testing of Morse code and its subsequent use in
the military and in transport. I discuss theoretical implications of the work by locating it in a rich, material history of optical and telegraphic communication.

1.
Introduction

The use of signaling lamps marks only one installment in the varied material history of optical communication. This history is tightly bound with the development of strategic military coordination. The development of the optical telegraph, for instance, allowed Napoleon's army to manage logistical resources across the expanding French military conquests (Standage 1998). Meanwhile, the birth of Morse code is imbricated with both art history and the American Civil War, as Samuel Morse's failed ambitions as a salon painter re-diverted his career into that of an inventor at a time when the rumblings of war between the North and South encouraged financial support of his new communication medium (Standage 1998, Gere 2006).

Fig. 1. NBM Documentation, (http://www.flickr.com/photos/92328727@N03/sets/72157632547712877/).

2.
Performing Historicity

Morse code has been employed in a variety of situations which have gone on to fame and notoriety. The invocation of the Old Testament in Samuel Morse's early public transmission "What hath God wrought" dramatically foreshadowed later uses where Morse succeeded or failed to save lives. Its history is closely entwined with the cataclysmic failure of technologies. The infamous broadcast from the Titanic, "We have struck an iceberg, sinking", is perhaps the cardinal example where Morse code is employed as a call to rescue after technological hubris helped to cause disaster.

Fig. 2. The Null by Morse interface.

3.
Dumb Phones

Morse code maintains an unusual and pervasive presence in civil and military histories. Its principal strength is that it can be transmitted in a variety of media (sound, light, radio, telegraph), and this led to its early adoption in radio, which in turn allowed it to be transmitted to aircraft. The versatility which allows Morse to exist alongside more complex communication devices provokes questions as to what other side-effect technologies are being produced alongside mainstream products such as smart phones. Null By Morse reduces the wide array of interaction possibilities of smart phones to a dumb minimum. By doing so it critiques the futurism implied by such high-tech devices and locates them in a rich material history of communication.
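The app's decoding, described in the abstract, rests on simple duration classification. The sketch below is not the app's code (which works on camera light levels in real time); it assumes the on/off durations have already been extracted and compares them against an estimated unit length.

```python
# Hypothetical Morse decoding sketch: classify on/off durations against a unit.

MORSE = {"...": "S", "---": "O", ".-": "A", "-...": "B", "-.-.": "C",
         ".": "E", "-": "T", "....": "H"}  # excerpt of the code table

def decode(durations, unit=0.1):
    """durations: list of (is_on, seconds) pairs in signal order."""
    letters, symbol = [], ""
    for is_on, secs in durations:
        if is_on:
            symbol += "." if secs < 2 * unit else "-"
        elif secs >= 3 * unit and symbol:   # a long gap ends the current letter
            letters.append(MORSE.get(symbol, "?"))
            symbol = ""
    if symbol:
        letters.append(MORSE.get(symbol, "?"))
    return "".join(letters)

# "SOS": short-short-short, long-long-long, short-short-short.
sos = ([(True, .1), (False, .1)] * 2 + [(True, .1), (False, .3)] +
       [(True, .3), (False, .1)] * 2 + [(True, .3), (False, .3)] +
       [(True, .1), (False, .1)] * 2 + [(True, .1), (False, .3)])
print(decode(sos))  # -> SOS
```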

References

Gere, C. Art, time, and technology. New York, Berg, 2006.


Standage, T. The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's Online Pioneers. London: Weidenfeld & Nicolson, 1998.

The Lonely Tail

Giselle Stanborough
gisellestanborough@gmail.com
University of New South Wales, Sydney, Australia

Keywords: Internet, Aesthetics, Embodiment, Performance, Animation, Video Art,


Cross-Discipline.

Abstract: The Lonely Tail is a four-channel video installation that investigates human-computer interaction through experimental combinations of abject and glitch aesthetics. Each channel contains an animated digital collage and sound composition sourced from the user-generated content of specific web sites. Performative actions by the artist are then superimposed on the animation using chromakey. The Lonely Tail is an experiment in the performance of vicarious engagements that are experienced by Internet users who are frequently privy to other users' documented experience of embodiment.


1.
Introduction

Campbelltown Arts Centre in Sydney commissioned The Lonely Tail for the exhibition There's a Hole in The Sky in 2012, which examined themes of anxiety in daily life. The Lonely Tail was the result of an investigation into unease about the experience and representation of embodiment in an age of ubiquitous computer connectivity.

2.
The Lonely Tail Methodology

Displayed originally as a four-channel video installation, each screen depicted an animated digital collage and sound composition containing media sourced from the user-generated content of specific online communities dedicated to a particular physical experience that has been displaced into a cyberspatial context. Channel 1 (figure 1) examines dermal grooming and extractions with content sourced primarily from popthatzit.com and reddit.com/r/popping/. Channel 2 (figure 2) examines the popular body-building and exercise culture typified by /fit/ (a board of 4chan.org), Channel 3 (figure 3) is concerned with the proliferation of amateur pornography and Channel 4 (figure 4) contains images and sound sourced from various food blogging sites. This content was chosen because it relates to the abject described by Julia Kristeva, as "food loathing", "a wound with blood and pus, or the sickly, acrid smell of sweat", "these body fluids, this defilement, this shit" (Kristeva 1982, 23).
Each channel presents an image of superimposed performance actions by the artist that are related to the content of the animation. Using chromakey effects, the artist's body is replaced by a digital animation comprised of mediated and distorted images sourced from the online sites and communities mentioned above. The chromakey is done ineptly, so that evidence of a chromakey green costume and pixelation can be clearly seen. This misregistration is intended to make visible the media utilised and to prompt viewer distaste at a degraded style as much as revulsion towards the visibly abject content. This union between the abject and electronic glitch proposes the possibility of engaging with such Internet content as an immaterial ritual of defilement (Kristeva 63–64).
The notion of an immaterial ritual of defilement is significant because The Lonely Tail attempts to illustrate the kind of vicarious sensation that all notional bodies experience when transgressing physical, categorical distinctions between the viewer of the artwork, the online user, and the digital body depicted. Such ambiguous relations to the body in cyberspace challenge the assumption of the Internet as a disembodied environment. The union of the body as a pictorial depiction and the process of embodiment as vicarious experience allies The Lonely Tail with feminist criticism of conventional representations of the female body in cyberspace, most notably the work of N. Katherine Hayles (1999).

Fig. 1. Channel 1 Video Still (full video available: http://www.youtube.com/watch?v=x5nD51rkVrA)

Fig. 2. Channel 2 Video Still (full video available: http://www.youtube.com/watch?v=jXvjCReHPQE)

Fig. 3. Channel 3 Video Still (full video available: http://www.youtube.com/watch?v=a0ERkC5ezYs)

Fig. 4. Channel 4 Video Still (full video available: http://www.youtube.com/watch?v=4pcMyftm3QY)

Fig. 5. The Lonely Tail installation documentation.

References

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics,


Literature and Informatics. University of Chicago Press, Chicago, 1999.
Kristeva, Julia. The Powers of Horror: An Essay on Abjection, trans. Leon S. Roudiez.
New York: Columbia University Press, 1982.

Funkschatten: a Creative Collaboration
Experience

Michael Tränkner
michael.trankner@gmail.com
KAZOOSH!, Dresden, Germany

Theresa Schnell
theresaschnell@hotmail.com
KAZOOSH!, Dresden, Germany

Keywords: Kazoosh!, Collaborative, Art, Installation, Sound, Visuals, Interactive, Urban,


Landscapes.

Abstract: Creating a piece of art is a deeply personal process inspired by your surround-
ings, society and environment. However, collaborating with over 15 people from different
backgrounds and only five days of preparation for an installation turned out to be a new

challenge for most of us. In this report, we cover our approach from techniques for the
creative process to organizing a workgroup.

1.
Introduction

In November 2012 the 16th annual CynetArt Festival was hosted in Dresden, Germany. The
INTERNATIONAL FESTIVAL FOR COMPUTER BASED ART is a recognised platform of digital
culture (CynetArt 2012) and is funded by the Saxon State Ministry of Science and Art. For
the first time, the CynetArt was organized to take place in different locations (i.e. festi-
val halls, bars, clubs and on the streets) around the city of Dresden all at the same time.
The KAZOOSH! team took the opportunity to participate for the second time in a row in this event. KAZOOSH! is a community of like-minded people with a drive to create awe-inspiring installations, its members coming from very different backgrounds and thus bringing various kinds of expertise to the projects. Members of KAZOOSH! studied the fine arts, computer science or electronics; however, the unique nature of this group is more complex and will be covered in depth in a future paper.
In this paper we describe the process, difficulties and the overall experience of de-
signing an installation. First, we briefly explain the motivation behind KAZOOSH! and the
installation. Second, we illustrate the circumstances and limitations of the location, the
timeframe and our resources. The third section covers the process of creating the instal-
lation. The outcome is presented in the fourth chapter. We conclude with the results of
the week, questions that were raised and the influence on our future work.

2.
Impulse

2.1.
Motivation
In November 2012 the KAZOOSH! group was given the opportunity to be a part of the CynetArt by realizing a completely new project in under one week. The main topic, Funkschatten, meaning the shadow area where radio transmission is impossible, set our minds for the installation: bringing together fine arts and new media in an installation in which the imperfections of the technical surroundings and the beauty of urban life would connect. The KAZOOSH! team was fascinated by the contrast of the digital and real world and wanted to blend these two worlds with their expertise in computer-based art, sound installations and sculpting. The creation process took place at the exhibition's location, which made it possible for us to work out a spatial concept for the Club64 (Club 2012), a small bar in Dresden. The connection between a specific space, the different fields of expertise and the interdisciplinary context made it possible to create a world which mixed up real and imaginary urban landscapes. The bizarre and subtle break between daily life and a possible second world behind that reality was displayed with different media. Electrical and mechanical systems met sensual materials, such as transparent paper. Cables and light seemed to blend into organic structures, contrasted by the geometrical forms of polygons (Fig. 1). Images and situations of everyday life were mixed with sounds to create a multisensory experience that lived on the verge of familiarity and strangeness.

2.2.
Time
These ideas and the motivation behind them were dampened by the dense timeframe of less than a week. Starting the work on November 10th and opening the installation to the public on the 15th of November overshadowed almost every part of the creation process. The short two-day period of showcasing and clearing out the location until the 17th was also part of the tight schedule. During this time not all of the KAZOOSH! members were able to take vacation days at work, which sometimes resulted in only a few hours of attendance per day. In addition to the temporal factor, the creative process of the group was further restricted by various limitations.

Fig. 1. Polygon structures.

Fig. 2. The rooms during construction.

2.3.
Limitations
The Club64 is a small bar with a worn-down interior and a low ceiling. The team was given two of the three rooms to work with (Fig. 2). Throughout the exhibition the bartender would still serve drinks, which had to be taken into consideration. The owner of the bar was cooperative but had strict rules about construction and prohibited any kind of drilling, gluing or bolting to the walls.
Further difficulties were materials and funding, yet through various channels the team raised a total of €450. Most parts of the structures were built with recycled wood from previous installations, and the CynetArt organizers provided additional lumber. Since the KAZOOSH! team currently has neither storage space nor a permanent workspace, most materials, especially wood, were returned afterwards or donated to friends, artists and the WERK.STADT.LADEN (WSL 2012). Overall, the few resources and limited timeframe liberated the creative power of the team and were considered a challenge, not a burden.

3.
Process

The week started with a meeting to set up the organizational structure for the upcoming
days. Keeping track of a group of 10 to 15 people, everybody with different assignments and
personal schedules, is the key factor for a successful cooperation. Exchanging telephone
numbers, scheduling a rough timetable or clarifying transportation can be time-con-
suming at first but enhances efficiency during the project week. Furthermore, voting
for a contact person and/or spokesman on behalf of the entire group is often needed
and makes communication with other teams or, in our case, the administration of the
CynetArt easier. Actual work started on the second day, with a session in which every
member of the team pitched in three ideas for the installation. This way, we gathered
everything from technologies and materials to feelings and moods, we wanted to convey
with the installation. Based on these various topics, we established five working-groups
of two to four people. The groups were called: sounds&mechanics, projection, origami,
sculpturing&construction and video and consisted mostly of members with a lot of ex-
pertise in that field. In contrast to the usual goal of a workshop, where participants are
introduced to a concept to broaden their skills or mindset, this project facilitated personal
growth for every member of the team by leveraging their abilities. Within each group, we
brainstormed (Fig. 3) for more detailed concepts on how to combine hardware, interac-
tion concepts and new media to illustrate the gap between the digital and urban world.

Fig. 3. Brainstorming.

Fig. 4. Group meeting.

Fig. 5. Final construction.

To make sure the final installation would still be a coherent concept, we set up two
1-hour meetings (Fig.4) and a few presentations, bringing together the working results of
all the groups. Every meeting was attended by at least one representative from each group
to facilitate meaningful decisions while still allowing for flexible scheduling. Naturally,
some groups were more connected from the beginning, and needed to work closely to-
gether throughout the entire week. For example, the construction and the projection group
had to find a material which was easy to sculpt and could still be used as a projection
screen, while being illuminated from behind. Such material tests and early prototypes
began during the third and fourth day. The last days of the process were surprisingly well
organized due to the compartmentalization of tasks, as responsibilities were distributed
among every member of the team (Fig. 5). All this previous planning, continuous feedback
and every meeting throughout the week helped to finalize the installation in the end.

4.Final Stage

The resulting installation consisted of two worlds and the question about the foreign
in between the usual. We tried to connect reality and fiction in our concept inside the
Club64 by working with new media as well as custom software and hardware solutions:
The room, which was used as a bar during the exhibition, was the border between the
well-known and the subtlety of the alien within. The video above the bar showed the feet of passengers in everyday life, with the difference that these feet walked at the height of the visitors' heads. Additionally, we transformed reality into something foreign as the sounds of the city resonated quietly from the seats. Small piezo discs transmitted the sound waves to the wood of the seats. The visitors sitting there had the chance to individually hear that sound: a sound which is so common and omnipresent in our lives that we usually would not notice it.
The interactive polygon (Fig. 6) in a side arm of this room was one of many paper-shaped sculptures. As it was touched, an alarm activated a projection in which digital life forms fled from the sculpture as if they were flushed out of their nest.

Fig. 6. Interactive polygon.

Fig. 7. Large sculpture.

Fig. 8. Visible electronics.

Fig. 9. Moving origami structures.

The second room was one coherent installation. A large polygon structure grew from
the edges into the room (Fig. 7). The construction of wooden slats was covered by differ-
ent types of paper, which formed the background of a multisensory projection reacting
to the audience's movements. The forms of the sculptures represented abstract cities
and were lit from inside, referring to the luminance of real cities. Furthermore, this
installation combined movements of mechanical objects and virtual projection. Paper,
changing LED-lights and the analog sounds caused by electric motors formed a hybrid
atmosphere. The aim was not to hide the technical background but to make it part of the
final product (Fig. 8). Cables, motors and pulleys were noticeable inside the polygons and
added their sound to the digitally produced atmospheric tunes. The electronics moved
the origami structures throughout the room, creating an impression of living processes
(Fig. 9). Artificial sounds coupled with the movement and the semi-technical appearance
caused associations with natural organisms.

5.
On our way

During the week, the process of working as a team (Fig. 10) of different people was a central aspect of the installation. Structures within the group, decision making and communication are as important as the final product. The symbiosis of analog and digital ideas and media is the common ground of the exhibition and the principles of KAZOOSH!. Different people, broad interests and a specific location define the way we work. The installation as such was finished within one week, but the outcome for KAZOOSH! was the experiences and inspirations we took home. We think of this piece of art as one step of a developing process which sparked new ideas and fields of interest in each member of the group.

Fig. 10. Final installation with several members of the KAZOOSH!-Team.

References

CynetArt. About the CYNETART Festival. Website: http://t-m-a.de/cynetart/about?lang=en, 2012.
Club64. Official homepage of the Club64 in Dresden, Germany. Website: http://club-64.net/, 2012.
WSL. Homepage of the Werk.Stadt.Laden. Website: http://www.werkstadtladen.de/, 2012.

The Robot Quartet: a Drawing Installation

Andres Wanner
andres_wanner@sfu.ca
Simon Fraser University, Vancouver, Canada

Keywords: Robots, Generative Art, Drawing Machines.

Abstract: The Robot Quartet: a group of four robots receive identical instructions and jointly draw a repetitive pattern. This project investigates the relation between an abstract idea and its physical manifestation, and explores the poetry of this divide: an aesthetic space that lies beyond human control over the machine.

1.
Introduction

Fig. 1. The Robot Quartet at work.

The Robot Quartet's four drawing robots, equipped with identical repetitive instructions, start with symmetrical motions. Their drawing gets increasingly distorted by mechanical imperfections. The project is situated between Kinetic and Generative Computer Art (Galanter); inspirations go back to Jean Tinguely's drawing machines (Tinguely). Being a reflection on the properties of a mechanical system as a form-giving principle, the piece embraces imperfections rather than eliminating them. Repetitive software patterns, as well as seemingly organic traces of mechanical deviations, generate an aesthetic between analog and digital.
The author hopes to argue for a beneficial role of inaccuracies in robotics. While technology may become increasingly precise, the work hopes to trigger reflections on how to embrace imprecision.

2.
Description of the system

Four slightly adapted, identical Pololu 3pi robots steer freely in all directions with two independent motors. Initially synchronized, they follow an exactly timed choreography with inexact motions. As physical machines, they not only move back and forth, but draw from a repertoire of straight and curvy lines, scribbles and zigzag shapes, and perform rattling and wiggling motions, thus balancing considerations between the visual output and the dynamics of the robots.
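To make the role of mechanical imperfection concrete, the sketch below simulates four differential-drive robots executing the same wheel commands with a small, fixed per-robot error; their traces start symmetrical and drift apart. The parameters (wheel base, noise level) are assumptions for illustration, not measurements of the Pololu 3pi.

```python
# Simulated drift of identical commands under small mechanical imperfections.
import math
import random

WHEEL_BASE = 0.1  # metres between wheels (assumed)

def run(commands, noise=0.02, steps_per_cmd=100, dt=0.01, seed=0):
    rng = random.Random(seed)
    x = y = heading = 0.0
    kl, kr = 1 + rng.gauss(0, noise), 1 + rng.gauss(0, noise)  # per-robot wheel error
    trace = [(x, y)]
    for v_left, v_right in commands:
        for _ in range(steps_per_cmd):
            vl, vr = v_left * kl, v_right * kr
            v, w = (vl + vr) / 2, (vr - vl) / WHEEL_BASE
            heading += w * dt
            x += v * math.cos(heading) * dt
            y += v * math.sin(heading) * dt
            trace.append((x, y))
    return trace

# The same choreography, four robots, four different end points.
choreography = [(0.2, 0.2), (0.1, -0.1), (0.2, 0.2), (-0.1, 0.1)] * 4
for robot in range(4):
    print(robot, run(choreography, seed=robot)[-1])
```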

Fig. 2. Repetitive vs. organic lines.

Fig. 3. Detail: composition.

Fig. 4. Replicating the same figure.

3.
Documentation

Fig 5. Documentation Video on http://vimeo.com/55805990.

Fig 6. Robot drawing Farewell to Canada. Patterns of different line qualities can be read as dancing figures, heart shapes or falling leaves.

Fig 7. This drawing, Composition 1.1, consists of different curves. The irregular density emerged independently of the algorithm.

References

Galanter, Philip. What Is Generative Art? Complexity Theory as a Context for Art
Theory, Paper presented at GA2003 6th Generative Art Conference 2003, 2003.
Tinguely, Jean. Méta-Matic No. 6, 1959. Museum Tinguely, accessed January 16, 2013,
http://www.tinguely.ch/de/museum_sammlung/sammlung.1954-1959_0110.html

Geometries of Flight

Monty Adkins
monty.adkins@hud.ac.uk
University of Huddersfield, England

Julio d'Escriván
julio.descrivan@hud.ac.uk
University of Huddersfield, England

Keywords: Audio-Visual, Remix, Hybridity, Nodalism, Video, Visual Music.

Abstract: Geometries of Flight is an audiovisual work created by the authors in 2013. Adkins
was commissioned by Tobias Fischer to contribute to a publication centred on the work of
Kenneth Kirschner. The brief for the project was to use any of Kirschner's compositions as the starting point for a remix. All of the sound artists commissioned were given free rein to use his work in any way, with no restriction on length or media. The resulting audio piece For Kenneth Kirschner utilizes five short samples taken from Kirschner's 10 July, 2012. In response to the composition and its concept of remixing/sampling, d'Escriván created a video utilizing found footage. The intention was to concentrate on form and the reshaping of materials, drawing out the epic, frozen qualities of the harmonic and gestural content. The authors propose that their use of these materials goes beyond the accepted notion of the remix and is an example of nodal practice. In Geometries of Flight it is the process and reframing of the original material that is the most important factor in determining the identity of the new work, rather than the embedding of samples as referential units. In such works, material, concepts, and ideas are assimilated into the very fabric of the new work rather than merely weaving quotations into the surface level of the work.

Video: Julio d'Escriván


Sound: Monty Adkins
2013, 21'14''

A Bridge From Nowhere (8'44'')

Alba Francesca Battista


albabattista@inwind.it
Conservatorio D. Cimarosa, Avellino, Italy

Keywords: Electroacoustic Music, John Cage, Clarinet, Quadraphonic, Use of Space,


MaxMSP.

Abstract: A bridge from nowhere is an electroacoustic work written for clarinet and quad-
raphonic electronics. It is a tribute to John Cage's music and philosophy.


1.
Introduction

John Cage is a revolutionary figure for music of all time.


A bridge from nowhere is inspired by his masterpiece Lecture on nothing, a brilliant
musical prose composed in the 1950s. It is written with the same rhythmic structure used
in his compositions, such as, for example, Sonatas and Interludes.
The basic idea is that

a structure is like a bridge from nowhere to nowhere and anyone may go on it:
noises or tones, corn or wheat. Does it matter which? (...) We really do need a
structure, so we can see we are nowhere. (J. Cage, Lecture on nothing, 1959)

The structure of the piece evokes the text of Cage in its division into five sections. This
is made clear from the stolid fragment that, as in the prose, is repeated at the beginning
of each section and also from the apparent randomness of every musical gesture. The
central section is the bridge that brings together the whole composition, which ends with
a new beginning, like a bridge to nowhere.
The composition is quadraphonic, and the acoustic sounds of the clarinet are opposed to the electronic noises, sometimes reworkings of the characteristic monodic timbre of the instrument, sometimes totally synthetic.

2.
Algorithms and strategies

Most of the sounds come from the clarinet and are acquired ad hoc on the basis of the composition's purposes.
Each sound is subjected to various editing processes, especially warping, shuffling, convolutions and delays. All the processes I've used are related to my idea of composition: I'd like to have apparently random content within a defined structure, given by the prose.
I formed complex sounds without any harmonic relationship. By changing the envelope of each sound, I meant that synthetic sounds take on the typical profile of the sampled clarinet sound, while the clarinet loses its shape to conquer another one.
Many of the sound events are severely distorted, or deprived of their attack transient, to create more or less prolonged events with an artificially slow attack transient. On some of these I applied a new transitional character, quick and impulsive, using the spectrum of the resonance area, commonly less rich in harmonics.
I used other editing processes to create events for which the original material is used as a modulator in a vocoding algorithm applied to spectra as rich as square and triangular waves, to enrich the sonic palette of timbres and the synthesis possibilities.
The continuous bands that characterize the central section are constructed from pink noise, with an excess of power in the low frequencies, molded with the convolution of other waveforms, especially clarinet events.
I used delays in multiple sections, also with feedback, with the possibility to modulate or keep constant the delay time.

The clarinet and electroacoustic scores were written simultaneously, evaluating new performance practices for the clarinet while respecting the structural features of the instrument. Every gesture of the clarinet is a bridge between traditional writing and a new form of sound.

2.1.
Space
The spatialization does not focus mainly on the forward axis, but tends rather towards a wide distribution of the composition, also with moments where a single zone prevails or with obvious sudden contrasts.
I used MaxMSP to create an algorithm that allows me to manage the mapping of each sound event by creating random trajectories and rotations.
The space of the live clarinet, instead, is limited to the front stereo pair, external to the quadraphonic system.
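Purely as an illustration of the spatialization idea (the actual algorithm runs in MaxMSP), the sketch below turns a random rotational trajectory into per-speaker gains for a square quadraphonic layout. Speaker positions and the panning law are assumptions.

```python
# Sketch of random-trajectory quadraphonic panning (not the MaxMSP patch).
import math
import random

SPEAKERS = [(1, 1), (1, -1), (-1, -1), (-1, 1)]  # assumed corner positions 1-4

def gains(px, py):
    """Distance-based amplitude panning: closer speakers get more level,
    normalised so the total power stays constant."""
    g = [1 / (math.hypot(px - sx, py - sy) + 1e-6) for sx, sy in SPEAKERS]
    norm = math.sqrt(sum(gi * gi for gi in g))
    return [gi / norm for gi in g]

def random_rotation(steps=8, radius=0.8):
    """A random starting angle and direction, rotating an event around the room."""
    a0 = random.uniform(0, 2 * math.pi)
    direction = random.choice([-1, 1])
    for i in range(steps):
        a = a0 + direction * 2 * math.pi * i / steps
        yield gains(radius * math.cos(a), radius * math.sin(a))

for g in random_rotation():
    print(["%.2f" % x for x in g])
```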
2.1.1.Spatial description/Diagram for the performance

Fig. 1. Diagram for the performance (1, 2, 3, 4 speakers).

Impetus Cascading Chaos

Vilbjørg Broch
vilbjorg@antidelusionmechanism.org

Keywords: Iteration, Intuition.

Abstract: When working with computer-generated sound, I have for the past couple of years been very interested in exploring waveforms created through the iteration of mathematical algorithms, as is done with chaos and fractals.
Chaos has over the past century become a vast topic within mathematics, so I will in this context simplify the notion of chaos to a time-function system with orbits which have a sensitive dependence on the initial condition and which, when mapped to waveforms, produce waveforms of a very high or infinite period.
Till now I have just looked at a few things within that field. Iterating a simple transcendental function like e.g. k*sin(x) already has chaotic properties for most values of k. I looked at several maps which in some ways are extensions of this fact: the Standard/Chirikov map, the Hénon map, the Ikeda map and the Curlicue fractal. There is an almost endless possibility to design new algorithms out of this basis.

1.
Impetus

The particular work, which will be presented at xCoAx 2013, is based on an algorithm twisting the Standard map into something surely different, and then cascading versions of it to look for something one could call harmonically related chaotic and high-period orbits. The notion of cascading chaotic systems also has many aspects to it and of course offers many diverse possibilities to be explored further in the future. This particular work is done in MaxMSP.
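For readers unfamiliar with the map, a bare-bones sketch of turning the Chirikov standard map into a waveform follows. The real piece is built in MaxMSP and twists the map further; the parameter values and output scaling here are arbitrary.

```python
# Iterating the standard map,
#     p'     = p + K * sin(theta)   (mod 2*pi)
#     theta' = theta + p'           (mod 2*pi)
# and reading the angle out as audio samples. Holding each value for `hold`
# samples corresponds to the minimum iteration-period mentioned below.
import math

def standard_map_wave(K=1.2, theta=0.1, p=0.3, n=8, hold=2):
    out, two_pi = [], 2 * math.pi
    for _ in range(n):
        p = (p + K * math.sin(theta)) % two_pi
        theta = (theta + p) % two_pi
        out.extend([theta / math.pi - 1.0] * hold)   # map [0, 2*pi) to [-1, 1)
    return out

print(["%.3f" % s for s in standard_map_wave()])
```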
My concept for including voice alongside these autonomous and happily unpredictable algorithms has been to use the intuition of the voice, the physical action which comes before thought. After all, the human body, being nature in some sense, acts as the most sophisticated calculator and possibly beyond that. Very importantly, the intuitive possibilities of the human mind-body have always been a central topic for me, and since I first started working with computer music the relation between mind-intuition and computer has been a root question. Recently I listened to an interview with physicist Russell Targ, who has been a part of the US military program for remote viewing, an espionage program which aimed at a systematic development of extrasensory perception, ESP. Targ has recently written an iPhone application with which one can train ESP. The idea is that the mind can be trained to fathom the outcome of the pseudo-random algorithm in an instant. Targ very much expressed some possibilities which are of great interest to me when working with chaotic algorithms.
Back to Impetus: in order to make the details of the piece unpredictable for the rational mind, the interaction between calculation-flow and human is done by letting the person decide on the moments of change from one phase of the piece to the next. Since we are in a realm of sensitive dependence on the initial condition, initiating this change just one sample sooner or later will create different paths.
Along the way I have also looked at a few algebraic algorithms like the Mandelbrot set and the wide notion of Möbius transformations, which with suitable parameters can produce chaotic orbits. Here I have till now spent most time with the Mandelbrot set (a MaxMSP patch can be found at http://antidelusionmechanism.org/vilbjorg.html). Spending days on end slowly scrolling towards the boundary in many different places has been a universal sound- and space-travel in the infinitesimal. When arriving from the outside one gets an almost physical feel for what number precision means. When approaching the boundary it takes still more iterations before the point escapes/blows up. It can sound periodic, but then suddenly, after ever so many million iterations, the point escapes and proves that the waveform produced never was completely periodic. It seems like the boundary itself is wrapped in infinity; that is the fractal, one must agree.
How much chaos can a computer in theory generate from a chaotic map? Here is one simple way to look at it: due to the finite bit-rate the period will be finite, but the period might be longer than the concert, even much longer than your life. One could think about an algorithm that in continuous form would produce absolute chaos (an infinite period) if dealing with some ideal infinite number precision. When iterating at every sample at a 44.1kHz sample rate and working in 64 bits, the maximum possible period (with maximum rounding luck) could simply be considered to be 2^64 samples (if you return to exactly the same number-value you start over), which is about 1.84467441e19 samples and very roughly corresponds to 1.16e11 hours, or about 13 million years. My computer and I will most likely not be there at the end of that ultimate period. And I mostly do not iterate at every sample; so that I can hear what's going on, I use a minimum iteration-period of 2 samples and often much larger to fully enjoy the waveforms. In 32 bits the same calculation goes to roughly 27 hours, quite a difference! In 128 bits the time-span goes somewhat beyond my imagination.
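The back-of-the-envelope figures above are easy to check; a minimal computation:

```python
# Longest possible cycle of an n-bit state iterated once per audio sample.

def max_period(bits, sample_rate=44100):
    seconds = 2 ** bits / sample_rate
    return seconds / 3600, seconds / 3600 / 24 / 365.25   # hours, years

for bits in (32, 64):
    hours, years = max_period(bits)
    print(f"{bits} bits: {hours:.3g} hours = {years:.3g} years")
# 32 bits: about 27 hours; 64 bits: about 1.16e11 hours, i.e. roughly 13 million years
```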
Something which looks to me to be standing central today, when working with algorithms and computers, is the interplay between the mathematical idea on the one side and the computation on the other. I keep remembering Benoit Mandelbrot telling about the moments (1980) when he and a programmer from IBM saw the first prints of the Mandelbrot set. They thought something had gone very wrong; they just did not believe what they saw. No one had ever been able (or had the time in one life) to do those calculations, and no one had previously seen those rich (one could be lured to say non-linear) consequences of mathematics. I believe this is a true paradigm shift. The more I look into the various fields of mathematics, the more it becomes clear that centuries of excellent mathematical ideas still need to be explored in computational arts. Most is at this moment undone. Out of such material we will perhaps see a bridge being enforced between artistic and scientific practices.

Improvising With Self-Observing Systems: a Duet
For Cellist and Adaptive Delay Network

Alice Eldridge
alice@ecila.org
University of Sussex, Brighton, UK

Keywords: Adaptive Systems, Live Algorithms, Computational Creativity, Feedback.

Abstract: Feedback is a fundamental organising principle of living systems, adaptive sys-


tems and creative activity. This is an obvious point of fact, but a rich and inspiring point of
departure for activity at the intersection of computation, communication and aesthetics.
The proposed performance is an improvisation for cellist and an adaptive circular delay
network coupled via acoustic feedback in the concert hall environment.

1.
Introduction

I am interested in the cross-talk between analogue and digital sound in live performance and the mediation of this conversation with adaptive systems. Systems theory taught us to think above and beyond the specifics of any particular medium and highlights certain organisational principles which can be observed in biology and instantiated in silico. From this systemic perspective, the design of Live Algorithms (Blackwell et al, 2012) for musical improvisation (software capable of sustaining a responsive and inspiring live conversation) starts from the conception of the human performer and performance software as two adaptive systems coupled via a shared acoustic environment (cf. Di Scipio's composing interactions, Di Scipio, 2003).

2.
Improvisation for cello and adaptive feedback circuits

ISOS (Improvising with Self-Observing Systems) is an ongoing project exploring the performance possibilities for a human improviser and self-observing digital systems. The performance system builds upon some earlier experiments with self-controlling feedback circuits1. These experiments were driven by an interest in eco-systemic principles (McCormack et al, 2009) as powerful metaphors for the design of generative and interactive systems.
1. http://www.ecila.org/ecila_files/content/academic_files/project_files/selfdirectingFeedback.html
The system is based on a circular network of delay lines, as shown in Fig. 1. A similar setup has been explored previously (e.g. Burns, 2003). In this case, however, the delay units are adaptive: rather than peak limiting via compression or even non-linear waveshaping (ibid), each unit contains a watt-governor-style spring model which alters the length of the delay line if the input amplitude exceeds a pre-specified limit. This creates a basic homeostatic mechanism (Ashby, 1952) whereby the positive feedback created by the Larsen effect is stabilised by the adaptive delay mechanism.
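A minimal sketch of one such self-observing unit is given below. It illustrates the homeostatic principle only and is not the performance software: the delay length is treated as a damped spring that gets kicked whenever the input amplitude exceeds the limit, detuning the feedback path and so taming the Larsen build-up.

```python
# One adaptive delay unit: spring-modelled delay length, kicked by loud input.

class AdaptiveDelay:
    def __init__(self, rest_delay=2000, limit=0.8, stiffness=0.001, damping=0.05):
        self.buf = [0.0] * 65536
        self.write = 0
        self.delay = float(rest_delay)   # current delay length in samples
        self.rest = rest_delay
        self.velocity = 0.0
        self.limit, self.k, self.c = limit, stiffness, damping

    def tick(self, x):
        # Spring dynamics on the delay length, kicked when |x| exceeds the limit.
        kick = max(0.0, abs(x) - self.limit)
        accel = -self.k * (self.delay - self.rest) - self.c * self.velocity + 50.0 * kick
        self.velocity += accel
        self.delay = min(len(self.buf) - 1.0, max(1.0, self.delay + self.velocity))
        # Plain nearest-sample delay read and write.
        y = self.buf[int(self.write - self.delay) % len(self.buf)]
        self.buf[self.write] = x
        self.write = (self.write + 1) % len(self.buf)
        return y

unit = AdaptiveDelay()
print(unit.tick(1.0), unit.tick(0.0))
```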

Fig. 1. Schematic of the performance software: Eight digital delay lines are connected in a bi-directional
circular fashion and fed by two microphones. Two outputs are fed to a stereo PA. The network induces a
Larsen effect which is fed and perturbed by the performer. The circular setup of delay lines and the adaptation of delay times creates an unpredictable yet coherent response and reinterpretation of the cellist's input.

The sonic complexities arising from the circular delay network and the adaptive
behaviour of the self-observing delay modules create a dynamic environment which at
once reacts to and provokes the human improviser, creating a beguiling digital-analogue
coupling between human and machine.

References

Ashby, Ross. Design for a Brain, Chapman & Hall. 1952.


Blackwell, Tim, Bown, Oliver, and Young, Michael. Live Algorithms: Towards
Autonomous Computer Improvisers. In McCormack, Jon and DInverno, Mark (Eds)
Computers and Creativity Springer. 2012.
Burns, Christopher. Designing for Emergent Behavior: a John Cage Realization. In Proceedings, International Computer Music Conference 2004, Stanford, 2003, pp. 193–196.
Di Scipio, Agostino. Sound is the interface: from interactive to ecosystemic signal processing, Organised Sound, vol. 8, no. 3, pp. 269–277, 2003.
McCormack, J., A. Eldridge, A. Dorin and P. McIlwain. Generative Algorithms for Making Music: Emergence, Evolution and Ecosystems. In R.T. Dean (ed.) The Oxford Handbook of Computer Music, Oxford University Press, pp. 354–379, 2009.

Drive Mind

Hideyuki Endo
endotut@gmail.com
Tokyo University of Technology, Hachiouji, Japan

Hideki Yoshioka
yoshioka@stf.teu.ac.jp
Tokyo University of Technology, Hachiouji, Japan

Keywords: Media Art, Sound Art, Electro Acoustic, Sound Sculpture, Sonification, Tangible
Interface, Max/MSP/Jitter, Noise, LED, Performance, Refraction of Light.

Abstract: Drive Mind is a unique electro acoustic system that provides audiences with a
new sonic experience produced by the refraction of light. The main feature of Drive Mind
is to visualize abstract figures of sound by a ray of LED light, and to manipulate sound by
the refraction of this light. To ease recognition and understanding by the audience, the

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org


performer manipulates the acrylic objects physically, and the system produces the sound
with this manipulation data, which was generated through well-understood physical
phenomena. In this way, an audience will have a full sonic and visual experience filtered
through their own imaginations.

301
1.
Introduction

Advances in computing have led to achievements in complex virtualization. Also, with


the development of peripheral devices, such as touch panel screens and remote control-
lers, it has become easy for anyone to have a virtual experience. These tools are actively
utilized in the field of media art.

2.
Aim

Advances in virtualization have not been able to eliminate the incongruity of ones tactile
experience. For example, when a user is using gestures while performing an operation
with motion graphics, ideally the user should feel the weight of the operation, not only
the air resistance. Also, most of the data for the operation is in reality invisible. Therefore,
the user would be unaware of the method working behind the scenes. Because of these
problems, an audience has difficulty empathizing with the performer or system and does
not have a full experience expanded by the audiences own imaginations. To overcome
these difficulties I have developed an electro-acoustic system named Drive Mind.

3.
Approach and Implementation

A ray of LED light visualizes the manipulation data. This ray of light is a metaphor for a
stream of sound. The system is manipulated physically through acrylic objects. The ray
of light is projected onto a panel and the camera shoots an image onto that panel. The
input to this system is an image taken with a Web camera. When the acrylic objects
move, the ray of light gets refracted; in effect, the position of the light on the panel
changes. The moving light on the panel is tracked by an application called Max/MSP/
Jitter and produces positional information. This positional information is converted to
MIDI information, which is used to produce a variety of sounds generated by a software
synthesizer called Reason.

Fig. 1. An LED light and acrylic objects.

302
4.Media Assets

Fig. 2. A video asset (http://www.youtube.com/watch?v=iyS3WUbAQD8).

Acknowledgements: I would like to give special thanks to Dr. Yuta Uozumi and Tomoko
Nakai for their creative advice. I would also like to thank Paul Brocklebank and Dr. Akemi
Iida for proofreading this paper.

References

Hiroshi, I. Fusion of Virtual and Real: Tangible Bits: User Interface Design towards
Seamless Integration of Bits and Atoms. IPSJ-MGN430305. MIT Media Lab, 2002
Roads, Curtis. The computer music tutorial. Massachusetts Institute of Technology,1996.

303
304
Decomposing Electric Brain Potentials for
Audification on a Matrix of Speakers

Titus von der Malsburg


malsburg@gmail.com
University of Potsdam, Germany

Christoph Illing
c@sinuous.de
Sinuous, Berlin, Germany

Keywords: Audification, Electric Brain Potentials, Independent Component Analysis.

Abstract: Audifications of electric brain potentials suffer from the fact that each scalp
electrode records a mixture of signals from all neural generators plus muscle artifacts
resulting in a opaque and noisy rendition. We apply a recently developed computational
technique to separate source signals from the recorded mixtures. These sources are

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org


then edited individually and spatialized in a matrix of speakers. The result is a clearer
and more transparent audification of electric brain activity.

305
For at least a half century, musicians and sound designers have used audifications ofelec
tric brain potentials to perceptualize the functioning of the human brain (e.g., Lucier,
1965; Klein, 2001, 2004; Dean, White, Worall, 2004). However, presenting brain signals in a
transparent and appealing manner has proven to be a challenge. One factor contributing
to this is the volume conduction in neural tissue, cerebral spinal fluid, the skull, and skin:
signals of the various neural generators are transmitted to all electrodes on the scalp such
that each electrode records a mixture of all source signals. This poses various problems
for audifications: Since the source signals are not separated, it is not possible to edit them
individually and to spatialize them freely. The result is an obscured rendition of the sig-
nal in which potentially interesting components mask each other. Parameter mapping
sonification circumvents this problem by extracting features from the raw signal that are
used to control parameters of a sound generator. While this approach produces intriguing
results (e.g., Monro, 2004; Potard, Schiemer, 2004; Rangel, 2012), it forgoes the authen-
ticity and immediacy sought by audifications. The present work addresses this problem
using a computational technique that has recently been introduced in neuroscience: in-
dependent component analysis (ICA) identifies source signals in mixtures heuristically
by assuming certain statistical properties of the sources: minimal mutual information
(Bell & Sejnowski, 1995) or non-Gaussianity (Hyvrinen & Oja, 2000). In previous work,
ICA has successfully been used to separate muscle artifacts such as those generated by
eye movements from brain signals (Jung et al., 2000). In the present work, we use ICA to
separate source signals from the mixtures recorded at 30 scalp electrodes. The 30 sources
obtained from this procedures are individually edited for clarity and presented using a
multi-speaker audio system. The result is a transparent and revealing, yet faithful ren-
dition of the recorded brain signals.

Methods and Materials


The EEG signals were acquired in a psycholinguistic experiment that studied human lan-
guage processing (von der Malsburg et al., 2013). In each recording session, a participant
read 360 sentences with varying grammatical structures. The signals were recorded in a
shielded chamber at 30 scalp sites following an extension of the 10-20 electrode layout.
The sampling rate was 512 Hz. The recording sessions lasted about an hour. BrainVision
Analyzer 2 (Brain Products GmbH, Munich) was used to band-pass filter the raw data and
to conduct independent component analysis. The source signals recovered by ICA were
then further edited using the Soundhack software by Tom Erbe.

Sound projection
Our demo will be presented using a custom made 120cm x 120cm sound module on which
30 speakers are arranged in the layout of the electrodes on the scalp (see figure). Each
source signal will be assigned to the position where it was most active. The spatial ar-
rangement of the sources will therefore resemble their distribution during the experiment
while preserving a clear separability of the signals. Audio samples can be found here:
http://sinuous.de/soundpanel.html.

306
References

Bell, Anthony, and Terrence Sejnowski. An Information-Maximization Approach to


Blind Separation and Blind Deconvolution, Neural Computation 7, 11291159. 1995.
Hyvrinen, Aapo, and Erkki Oja. Independent component analysis: Algorithms and
applications, Neural Networks 13, 411430. 2000.
Jung Tzyy-Ping, Scott Makeig, Colin Humphries, Te-Won Lee, Martin McKeown,
Vicente Iragui, and Terrence Sejnowski. Removing electroencephalographic
artifacts by blind source separation, Psycholphysiology 37. 163178. 2000.
von der Malsburg, Titus, Paul Metzner, Shravan Vasishth, and Frank Rsler.
Coregistration of eye movements and brain potentials as a tool for research on
reading and language comprehension. In Johansson, R., editor, Proceedings of the
17th European Conference on Eye Movements, Lund, Sweden. 2013.

307
308
Cmara Neuronal: a Neuro/Visual/Audio
Performance

Joo Martinho Moura


jm@jmartinho.net
engageLab, University of Minho, Guimares, Portugal

Adolfo Luxria Canibal


macedo.adolfo@gmail.com
Mo Morta, Braga, Portugal

Miguel Pedro Guimares


o.miguelpedro@gmail.com
Mo Morta, Braga, Portugal

Pedro Branco
pbranco@dsi.uminho.pt
engageLab, University of Minho, Guimares, Portugal

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org


Keywords: Digital Performance, Brain, EEG, Music, Digital Art, Body.

Abstract: Cmara Neuronal is a neuro, audio-visual performance. In this project the


movement/physical interpretation, as well as mental and sensory interpretation of the
performer, are translated, in real time, into sound and visual compositions within an
immersive projection environment.

309
1.
Introduction

Cmara Neuronal is a neuro, audio, visual performance unfolding around the character
of the Adolfo Luxria Canibal, a Portuguese poet and performing artist. In this piece, the
movement and physical interpretation, as well as mental state of the performer, are
translated into sound and visual compositions within an immersive projection environ-
ment, in real time.


Fig. 1. Public presentation of Cmara Neuronal at Guimares European Capital of Culture 2012.

One of the most innovative aspects explored in this project is the close link between
the narrative and emotional aspects of the performer, achieved through a neural-phys-
iological signal recording device, Electroencephalogram (EEG) in synchronization with
the visual and sound aesthetics. The EEG helmet, from Emotiv Inc, adorned with cables
that connect to the ceiling evoke a brain connection to the system. The helmet signals
are transmitted via wireless to a software trained specifically to the mental states of the
artist Adolfo Lxuria Canibal. The heartbeat of the performer is also acquired in real-time
during the performance.

2.
Project Development and Performance

The performance, with the duration of 45 minutes, involves a single character on the stage,
with his body connected to the visual and audio system. The connections include one EEG
helmet, from Emotiv Inc with 16 electrodes and 1 Polar Inc device for heart rate measure-
ment. The body movement is captured through a Microsoft Kinect 3D depth camera. The
heat rate is transformed into image and sound by beat detection and transmitted through
open sound control protocol. The EEG Emotive system provides a neural network soft-
ware to train the detection of specific mental states. Through a set of rehearsals the EEG
Emotive neural network was trained to respond to a set of mental states of the performer.

3.
Challenges behind physiological performances

Through out the piece, there are several moments where the visuals react to the intensity
of the neural recordings. While the piece was created and adjusted through a series of
approximations and trials conducted during rehearsal, in a real performance the readings
from the EEG are necessarily different, as many factors, including the levels of anxiety,

310
stress, concentration, focus, contribute to changes of the performers mental state and the
EEG readings. The graphical and sound design that was done for a set of signal intensity
might work differently from what was originally intended when the intensity of those
readings change. So that might cause a change in respect to the aesthetics of the piece
and add an unknown variable to what the public will see as a final outcome.
Ways that we dealt with that was to rehearse with the performer just a few minutes
before the show started, for one.

Fig. 2. Presentation and stage tests of Cmara Neuronal at Theatro Circo, Braga, 2013.

4.Relevant media assets

Access to relevant media assets about this artwork at:


http://camara-neuronal.jmartinho.net

Acknowledgements: Centro de Computao Grfica, Guimares European Capital of


Culture 2012, Rdio Universitria do Minho.

311
Keynote
314
Post Digital Publishing, Hybrid and
Processual Objects in Print

Alessandro Ludovico
Neural / Academy of Art Carrara

Computation Communication Aesthetics and X. Bergamo, Italy. xcoax.org


The influence of digital on publishing has reached a preponderant level, questioning the
very core of the practice. But more than speeding up a much touted definitive transition
from traditional to fully digital publishing (still to be accomplished on mass) there are
various practices which are pervading the timeless stoicism of the printed page with
calculated processes, transforming it into something new. This had lead to the creation
of hybrids which can be considered as new types of publications with the potential
for having both physical and digital qualities, and which are helping to pave the way
towards more complex and less predictable transitions.

315
1.How a medium becomes digital (and how publishing did)

For every major medium we can recognize at least three stages in the transition from
analogue to digital, in both production and consumption of content.
The first stage concerns the digitalization of production. It is characterized by soft-
ware beginning to replace analogue and chemical/mechanical processes. These pro-
cesses are first abstracted, then simulated, and then restructured to work using purely
digital coordinates and means of production. They become sublimated into the new
digital landscape. This started to happen with print at the end of seventies with the
first experiments with computers and networks and continued into the eighties with
so-called Desktop Publishing, which used hardware and software to digitalize the print
production (the pre-press), a system perfected in the early nineties.
The second stage involves the establishment of standards for the digital version of
a medium and the creation of purely digital products. Code becomes standardized, en-
capsulating content in autonomous structures, which are universally interpreted across
operating systems, devices and platforms. This is a definitive evolution of the standards
meant for production purposes (consider Postscript, for example) into standalone stan-
dards (here the PDF is an appropriate example, enabling digital printed-like products),
that can be defined as a sub-medium, intended to delivering content within certain
specific digital constraints.
The third stage is the creation of an economy around the newly created standards,
including digital devices and digital stores. One of the very first attempts to do this
came from Sony in 1991, who tried to market the Sony Data Discman as an Electronic
Book Player [1] unfortunately using closed coding which failed to become broadly
accepted. Nowadays the mass production of devices like the Amazon Kindle, the Nook,
the Kobo, and the iPad and the flourishing of their respective online stores has
clearly accomplished this task. These online stores are selling thousands of e-book titles,
confirming that we have already entered this stage.

2.
The processual print as the industry perceives it
(entertainment)

Not only are digitalization processes yet to kill off traditional print, but they have also
initiated a redefinition of its role in the mediascape. If print increasingly becomes a
valuable or collectable commodity and digital publishing also continues to grow as ex-
pected, the two may more frequently find themselves crossing paths, with the potential
for the generation of new hybrid forms. Currently, one of the main constraints on the
mass-scale development of hybrids is the publishing industrys focus on entertainment.
Lets take a look at what is happening specifically in the newspaper industry: on one
hand we see up-to-date printable PDF files to be carried and read while commuting back
home in the evening, and on the other hand we have online news aggregators (such as
Flipboard and Pulse) which gather various sources within one application with a slick
unified interface and layout. These are not really hybrids, but merely the products of
industrial customisation the consumer product choice of combining existing fea-
tures and extras, where the actual customising is almost irrelevant.

316
Even worse, the industrys best effort at coming to terms with post-digital print is
currently the QR code those black-and-white pixelated square images which, when
read with the proper mobile phone app, allow the reader access to content (almost al-
ways a video or web page). This kind of technology could be used much more creatively,
as a means of enriching the process of content generation. For example, since they use
networks to retrieve the displayed content, printed books and magazines could include
QR codes as a means of providing new updates each time they are scanned and these
updates could in turn be made printable or otherwise preservable. Digital publications
might then send customised updates to personal printers, using information from dif-
ferent sources closely related to the publications content. This could potentially open up
new cultural pathways and create unexpected juxtapositions. [2]

3.
Printing out the web

Many possibilities emerge from the combination of digital and print, especially when
networks (and therefore infinite supplies of content that can be reprogrammed or re-
contextualized at will) become involved. A number of different strategies have been
employed to assemble information harvested online in an acceptable form for use in a
plausible print publication.
One of the most popular renders large quantities of Twitter posts (usually span-
ning a few years) into fictitious diaries. My Life in Tweets by James Bridle is an early
example, realized in 2009 [3], which collected all of the authors posts over a two-year
period, forming a sort of intimate travelogue. The immediacy of tweeting is recorded in
a very classic graphical layout, as if the events were annotated in a diary. Furthermore,
various online services have started to sell services appealing to the vanity of Twitter
micro-bloggers, for example Bookapps Tweetbook (book-printing your tweets) or
Tweetghetto (a poster version).
Another very popular web sampling strategy focuses on collecting amateur photo-
graphs with or without curatorial criteria. Here we have an arbitrary narrative employ-
ing a specific aesthetic in order to create a visual unity that is universally recognizable
due to the ubiquitousness of online life in general and especially the continuous and
unstoppable uploading of personal pictures to Facebook.
A specific sub-genre makes use of pictures from Google Street View, reinforcing the
feeling that the picture is real and has been reproduced with no retouches, while also
reflecting on the accidental nature of the picture itself. Michael Wolfs book a series
of unfortunate events [4], points to our very evident and irresistible fascination with
objets trouv, a desire that can be instantly and repeatedly gratified online.
Finally theres also the illusion of instant-curation of a subject, which climaxes in
the realization of a printed object. Looking at seemingly endless pictures in quick suc-
cession online can completely mislead us about their real value. Once a picture is fixed
in the space and time of a printed page, our judgements can often be very different.
Such forms of accidental art obtained from a big data paradigm, can lead to in-
stant artist publications such as Sean Raspets 2GFR24SMEZZ2XMCVI5... A Novel, which
is a long sequence of insignificant captcha texts, crowdsourced and presented as an
inexplicable novel in an alien language [5].

317
There are traces of all the above examples in Kenneth Goldsmiths performance
Printing Out The Internet [6]. Goldsmith invited people to print out whatever part of
the web they desired and bring it to the gallery LABOR art space in Mexico City, where
it was exhibited for a month (which incidentally also generated a number of naive
responses from environmentally concerned people). The work was inspired by Aaron
Swartz and his brave and dangerous liberation of copyrighted scientific content from
the JSTOR online archive [7].
Its what artist Paul Soulellis calls publishing performing the Internet [8].
All this said, the examples mentioned above are yet to challenge the paradigm of pub-
lishing maybe the opposite. What they are enabling is a transduction between two
media. They take a sequential, or reductive part of the web and mould it into traditional
publishing guidelines. They tend to compensate for the feeling of being powerless over
the elusive and monstrous amount of information available online (at our fingertips),
which we cannot comprehensively visualize in our mind.
If print is quintessential of the web, such practices sometimes indulge in something
like a miscalculation of the web itself the negotiation of this transduction is reduc-
ing the web to a finite printable dimension, denaturalizing it. According to Publishers
Launch Conferences co-founder Mike Shatzkin, in the next stage publishing will be-
come a function... not a capability reserved to an industry... [9]

4.Hybrids, calculated content is shaped and printed out

This functional aspect of publishing can, at its highest level, imply the production of
content that is not merely transferred from one source to another, but instead produced
through a calculated process in which content is manipulated before being delivered. A
few good examples can be found in pre-web avant-garde movements and experimental
literature in which content was unpredictably generated by software-like processes.
Dada poems, for example, as described by Tristan Tzara, are based on the generation of
text, arbitrarily created out of cut-up text from other works.[10] One of the members of
the avant-garde literature movement Oulipo created a similar concept later: Raymond
Queneaus Cent Mille Milliards de Pomes [11] is a book in which each page is cut into
horizontal strips that can be turned independently, allowing the reader to assemble an
almost infinite quantity of poems, with an estimated 200 million years needed to read
all the possible combinations. That an Oulipo member created this was no accident - the
movement often played with the imaginary of a machinic generation of literature in
powerful and unpredictable ways.
Contemporary experiments are moving things a bit further, exploiting the combi-
nation of hardware and software to produce printed content that also embeds results
from networked processes and thus getting closer to a true form.
Martin Fuchs and Peter Bichsels book Written Images [12] is an example of the
first baby steps of such a hybrid post-digital print publishing strategy. Though its still
a traditional book, each copy is individually computer-generated, thus disrupting the
fixed serial nature of print. Furthermore, the project was financed through a networked
model (using Kickstarter, the very successful crowdfunding platform), speculating
on the enthusiasm of its future customers (and in this case, collectors). The book is a

318
comprehensive example of post-digital print, through the combination of several ele-
ments: print as a limited-edition object; networked crowdfunding; computer-processed
information; hybridisation of print and digital all residing in a single object a
traditional book. This hybrid is still limited in several respects, however: its process is
complete as soon as it is acquired by the reader; there is no further community process
or networked activity involved; once purchased, it will forever remain a traditional
book on a shelf.
A related experiment has been undertaken by Gregory Chatonsky with the artwork
Capture [13]. Capture is a prolific rock band, generating new songs based on lyrics re-
trieved from the net and performing live concerts of its own generated music lasting an
average of eight hours each. Furthermore the band is very active on social media, often
posting new content and comments. But we are talking here about a completely invented
band. Several books have been written about them, including a biography, compiled by
retrieving pictures and texts from the Internet and carefully (automatically) assembling
them and printing them out. These printed biographies are simultaneously ordinary
and artistic books, becoming a component of a more complex artwork. They plausibly
describe a band and all its activities, while playing with the plausibility of skilful au-
tomatic assembly of content.
Another example of an early hybrid is American Psycho by Mimi Cabell and Jason
Huff[14]. It was created by sending the entirety of Bret Easton Ellis violent, masoch-
istic and gratuitous novel American Psycho through Gmail, one page at a time. They
collected the ads that appeared next to each email and used them to annotate the orig-
inal text, page by page. In printing it as a perfect bound book, they erased the body of
Ellis text and left only chapter titles and constellations of their added footnotes. What
remains is American Psycho, told through its chapter titles and annotated relational
Google ads only. Luc Gross, the publisher, goes even further in predicting a more perva-
sive future: Until now, books were the last advertisement-free refuge. We will see how
it turns out, but one could think about inline ads, like product placements in movies etc.
Those mechanisms could change literary content itself and not only their containers.
So thats just one turnover.
Finally, why cant a hybrid art book be a proper catalogue of artworks? Les Liens
Invisibles, an Italian collective of net artists have assembled their own, called
Unhappening, not here not now [15]. It contains pictures and essential descriptions of
100 artworks completely invented but consistently assembled through images, generated
titles and short descriptions, including years and techniques for every artwork. Here
a whole genre (the art catalogue or artist monography) is brought into question, show-
ing how a working machine, properly instructed, can potentially confuse a lot of what
we consider reality. The catalogue, indeed, looks and feels plausible enough, and only
those who read it very carefully can have doubts about its authenticity.

5.
Conclusions

Categorising these publications under a single conceptual umbrella is quite difficult and
even if they are not yet as dynamic as the processes they incorporate, its not trivial
to define any of them as either a print publication or a digital publication (or a print

319
publication with some digital enhancements). They are the result of guided processes
and are printed as a very original (if not unique) static repository, more akin to an ar-
chive of calculated elements (produced in limited or even single copies), than to a classic
book, so confirming their particular status. The dynamic nature of publishing can be
less and less extensively defined in terms of the classically produced static printed page.
And this computational characteristic may well lead to new types of publications, em-
bedded at the proper level. It can help hybrid publications function as both: maintaining
their own role as publications as well as eventually being able to be the most updated
static picture of a phenomenon in a single or a few copies, like a tangible limited edition.
And since there is still plenty of room for exploration in developing these kind of process-
es, its quite likely that computational elements will extensively produce new typologies
of printed artefact, and in turn, new attitudes and publishing structures. Under those
terms it will be possible for the final definitive digitalization of print to produce very
original and still partially unpredictable results.

References

[1] http://en.wikipedia.org/wiki/Data_Discman
[2] Alessandro Ludovico. Post Digital Print, Onomatopee, Eindhoven, 2012,
ISBN9789078454878
[3] http://booktwo.org/notebook/vanity-press-plus-the-tweetbook/
[4] http://photomichaelwolf.com/#asoue/14
[5] Sean Raspet. 2GFR24SMEZZ2XMCVI5L8X9Y38ZJ2JD
25RZ6KW4ZMAZSLJ0GBH0WNNVRNO7GU 2MBYMNCWYB49QDK1NDO19JONS66QMB
2RCC26DG67D187N9AGRCWK2JIHA7E2 2H1G5TYMNCWYM81O4OJSPX11N5VNJ0 ANovel.
PoD, 2013, 516 pages.
[6] http://printingtheinternet.tumblr.com/
[7] http://tech.mit.edu/V131/N30/swartz.html
[8] http://soulellis.com/2013/05/search-compile-publish/
[9] http://www.idealog.com/blog/atomization-publishing-as-a-function-rather-than-
an-industry/
[10] Florian Cramer. Concepts, Notations, Software, Art, 2002.
http://www.netzliteratur.net/cramer/concepts_notations_software_art.html
[11] http://en.wikipedia.org/wiki/Hundred_Thousand_Billion_Poems
[12] http://writtenimages.net/
[13] http://chatonsky.net/project/capture/
[14] http://www.mimicabell.com/gmail.html
[15] http://www.atypo.org/it/work/unhappening-not-here-not-now/

320
321
Biographies
Monty Adkins Gabriella Arrigoni

Gabriella Arrigoni is a PhD candidate in Digital Media at


Newcastle University. Her research interests lie at the in-
tersection of collaborative practices, connectivity, urban
and innovation studies. Former editor in chief of u
ndo.net,
she has curated a number of exhibitions and talks and
published articles and essay across Europe, with a spe-
cial focus on public art and the relationship between art
and the socio-economical context. Her current research
Monty Adkins is a composer, performer and professor explores the concept of Living Lab as a curatorial strategy
of experimental electronic music. His work is character- where practitioners work in a public setting to enhance
ised by slow shifting organic textures often derived from the audience understanding of the artistic, technological
processed instrumental sounds. Inhabiting a post-acous- and scientific dimensions of the work.
matic sensibility, his work draws together elements from
ambient, acousmatic and microsound music. Adkins has http://cargocollective.com/fatlines
worked collaboratively on a number of audio-visual proj-
ects, including Four Shibusa with the painter Pip Dickens
and most recently with composer/digital artist Julio
dEscrivn. Adkins has been commissioned by the BBC,
Radio 3, IRCAM and INA-GRM amongst others. His most
recent albums are published by Audiobulb.

324
lvaro Barbosa Stephen Barrass

lvaro Barbosa (Angola, 1970) is an Associate Professor,


Dean of the Creative Industries Faculty at University of
Saint Joseph (USJ) in Macau SAR-China and a Researcher
at the Centre for Research in Science and Technology of Stephen Barrass is an Associate Professor in Digital
the Arts (CITAR). He was the acting director of the Sound Design and Media Arts at the University of Canberra.
and Image Department at the School of Arts from the He holds a Ph.D. in Information Technology from the
Portuguese Catholic University (UCP-Porto) until September Australian National University 1997, a Bachelor of Electrical
2012. In 1995 He was awarded with a Graduate Degree in Engineering from the University of NSW in 1986, and
Electronics and Telecommunications Engineering from a Graduate Certificate in Higher Education from the
the University of Aveiro, in 2006 a PhD in Computer Science University of Canberra in 2010.
and Digital Communication by the University Pompeu
Fabra in Barcelona and in 2011 he concluded a Post-Doc at http://stephenbarrass.com/
Stanford University in the USA.

www.abarbosa.org

325
Alba Francesca Battista Ruth Beer

Alba Francesca Battista (1987) graduated in Musica Elet Ruth Beer is a Vancouver-based artist whose artistic prac-
tronica with the highest possible marks at D. Cimarosa tice includes sculpture, video, photography and inter-
Conservatoire of Avellino, Italy, with M Damiano Meacci. active projections. She is interested in interdisciplinary
Her compositions are selected for many international approaches to artistic, collaborative and pedagogical prac-
contests (Biennale di Venezia 2013, Eclettica 2012, ). Her tices. Her artwork has been shown in national and inter-
electroacoustic work Eusebius is among the winners of the national exhibitions, she is a member of the RCA of Canada
Internation Competion PianoForteMix2012 by VoxNovus of and has been awarded several public art commissions.
New York. Her recent projects address social history and geological /
She graduated in Piano and in Physics, specialized marine conditions focused on the Pacific northwest region.
in Acoustic and Nanotechnologies. She is the author of She is a Professor of Visual Art in the Faculty of Visual
Elementi di Acustica Fisica e sistemi di diffusione sonora. Art and Material Practice at Emily Carr University of Art
She works as Electronics Professor for the Master Degree and Design.
in Sound Engineering at D. Cimarosa Conservatoire.
www.catch-and-release.ca

326
Sara Bergamaschi David Bouchard

Sara Bergamaschi earned a bachelor degree in Industrial David is an omnivorous New Media artist, technologist
Design at the Politecnico di Milano in February 2011. and educator. His work explores the expressive potential
In December 2012, she graduated in Design & Engi of computation, both in software and hardware forms.
neering. She developed her master thesis in collaboration His research interests include generative art, interactive
with the Design Department at Politecnico di Milano, that and responsive environments, digital fabrication, display
is entitled Changing face. Le possibilit comunicative dei technology for public spaces, electronic music interfaces
prodotti industriali (Changing face. The communicative and wireless sensor networks to name a few. He is cur-
possibilities of industrial products). In her thesis she ex- rently an Assistant Professor of New Media within the RTA
plored the theme of dynamic communication. School of Media at Ryerson University. He holds a Bachelor
Today, she performs collaborative activities at Poli of Computer Science from Concordia University and a
tecnico di Milano. Masters of Media Arts & Sciences from MIT.

http://www.deadpixel.ca

327
Pedro Branco Vilbjrg Broch

I grew up in Denmark and in the early 90s I went to


Amsterdam to study dance. Three very important teachers
Pedro Branco is Assistant Professor at the Department of have been: Katie Duck, improvisation and dance; Enrique
Information Systems, University of Minho where he is cur- Pardo (Roy Hart Theatre), text in performance and colora-
rently the director of the master program in Technology tura soprano Marianne Blok by whom I have studied tra-
and Digital Art. He is working on several funded research ditional voice for more than a decade. I have worked solo
projects focusing on diverse aspects of human-computer as well as in large collaborative performance, theatre and
interaction, ranging from physical computational interfac- music projects as a vocalist and dancer during the past 20
es, to systems that are aware of users non-verbal language. years. It had a huge impact on me when I , only around
Within the master program in Technology and Digital 6 years ago, got acquainted with the work of people like
Art he works closely with students from a wide range of John Bedini, Tom Bearden and first of all Nikola Tesla.
backgrounds developing interactive systems that explore This, in very short, complete critic or reinterpretation of
a synergy of technology and aesthetics, exploring future the 2nd law of thermodynamics, profoundly changed my
directions for our interaction with technology. view on the future for humanity on this planet. It became
clear for me that our retarded utilization of destructive
energy sources such as fossil fuels and uranium not is
due to technological impossibility but rather is caused by
financial dominance and centralization. Since this reali-
zation I have picked up a self-study of mathematics. This
is still a process which I at the moment am applying to
music/algorithmic composition.

http://antidelusionmechanism.org

328
Adolfo Luxria Canibal Pedro Cardoso

Pedro Cardoso is a communication designer, researcher,


professor and a PhD student at the University of Porto pur-
suing studies in video games in the context of new media
and interaction design, and developing experimental work
in this scope.
Adolfo Luxria Canibal is the artistic pseudonym of Adolfo
Morais de Macedo. Graduated in Law from the Lisbon www.pedrocardoso.pt.vu
University, he practiced law in this city and is a legal
advisor. He was the founder of the rock group Mo Morta,
where he is vocalist and lyricist, has created several per-
formances of spoken word in his own name, and joined
the French collective of electronic music Mcanosphre.
Participated as an actor in the television series The Smoke
Dragon and some short films. He was also an author and
broadcaster of radio programs. He is the author of dis-
persed texts in newspapers and magazines and published,
among others, the books of poetry Rock & Roll and Shards.
In 2003 he was considered one of the fifty most important
living persons of the Portuguese culture.

329
Andr Carita Miguel Carvalhais

Miguel Carvalhais is a designer and musician. He holds a


Ph.D. in art and design by the University of Porto, Portugal,
Born in Oporto (Portugal) in 1984, Andr Carita has a where he currently is an assistant professor at the Faculty
Ph.D in Fine Arts at Universidad Politcnica de Valencia of Fine Arts. His practice and research have been focusing
(Spain) where he developed a thesis on Videogame stud- on digital media and on computational design and art
ies, design, art and culture as the main focus. Currently, practices. He collaborates with Pedro Tudela in the @c
Andr is an assistant professor at Universidade Lusfona project since 2000 and he helped to start the Crnica media
de Humanidades e Tecnologias in Lisbon and coordina- label, a platform for experimental music and media art,
tor/professor of a post graduation in Game Design at which he has been running since 2003.
Alquimia da Cor in Oporto. He has also collaborated in
several videogame magazines such as Mega Score, Hype! carvalhais.org
and GameCultura. In a small team, he helped planning and at-c.org
designing the iOS videogame Poproids: Suicide Mission!. cronicaelectronica.org

http://pensarvideojogos.blogspot.com

330
Sara Colombo Arne Eigenfeldt

Arne Eigenfeldt is a composer of live electroacoustic music,


and a researcher into intelligent real-time music systems.
His music has been performed around the world and his
collaborations range from Persian Tar masters to contem-
porary dance companies to musical robots. His research
has been presented at conferences such as ICMC, NIME,
GECCO, SEAMUS, ISMIR, EMS and SMC. He is an associate
Sara Colombo has a background in Product Design with a professor of music and technology at Vancouvers Simon
Master degree in Design&Engineering. She has been work- Fraser University (Canada) and is the co-director of the
ing on her PhD in Design at Politecnico di Milano since MetaCreation research group, which aims to endow com-
2011. She has spent some months in Sweden cooperating puters with creative behaviour.
with the Interactive Institute research group as visiting
PhD student. Her research interests deal with the user www.sfu.ca/~eigenfel
sensory experience within the Human-Product Interaction,
focusing on how products can communicate information
through physical sensations instead of virtual interfaces.
The aim is to explore all the sensory modalities, to design
emotional and meaningful experiences.

331
Hideyuki Endo Julio dEscrivn

Born in Yokohama City, Kanagawa Prefecture, Japan. Julio dEscrivn is a composer and audio-visual artist
Currently a doctoral student studying sound art, with a working in creative technologies and moving image
particular research interest in the sonification of natural through laptop comprovisation. Julio is active as a laptop
phenomenon. and video artist in the UK and abroad with performanc-
es this year in Brazil, Spain and the UK. His work com-
http://on.fb.me/Y1UDvQ [Evoke the source] bines, live coding and visual loop remixing along with
found-objects amplification. Julios recent written work
includes the Music Technology book from the Cambridge
Introductions to Music series published in early 2012 by
Cambridge University Press. He is also coeditor of the
Cambridge Companion to Electronic Music (C.U.P.) and
co-author of the chapter on Composing with SuperCollider
for The SuperCollider Book (MIT Press). At present he is
Senior Lecturer at the Music Department of the University
of Huddersfield in the United Kingdom.

332
Luis Eustquio Christian Faubel

Born in Oporto in 1974. Attended the Faculdade de Belas-


Artes da Universidade do Porto, where he graduated in
Communication Design and completed his MA thesis in
Image Design. Currently employed in a web and mobile
development house, has spanned his activity throughout
various areas in just over two decades, including active Christian Faubel works at the lab3the laboratory for
politics, graphic design, illustration, visual coding, VJing, experimental computer science at the Academy of Media
teaching, sound works and parenting. Arts Cologne. Till 2012 he worked at the Institute for Neural
Computation in Bochum, where he received his PhD in
electrical engineering in 2009.
In his work he is interested what it is that enables au-
tonomous behavior? How complex autonomous behavior
may result from the interaction of very simple units and
from the dynamics of interaction between such units. He
explores the assembly of simple units into systems and the
emergence of autonomous behavior both in artistic and in
scientific research.

http://interface.khm.de/index.php/people/lab3-staff/
christian-faubel/

333
Bruno Figueiredo Sofia Figueiredo

Bruno Figueiredo (Porto, 1977).


PhD Candidate of Architecture at Escola de Arquitectura
da Universidade do Minho (EAUM), Guimares, since 2010.
Master of Contemporary and Modern Architecture Culture
at Faculdade de Arquitectura da Faculdade Tcnica de
Lisboa, with the dissertation Design, Computation and
Fabricationthe integration of digital technologies in
Architecture, in 2009.
Graduate on Architecture by Faculdade de Arquitectura,
Universidade do Porto, Porto, in 2000.
Visiting student at the Design and Computation Group, MIT,
Cambridge, in 2012. Sofia Figueiredo was born in Oporto, Portugal, and lives in
Lecturer at EAUM (Guimares) since 2005. Viseu, Portugal.
Research Member at Centro de Estudos Sociais (Coimbra). She currently lectures in several subjects art-related at
ESEVIPV, while pursuing doctoral research at Universi
dade de Coimbra. Her research seeks to explore the rich
fields of interactivity, animation and autobiography, as
they intertwine in her artistic production.

http://www.avidaecruel.com (Personal website)


http://www.esev.ipv.pt

334
Pablo Garcia Miguel Pedro Guimares

Pablo Garcia is an Assistant Professor in the Department


of Contemporary Practices at the School of the Art Institute
of Chicago. Previously he served as the Lucian and Rita
Caste Chair in Architecture and Assistant Professor at
Carnegie Mellon University from 20082012 and the 2007
2008 Muschenheim Fellow at the University of Michigan Miguel Pedro is a Portuguese composer, multi-instru-
Taubman College of Architecture + Urban Planning. From mentalist, born in Braga. Founded the band Mo Morta,
20042007 he worked as an architect and designer for Mundo Co, Palmer Eldritch, among others, having already
Diller Scofidio + Renfro. Garcia has also taught media and recorded and produced over 50 works (among vinyls, CDs
representation technologies at Parsons The New School and DVDs). He was the author of soundtracks for movies,
for Design and Princeton University. He holds architecture plays, ballets and is one of the responsibles for program-
degrees from Cornell and Princeton Universities. ming the Semibreve Festival, dedicated to electronic music
and digital arts.
http://www.pablogarcia.org

335
Rainer Guldin Jingyin He

Jingyin He (1986, Singapore) is an experimental compos-


er/ performer, researcher, sound artist, and multimedia
installation artist. Working within a hybridized culture of
technology and the arts, the style of his work evolves with
time through investigating, experimenting and formaliz-
Rainer Guldin is lecturer for German Language and ing new creative processes in contemporary sonic arts and
Culture at the Faculties of Communication and Economic visual arts practices.
Sciences at the Universit della Svizzera Italiana in Lugano Jingyin completed his Bachelor of Arts in Music Tech
(Switzerland). He studied English and German Literature nology at LASALLE College of the Arts (Singapore) in 2010.
at the University of Zurich and at Ashton University in In 2011, he was invited to STEIM (Amsterdam, NL) for a
Birmingham (England). His diploma was dedicated to the workshop residency to discuss and explore matters relat-
work of the American writer H. P. Lovecraft, and his Ph.D. ing to instrument design for electronic music performance.
thesis focused on the work of the German writer Hubert Jingyin has recently completed his Masters of Fine Arts
Fichte. He is Editor-in-Chief of the peer-reviewed multi- in both Music Technology: Interaction, Intelligence and
lingual open access e-journal Flusser Studies: http://www. Design, and Integrated Media at California Institute of the
flusserstudies.net/. Rainer Guldin taught courses at the Arts (California, USA), and will be pursuing his PhD in Sonic
Universidade do Estado do Rio de Janeiro (UERJ) in Brazil, Arts at Victoria University of Wellington (New Zealand).
the Bauhaus Universitt in Weimar (Germany) and the
Centre for Translation and Intercultural Studies of the
University of Manchester (England). He was also visiting
professor (Cathedra IEAT/Fundep) at the Universidade
Federal de Minas Gerais in Belo Horizonte (UFMG), Brazil.

http://www.com.usi.ch/en/personal-info?id=323

336
Christoph Illing Vitor Joaquim

Christoph Illing (1969). An artist, programmer and com-


poser of electronic music, he is based in Berlin, Germany.
Runs Studio SinuousSound & Code. After beginning as
sound artist with publications in electronic music, studies
in philosophy and informatics, a Masters degree in Sound
Studies at the University of the Arts in Berlin resulted in his
plunge into the combination of sound and philosophical Vitor Joaquim (Portugal 1963) laptop experimentalist,
concepts. In collaboration with Ulrike Sowodniok composi- sound and visual artist, graduated in sound and film di-
tion relating to the sound of the voice and its meaning. He recting. He started performing improvised music by the
has participated in numerous sound art exhibitions and mid 80s and has created extensively for contemporary
performances including Notations 21 (US), Erbil Theater dance, theatre, installations and cross media platforms.
Festival (IQ), Raumstimme (DE). He has five solo releases, several collaborations and a long
list of compilations and remixes.
http://www.sinuous.de/ He has worked as curator and advisor in several fes-
tivals and events. In 2000 he started EME, a festival ded-
icated to experimental arts and non-standard music. He
has been teaching and coordinating audio-visuals in art
schools since the 90s, working now as a researcher at
CITAR/UCP, Porto, where he is also teaching.

www.vitorjoaquim.pt

337
Ajay Kapur Ricardo Lafuente

Ajay Kapur is currently the Director of the Music Technol Ricardo Lafuente spent the better part of his recent years
ogy program (MTIID) at the California Institute of the oscillating between the roles of designer, hacker, teacher
Arts, as well as the Associate Dean for Research and and artist. He lives and works from the beautiful city of
Development in Digital Arts. He is also a Senior Lecturer Porto, forming one half of the design studio Manufactura
of Sonic Arts Engineering at the New Zealand School of Independente, and teaches as a guest assistant teacher at
Music at Victoria University of Wellington. He received an the Faculty of Fine Arts of the University of Porto.
Interdisciplinary Ph.D. in 2007 from University of Victoria
combining computer science, electrical engineering, me- http://manufacturaindependente.org
chanical engineering, music and psychology with a focus
on intelligent music systems and media technology. Ajay
graduated with a Bachelor of Science in Engineering and
Computer Science from Princeton University in 2002.
Kapur has published over 80 technical papers and
presented lectures across the world on music technology,
human computer interface for artists, robotics for making
sound, and modern digital orchestras. His book Digitizing
North Indian Music, discusses how sensors, machine
learning and robotics are used to extend and preserve tra-
ditional techniques of Indian Classical music.

338
Titus von der Malsburg Marianne Markowski

Titus von der Malsburg (1977) is a postdoctoral researcher


in the research group for mind and brain dynamics at the Marianne Markowski is in her third year fulltime Ph.D.
University of Potsdam, Germany, where he investigates studies at Middlesex University, Art & Design Research
language processing in the human brain. For this research Institute, London. Her research is on the design of on-
he uses techniques such as eye tracking and the record- line social interaction for older people. For this she has
ing of electrical brain potentials. Alongside his studies in designed a physical research toolthe Teletalkerthat
computational linguistics and mathematics, he worked facilitates online face-to-face interaction for older people.
as a freelance software developer and consultant. Earlier, Prior to returning to academia Marianne has been
he founded a company that produced 3D visualizations for working in user research for over 8 years. She has evalu-
customers in industry and science. During that time, he ated a wide range of software and platforms starting from
also operated a recording studio for electronic music. kiosk, desktop, interactive television to mobile applications
and handsets. She led and worked on UX projects B2C and
http://www.ling.uni-potsdam.de/~malsburg/ B2B in the retail, banking, education, mobile and govern-
ment sectors.

www.teletalker.org

339
Susana Loureno Marques Jon McCormack

Susana Loureno Marques (Caldas da Rainha, 1975).


PhD Candidate of Communication Sciences at the
Faculdade de Cincias Sociais e Humanas, Universidade
Nova de Lisboa (FCSH.UNL), since 2010.
Master on Contemporary Culture and New Technol
ogies at FCSH.UNL, with the dissertation Copy and appro-
priation in art after 1839, in 2007. Jon McCormack is an Australian-based electronic media
Graduate on Communication Design by Faculdade de artist and researcher in computing. He holds an Honours
Belas Artes, Universidade do Porto, in 1999. degree in Applied Mathematics and Computer Science, a
Recherches Doctorales Libres at cole des Hautes Graduate Diploma of Art (Film and Television) and a Ph.D.
tudes en Sciences Sociales (EHESS), Paris, in 2010/2011. in Computer Science. He is currently Associate Professor
Lecturer of Photography and History of Photography in Computer Science, an ARC Australian Research Fellow
at IntermediaFine Arts Department, FBA.UP since 2004. and director of the Centre for Electronic Media Art (CEMA)
at Monash University in Melbourne, Australia. Since the
http://88dots.tumblr.com late 1980s McCormack has worked with computer code as
a medium for creative expression. His work is concerned
with electronic after naturesalternate forms of artifi-
cial life that may one day replace the biological nature lost
through human progress and development. His artworks
have been widely exhibited at leading galleries, museums
and symposia, and have received numerous awards for
new media art and computing research. He is co-editor
(with Mark dInverno) of the book Computers and Creativity,
published in 2012.

http://jonmccormack.info

340
Alex McLean Ricardo Melo

Ricardo Melo is a Portuguese designer, writer, researcher


Alex McLean is a Research Fellow in Human/Technology and comic book aficionado. He has a B.A. in Communica
Interaction working from the Interdisciplinary Centre for tion Design from the Faculty of Fine Arts of the University
Scientific Research in Music. As a live coding musician, he of Porto and since 2008 works as a graphic and interface
performs with Adrian Ward and Dave Griffiths as the live designer at the Fraunhofer Portugal Research Center for
coding band Slub (http://slub.org/), getting people to dance Assistive Information and Communication Solutions,
to code including at the Sonar (Barcelona), Transmediale where he collaborates in academic and industrial R&D
(Berlin), Ars Electronica (Linz), STRP (Eindhoven), Sonic projects in the field of Ambient Assisted Living and Human-
Acts (Amsterdam), Lambda (Antwerp), Make Art (Poitiers), Computer Interaction.
Piksel (Bergen) and /* vivo */ (Mexico City) festivals. He In 2012 he completed his M.Sc. in Multimedia at the
also collaborates with Jake Harries in the spam-pop Faculty of Engineering of the University of Porto with the
band Silicone Bake (http://siliconebake.lurk.org/). Alex thesis entitled: Call to Adventure: Designing for Online
is active across the digital arts, for example as organiser Serendipity.
of algoraves (http://algorave.com/), of regular dorkbot
events in Sheffield and London. He also chaired the first
International Conference on Live Interfaces in 2012 (http://
lipam.lurk.org).

341
Joo Martinho Moura Alexander Mller-Rakow

Alexander Muller, born 1982, Germany. He works as re-


search scientist and PhD Candidate at the Design Research
Joo Martinho Moura, Digital Artist. His interests are fo- Labs Berlin, where he investigates the relation between
cused in intelligent interfaces, digital art, digital music bodily movements, interfaces and situational meaning.
and computational aesthetics. Joo Martinho Moura has His project-grounded research is strongly influenced by his
a special interest in the development of interfaces between interest in experimental and embodied interfaces, new in-
human behaviors and digital artifacts. Guest lecturer in struments for musical expression and sound reactive per-
the Master of Digital Art and Technology at the University formance. In addition to his research he was working as
of Minho, he is the author of several publications in the lecturer, e.g. at Hochschule fur Kunste Bremen, University
area of digital art, computer interaction and aesthetics, of Applied Sciences Magdeburg and Berlin University of
and has presented his artwork in several countries and the Arts.
conferences, such as Ars Electronica in 2012 in Linz, the
OFFF festival in Lisbon, Artech International, Guimares www.design-research-lab.org
European Capital of Culture or Chemins Numriques in
the Centre Culturel Saint-Exupry in France.

342
Michael Pogorzhelskiy Lucia Rampino

Michael Pogorzhelskiy is a product and interaction de- Lucia Rampino, PhD, is an assistant professor at the
signer living in Berlin. He is interested in social and be- Politecnico di Milano, Design Department. Her theoretical
havioural consequences of design decisions and the aes- and applied research focuses mainly on the role of design
thetics of physical computing. in new product development processes aimed at innova-
tion. She has participated in a number of European and na-
tionally funded research projects. Since January 2009, she
has been a member of the faculty of the Doctoral Program
in Design at the Politecnico di Milano Doctoral School.

343
Jason Reizner Lusa Ribas

Jason Reizner is a dyslexic hypochondriac originally from Lusa Ribas holds a PhD in Art & Design (2012), a Master
Chicago. After stints in film, print and interactive, he in Multimedia Art (2002) and a Degree in Communication
works now as a researcher in Interaction and Experience Design (1996) from FBAUP (Faculty of Fine Arts, University
Design on the Faculty of Computer Science and Languages of Porto). She is a member of ID+ (Research Institute for
at Anhalt University of Applied Sciences in Kthen (Anhalt), Design, Media and Culture), researching sound-image re-
Germany. He holds a Bachelors in Film, Video & Integrated lations and audiovisuality in digital interactive systems.
Media from Emily Carr Institute of Art & Design in Van As a professor at FBAUL (Faculty of Fine-Arts, University
couver, Canada and a Masters in Media Art and Design of Lisbon) she teaches Communication Design, Editorial
from BauhausUniversitt Weimar. Design and New Media, and Sound and Image. She con-
tributes to events and publications with articles on digital
http://reizner.org art and design.

http://lmlr.wordpress.com
lribas@fba.ul.pt

Oscar Palou Rib

Oscar Palou studied Electronic Arts and Digital Design in Barcelona. Considering himself a pilgrim of sound, he grounds his activity in acoustic sensibility as a way to achieve communicational, ecological and aesthetic purposes. His current interests are rooted in computational solutions for sound generation and installation.

www.manglart.es

Theresa Schnell

Theresa Schnell is a student of Fine Arts in Dresden, Germany. In her second year of university, she is interested in social processes and how it is possible to transform them. Currently she is in the class of Christian Sery. In 2012 she joined KAZOOSH! and took part in its current working processes. The members of the group, with their different backgrounds (such as computer science, electronics, fine arts, etc.), try to find one expression through their creative work. The process of trying, researching and finally forming something is highly interesting, especially because KAZOOSH! often works project-based within a very limited period of time.

www.kazoosh.com

Tom Schofield

Tom Schofield is an artist, researcher and Ph.D. candidate. He studies and teaches at Culture Lab, Newcastle, UK (http://culturelab.ncl.ac.uk). His research interests and art practice centre around the use of data as a material for artists. Recent self-initiated projects include Neurotic Armageddon Indicator, a wall clock for the end of the world (http://tomschofieldart.com/Neurotic-Armageddon-Indicator), Null by Morse, an installation with vintage military equipment and iPhones (tomschofieldart.com/null-by-morse), and Burj Babil (with Guy Schofield, http://fieldventures.org/burj_babil.html), a video installation which warps computer models using the Google Translate API.

www.tomschofieldart.com

Giselle Stanborough

Giselle Stanborough is an emerging intermedia artist whose practice often addresses online user-generated media and the way in which such technologies encourage us to identify and perform notions of self. She graduated from the College of Fine Art in Sydney in 2010 with the University Medal and has since exhibited in galleries around NSW and in Melbourne. Her work has been shown online in The Washington Post's Pictures of The Day and in Hennessy Youngman's Art Thoughtz.

http://gisellestanboroughart.blogspot.com.au/

Michael Tränkner

Michael Tränkner graduated in computer science at the University of Applied Sciences Dresden (Germany), specializing in multimedia programming. He is currently employed in the research and development division of the computer graphics department, working on natural user interfaces and creating software for unique museum experiences. In 2011 he joined KAZOOSH!, a group of analog makers, digital tinkerers and creative hobbyists who want to utilize the expertise of their members, from backgrounds such as the fine arts, computer science and electronics, to build public installations on the verge of art and technology. In doing so, he takes part in KAZOOSH!'s mission to develop skills, spread knowledge and broaden the mindset of its members and the community.

www.kazoosh.com

Andres Wanner

Andres Wanner is a Swiss-Canadian artist, interaction designer and educator. His interdisciplinary practice investigates generative systems: machines and computer programs producing pictures. He likes to tinker, invent and play. He has taught internationally and is an Adjunct Professor at Simon Fraser University, Vancouver, Canada. He has worked as a designer and programmer and holds an MSc in Physics and a BA in Visual Communications. His work has been exhibited in major exhibitions such as SIGGRAPH, IDEAS 10, New Forms Festival, Re-new Festival, Hyperkult and other international venues. He chaired the arts track of the Computational Aesthetics conference in 2011.

www.pixelstorm.ch

Heimlichkeit des Berührens: Exploring the Correlation of Perception and Intimacy. Installation by Alexander Müller-Rakow, Oscar Palou Rib & Michael Pogorzhelskiy.

The Robot Quartet: a Generative Robotic Drawing Installation. Installation by Andres Wanner.

Null By Morse: Performing Optical Communication with Smart Phones. Installation by Tom Schofield.

Decomposing Electric Brain Potentials for Audification on a Matrix of Speakers. Installation by Titus von der Malsburg & Christoph Illing.

Rhythm Apparatus For the Overhead Projector: a Metaphorical Device. Presentation by Christian Faubel.

Performance by Arne Eigenfeldt.

Performance by Alba Francesca Battista & Michele Brogna.

Performance by Hideyuki Endo.

Performance by Vilbjørg Broch.

Performance by Monty Adkins, Julio D'Escrivan & Iñigo Ibaibarriaga.

Performance by Alex McLean.

This project is partially funded by FEDER through the Operational Competitiveness Program (COMPETE) and by national funds through the Foundation for Science and Technology (FCT) in the scope of project PEst-C/EAT/UI4057/2011 (FCOMP-01-0124-FEDER-022700).

