Sound and Video Anthology: Program Notes
Media Compositions and
Performances: Doug Van Nort,
Curator
Curator’s Note
It is with great pleasure that I have
curated Computer Music Journal’s
2014 Sound and Video Anthology. I
decided upon a theme of distributed
agency in digitally mediated performance. In particular, my interest here
is to showcase a multiplicity of ways
in which shared agency manifests
between human performers, as well
as between human and machine performers. The collection begins with
“Part A: Distributed Composition”;
this section presents audio/video
documents that highlight five unique
approaches to distributing and sharing
expressive voices between composer-performers. In these works, the
resulting compositional voice does
not reside in one central location,
but rather is a product of collective
co-creation, at varying levels of spatial and temporal remove. This set
includes a work by Chris Chafe and
colleagues, wherein large-scale compositional qualities are influenced by
global sea levels as well as by a live
audience, resulting in a piece that
is not only artful but also consciousness-raising. In contrast
to this “outsourcing” of the details of
compositional form, the works by Pedro Rebelo and The Hub present two very different takes on “network music”: Rebelo’s work defines
a global feedback network whose
sonic character and overall shape are
the product of a large-scale interconnection of disparate acoustic spaces
and performers, whereas The Hub—
the fathers of “computer network
music”—present us with a canonical
example of their ever-groundbreaking
approach to composing for shared,
living network structures.
[Notes doi: 10.1162/COMJ_a_00274. Content doi: 10.1162/COMJ_x_00276]
The piece by CLOrk (the Concordia Laptop
Orchestra) eschews the classically
calculated and precise world of the
laptop orchestra in favor of the messy
and risky world of interdisciplinary
improvisation. The result is a work
whose shared agency is a product
of listening for gestural engagement
across forms (kinetic, sonic). Finally,
Bill Hsu and Chris Burns present
a piece that intersects this world
of cross-media improvisation with
shared control at the level of their
interactive performance systems,
resulting in a document that demonstrates the possible richness discovered when sharing gestures across
media, between human performers,
and with the system itself.
This sharing of system-level gestural and compositional forms is the
focus of “Part B: Musical Metacreation.” This section highlights
cutting-edge machine improvisation systems in performance with
two top-level human improvisers:
Paul Hession on drums and Finn Peters on flute and saxophone. Hearing
these disparate systems at play with
the same performer begins to hint
at the stylistic differences of their
composer-designers, as well as the
virtuosic flexibility of the human
players. To focus listening on these differences, I decided that this section should be audio-only. Each of these excerpts
comes from a single concert of the
same name that took place at Cafe
OTO in London in July 2014. The
curation of this concert was the work
of Ollie Bown, and so the excellent
selection of the included systems is
purely to his credit. Aside from being
privileged to take part in the concert,
from a curatorial point of view I simply had the good sense to incorporate
these works into the in-progress curation of this collection, both because
they fit so nicely with my chosen
theme and because I could feel the
strong improvisational musicianship
on the evening of performance. I will
leave the description of each system
and piece for the program notes; taken
as a whole, I feel that these works
create an excellent counterpoint to
Part A by virtue of their cohesion
as well as a concentrated focus on
both stylistic engagement and sonic
gestural forms (as compared with
the expansive and organic crossing
of media and expressive types found
within the first set). As a collection,
I hope that you will find the diversity and quality of these works as
compelling as I have, and that they
might provide a moment to reflect
on the creative insights that may be
gained when one “loosens the reins”
on one’s own artistic control, instead
distributing it among a collective of
listening and expressing performers,
be they present or tele-present, musical beings or meta-musical machines.
Part A – Distributed Composition
1.
Polartide—Chris Chafe
Polartide started as a project for
the 2013 Venice Biennale Maldives Pavilion. A team of musicians
and artists banded together at UC
Berkeley’s Center for New Media
(bcnm.berkeley.edu) to create a sound
marker that tracks sea water levels
in coastal cities. A sound marker is
an alarm of sorts that sounds out to
all members of a community within
earshot of a bell tower. The first version worked with simulated bells, and
this version, Spillover, works with a
live audience and a carillonneur.
The carillonneur plays a fixed score
that is a “musification” of global sea-level data. The audience, using the
Spillover Web app, controls the speed
or tempo at which the carillonneur
plays the score. The audience controls
how fast the music plays from note
to note, and metaphorically explores
how our actions affect the rise of
global sea water.
The Polartide team includes:
Chris Chafe, Composer, Stanford University Center for Computer Research in Music and Acoustics (CCRMA)
Rama Gottfried, Musician, Berkeley Center for New Media
Perrin Meyer, Sound Designer, Meyer Sound
Tiffany Ng, Musician, Berkeley Center for New Media
Greg Niemeyer, Artist, Berkeley Center for New Media
The Polartide team would like to
thank the following people for their
support: Monica Lam, June Holtz,
Sharon Eberhart, and The Open
Source Community.
Chris Chafe is a composer, improvisor, and cellist, developing much of
his music alongside computer-based
research. He is Director of Stanford
University’s Center for Computer
Research in Music and Acoustics
(CCRMA). At IRCAM (Paris) and
The Banff Centre (Alberta), he pursued methods for digital synthesis,
music performance, and real-time
Internet collaboration. CCRMA’s
SoundWIRE project involves live concertizing with musicians the world
over. Online collaboration software, including JackTrip, and research into latency factors continue to evolve. An active performer both online and in person, he reaches audiences in dozens of countries, sometimes at novel venues.
A simultaneous five-country concert
was hosted at the United Nations
in 2009. Chafe’s works are available
from Centaur Records and various
online media. Gallery and museum
music installations are into their
second decade with “musifications”
resulting from collaborations with
artists, scientists, and MDs. Recent
work includes the Brain Stethoscope
project, Polartide for the 2013 Venice
Biennale, Tomato Quintet for the
transLife:media Festival at the National Art Museum of China, and
Sun Shot played by the horns of
large ships in the port of St. John’s,
Newfoundland.
2.
Netrooms: The Long
Feedback—Pedro Rebelo
Netrooms: The Long Feedback is a
participative network piece which
invites the public to contribute to an
extended feedback loop and delay line
across the Internet. The work explores
the juxtaposition of multiple spaces
as the acoustic, social, and personal
environment becomes permanently
networked. The performance consists
of live manipulation of multiple
real-time streams from different
locations that receive a common
sound source. Netrooms celebrates
the private acoustic environment as
defined by the space between one
audio input (microphone) and output
(loudspeaker). The performance of
the piece consists of live-mixing a
feedback loop with the signals from
each stream.
Visuals by Rob King.
Pedro Rebelo is a composer, sound
artist, and performer working primarily in chamber music, improvisation,
and sound installation. In 2002, he
was awarded a PhD by the University
of Edinburgh, where he conducted research in both music and architecture.
His music has been presented in
venues such as the Melbourne Recital
Hall, National Concert Hall Dublin,
Queen Elizabeth Hall, Ars Electronica, and Casa da Música, and at events
such as Weimarer Frühjahrstage für
zeitgenössische Musik, Wien Modern
Festival, Cynetart, and Música Viva.
His work as a pianist and improvisor
has been released by Creative Source
Recordings, and he has collaborated
with musicians such as Chris Brown,
Mark Applebaum, Carlos Zingaro,
Evan Parker, and Pauline Oliveros.
Pedro has recently led participatory
projects involving communities in
Belfast and favelas in Maré, Rio de
Janeiro. This work has resulted in
sound art exhibitions at venues such
as the Metropolitan Arts Centre in
Belfast, Espaço Ecco in Brasília, and
Parque Lage and Museu da Maré in
Rio de Janeiro.
His writings reflect his approach to
design and creative practice in a wider
understanding of contemporary culture and emerging technologies. Pedro
has been Visiting Professor at Stanford
University (2007) and senior visiting
professor at Universidade Federal do
Rio de Janeiro, Brazil (2014). He has
been Music Chair for international
conferences such as ICMC 2008, SMC
2009, and ISMIR 2012. At Queen’s
University Belfast, he has held posts
as Director of Education and Acting
Head of School in the School of Music and Sonic Arts and is currently
Director of Research for the School
of Creative Arts, including the Sonic
Arts Research Centre. In 2012 he
was appointed Professor at Queen’s
and awarded the Northern Bank’s
“Building Tomorrow’s Belfast” prize.
3.
Multiple Issues—The Hub
Multiple Issues is a composite video constructed from legacy footage that Hub member Scot Gresham-Lancaster shot onstage during various American and European performances over the last 25 years. The soundtrack, made from Hub pieces such as “WaxLips” and “Stuck Note”, drives the jump-cut editing, decided algorithmically with the MoviePy Python library and set to trigger at various auditory thresholds. This editing
technique reflects the egalitarian
and cooperative nature of all Hub
collaborations.
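The notes do not give the actual MoviePy calls, but the core mechanism, triggering a cut wherever the soundtrack's amplitude envelope crosses a threshold, can be sketched in plain Python. The function name, envelope values, and threshold below are illustrative assumptions; in practice the envelope would be computed from the video's audio track.

```python
# Sketch: derive jump-cut points from an audio amplitude envelope.
# The envelope values and threshold are hypothetical; a real version
# would read the envelope from the video's soundtrack.

def find_cut_points(envelope, threshold):
    """Return indices where the envelope crosses the threshold upward.

    Each upward crossing marks where the algorithmic editor would
    cut to the next legacy shot.
    """
    cuts = []
    for i in range(1, len(envelope)):
        if envelope[i - 1] < threshold <= envelope[i]:
            cuts.append(i)
    return cuts

# A toy envelope: quiet, loud, quiet, loud again.
envelope = [0.1, 0.2, 0.8, 0.9, 0.3, 0.1, 0.7, 0.6]
print(find_cut_points(envelope, 0.5))  # upward crossings at indices 2 and 6
```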
The Hub, an American “computer
network music” ensemble formed in
1986, consists of John Bischoff, Tim
Perkis, Chris Brown, Scot Gresham-Lancaster, Mark Trayle, and Phil
Stone. The Hub was the first live
computer music band whose members are all designers and builders
of their own hardware and software
instruments.
The Hub grew from the League of
Automatic Music Composers: John
Bischoff, Tim Perkis, Jim Horton,
and Rich Gold. Perkis and Bischoff
modified their equipment for a
performance at The Network Muse
Festival in 1986 at The Lab in San
Francisco. Instead of creating ad hoc wired connections between computers, they decided to use a hub, a general-purpose connection point for network data. This was less failure-prone and enabled richer collaboration. The Hub was the first
band to do a telematic performance,
which took place in 1987 between
the Clocktower and Experimental
Intermedia venues in New York.
Because this represents some of the earliest work in the new live practice of networked music performance, The Hub has been cited as the archetypal network ensemble in computer music.
The Hub’s best-known piece, “Stuck
Note” by Scot Gresham-Lancaster,
has been covered by a number of
network music bands, including the
Milwaukee Laptop Orchestra (MiLO)
and the Birmingham Laptop Ensemble (BiLE). They have collaborated
with the Rova Saxophone Quartet,
Nic Collins, Phil Niblock, and Alvin
Curran. They currently perform
around the world after a seven-year
hiatus that ended in 2004.
4.
Dancing with
Laptops—CLOrk
Dancing with Laptops is an improvisatory collaboration between
Concordia Laptop Orchestra (CLOrk)
and the dance group Le Collab’Art de
Steph B. Twenty laptopists and three
dancers improvised freely without
prescribed compositional or technological restrictions and without an
assigned leader. This performance was the first in a series of interdisciplinary, non-hierarchical improvised
performances designed to develop
listening, dialogical, and performative skills in collaborative settings,
which are typically democratic in
the synchronous (performances) and
the asynchronous (planning, realizing, researching) time frames. After
two Dancing with Laptops rehearsals,
CLOrk members decided to improvise
in response to (rather than leading)
the dancers. Though arguably a hierarchical entrance strategy, it proved to
be effective in generating a conversational setting in which all participants
had opportunities to lead or respond to
others.
The Concordia Laptop Orchestra
(CLOrk) is an ensemble of 20–25
laptop performers that operates in
the framework of a university course
for electroacoustic music majors at
Concordia University in Montreal.
It was established by Eldad Tsabary
in 2011 with a curriculum built
around highly participatory planning,
production, and realization of interdisciplinary and networked laptop
orchestra performances, including
collaborations with a symphonic
orchestra, jazz and chamber ensembles, other laptop orchestras, dancers,
VJs, actors, and various soloists.
CLOrk performances are typically
used as opportunities to investigate
and explore new aesthetic, performative, conceptual, technological,
social, and educational possibilities. Every performance serves as a
research-creation platform for advancing the practice of digital music
performance and our understanding
thereof.
5.
Xenoglossia/Leishmania—
Bill Hsu (interactive
animation), Christopher
Burns (live electronics)
Xenoglossia/Leishmania is a structured audiovisual improvisation,
utilizing live electronics and interactive animations. Video is projected
on stage, above and behind the musicians. The musical and visual
performances are highly interdependent, guided together through the
actions of the performers, automated
real-time analysis of the audio, and
the exchange of networked messages
between the audio and animation
systems.
The Xenoglossia audio software facilitates high-level control of complex
polyphonic output. The performer
initiates multiple simultaneous generative processes, each with distinct
gestural and textural content, then
controls their continuation and development. The software provides
the ability to alter and reshape the
ongoing processes along dimensions
including pitch, rhythm, timbre, and
rate of evolution. The performer
can also clone and reproduce the
behavior of interesting sonorities and
textures, and shape the large-scale
form of the performance using tools
that generate contrast, variation, and
synchronization between processes.
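As a rough illustration of this control model, here is a minimal Python sketch of a performer-facing pool of generative processes that can be initiated, reshaped along dimensions such as pitch and rate, and cloned. The class names, parameter dimensions, and methods (`initiate`, `reshape`, `clone`) are assumptions for illustration, not the actual Xenoglossia software.

```python
# Sketch of high-level control over multiple generative processes.
# All names and parameters are illustrative assumptions.
import copy
import itertools

class GenerativeProcess:
    _ids = itertools.count(1)  # unique id per process

    def __init__(self, pitch_center=60, rate=1.0, timbre="pure"):
        self.pid = next(self._ids)
        self.params = {"pitch_center": pitch_center,
                       "rate": rate, "timbre": timbre}

    def reshape(self, **changes):
        # Alter the ongoing process along one or more dimensions.
        self.params.update(changes)

class ProcessPool:
    def __init__(self):
        self.processes = []

    def initiate(self, **params):
        p = GenerativeProcess(**params)
        self.processes.append(p)
        return p

    def clone(self, process):
        # Reproduce the behavior of an interesting sonority.
        twin = copy.deepcopy(process)
        twin.pid = next(GenerativeProcess._ids)
        self.processes.append(twin)
        return twin

pool = ProcessPool()
a = pool.initiate(pitch_center=48, rate=0.5, timbre="granular")
b = pool.clone(a)
b.reshape(rate=2.0)  # vary the clone's rate of evolution
print(len(pool.processes), a.params["rate"], b.params["rate"])
```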
Leishmania is an interactive animation environment that visually
resembles colonies of single-cell organisms in a fluid substrate. Each
cell-like component has hidden initial connections to and relationships
with other components in the environment. The colonies evolve and
“swim” through the substrate, based
on a combination of colonial structure and inter-relationships and flows
in the fluid substrate that might be
initiated by gestural input. Protean,
organic-looking shapes emerge and
evolve in the system in a highly
unpredictable manner; the colonies
alternately congeal into relatively
well-defined forms, or disperse into
chaos. The system resembles an
abstract painting environment; a
gestural interface sets the fluid substrate in motion and influences the
behavior of the colonies of cell-like
components.
These two systems communicate
with one another in a variety of
ways. The animation is influenced by
the real-time analysis of audio from
Xenoglossia. High-level tempo, spectral, and other features are extracted
and sent via Open Sound Control to
the animation environment. Simple
and overly obvious mappings of sound
to visual parameters are avoided, but,
as can be observed in the video
clips provided later, the audio clearly
affects the overall coherence and
behavioral trends of the colonies.
The systems also exchange messages over a network interface.
Xenoglossia conveys information
about phrase-level timing and formal
evolution to the animation environment. In turn, Leishmania sends
visual descriptors regarding the density and position of cell clusters to
Xenoglossia, influencing the rhythmic density, sonic character, and the
coordination of audio layers. The
result is a closed loop of high-level
descriptive information between the
two systems. Hence, we are improvising with our respective generative
systems; in addition, each system
monitors and is influenced by the
behavior of the other.
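The closed loop of high-level descriptors described above can be sketched as two objects that repeatedly exchange features. The field names and update rules here are illustrative assumptions (the real systems exchange such descriptors over Open Sound Control):

```python
# Sketch of the closed descriptor loop between the two systems.
# Field names and update rules are illustrative assumptions.

class AudioSystem:
    def __init__(self):
        self.rhythmic_density = 0.5

    def descriptors(self):
        # High-level audio features the animation listens to.
        return {"tempo": 120, "spectral_flux": 0.3}

    def receive(self, visual):
        # Visual cell density nudges the audio's rhythmic density.
        self.rhythmic_density = 0.5 * (self.rhythmic_density
                                       + visual["cell_density"])

class AnimationSystem:
    def __init__(self):
        self.coherence = 0.2

    def descriptors(self):
        # Visual descriptors sent back to the audio system.
        return {"cell_density": 0.8, "cluster_position": (0.4, 0.6)}

    def receive(self, audio):
        # Audio features steer the colonies' overall coherence.
        self.coherence = min(1.0, self.coherence + audio["spectral_flux"])

audio, anim = AudioSystem(), AnimationSystem()
for _ in range(2):  # two rounds of mutual influence
    anim.receive(audio.descriptors())
    audio.receive(anim.descriptors())
print(round(audio.rhythmic_density, 3), round(anim.coherence, 2))  # 0.725 0.8
```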
Bill Hsu works with electronics
and real-time animation systems.
He is interested in complex generative systems, inspired by natural
processes, that interact with live performers. He has built systems, tools,
installations and compositions in
collaboration with Peter van Bergen,
Chris Burns, John Butcher, James Fei,
Matt Heckert, Lynn Hershman, Paula
Levine, Jeremy Mende, and Gino
Robair. He has recently performed
and presented work at the Blurred
Edges Festival 2014 (Hamburg), Zero
One Garage (San Jose), Yerba Buena
Center for the Arts (San Francisco),
San Francisco Electronic Music Festival 2013, ACM Creativity and
Cognition 2013 (Sydney), and NIME
2013 (Daejeon and Seoul). He teaches
and does research in the Department
of Computer Science at San Francisco
State University.
Christopher Burns is a composer and
improviser developing innovative
approaches to musical architecture.
His work emphasizes trajectory, layering and intercutting a variety of
audible processes to create intricate
forms. The experience of density is
also crucial to his music: His compositions, which often incorporate
materials that pass by too quickly to
be grasped in their entirety, present
complex braids of simultaneous lines
and textures. Several recent projects
incorporate animation, choreography,
and motion capture, integrating performance, sound, and visuals into a
unified experience.
Burns’ work as a music technology researcher shapes his work
in both instrumental chamber music and electroacoustic sound. He
writes improvisation software incorporating a variety of unusual user
interfaces for musical performance
and exploring the application and
control of feedback for complex and
unpredictable sonic behavior. In
the instrumental domain, he uses
algorithmic procedures to create
distinctive pitch and rhythmic structures and elaborate them through
time. Burns is also an avid archaeologist of electroacoustic music,
creating and performing new digital
realizations of music by Cage, Ligeti,
Lucier, Stockhausen, and others.
His recording of Luigi Nono’s La Lontananza Nostalgica Utopica Futura
with violinist Miranda Cuckson was
named a “Best Classical Recording of
2012” by The New York Times.
A committed educator, Burns
teaches music composition and technology at the University of Wisconsin,
Milwaukee. Previously, he served as
the Technical Director of the Center
for Computer Research in Music
and Acoustics (CCRMA) at Stanford
University, after completing a doctorate in composition there in 2003.
He has studied composition with
Brian Ferneyhough, Jonathan Harvey,
Jonathan Berger, Michael Tenzer, and
Jan Radzynski.
Burns is also active as a concert producer. He co-founded and
produced the Strictly Ballroom contemporary music series at Stanford
University from 2000 to 2004, and has
contributed to the sfSound ensemble
in the San Francisco Bay Area since
2003. Since 2006, he has served as the
artistic director of the Unruly Music
festival in Milwaukee.
Part B – Musical Metacreation
Curator’s Note
The “musical
metacreation” concert event was
recorded by Cafe OTO, and received
funding from the Design Lab at the
University of Sydney. It was further
supported by NIME 2014 (Goldsmiths) as a satellite event, which fed
into a musical metacreation workshop presented at NIME by Bown, Eigenfeldt, and Philippe Pasquier.
1.
Paul Hession—drums,
Isambard
Khroustaliov—software
Being inside this cyclotron of
atomized information from my
own vantage point produces a palpable sense of vertigo. A feeling
that it could be anything in any
order by anyone at any time for
any reason. Everything pointing
in all directions quaquaversally
but arriving at no destination.
And its effect is a cancellation of
affect. A feeling like Baudrillard’s
screen stage of blank fascination
has reached its terminal phase
and all previous depths are collapsing into an endless vista of
dazzling surface play.
—Eric Lumbleau of Mutant Sounds, quoted online at www.theawl.com/2012/11/the-rise-and-fall-of-obscure-music-blogs-a-roundtable

The piece employs a computer model of a penguin, some cellular automata, and analysis-driven concatenative synthesis to manifest and interrogate this mal d’archive.
2.
The Indifference Engine
versus Paul Hession (software
by Arne Eigenfeldt)
My software is often built around
the concept of negotiation, in which
virtual musical agents attempt to
come to some understanding in
terms of what they want to achieve
musically, and how they try to get
there. This can be translated into
the notion of desires and intentions.
In this particular work, the virtual
agents have to deal with a Paul
Hession, who has his own desires
and intentions, unknown to them.
The agents must decide whether
to try to follow the live performer,
or continue with their own plans.
To make things more complicated,
each agent is given only a short
“view” of the outside world (a quarter
second, every two seconds) in order to
form their individual beliefs of what
the performer is doing. Since these
beliefs will often be contradictory, the
agents end up spending a lot of time
arguing, resulting in the occasional
indifference to the live performer.
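The effect of these brief "views" can be sketched as follows. The activity stream and belief rule are illustrative assumptions, but the sampling ratio (one quarter-second slot out of every two seconds, i.e., one slot in eight) follows the description above:

```python
# Sketch: each virtual agent "hears" only one quarter-second slot out
# of every two seconds, so agents sampling at different offsets form
# contradictory beliefs about the performer.

def agent_belief(performer_activity, offset, period=8):
    """Sample the activity stream at one slot per period (the agent's
    'view') and classify the performer as 'busy' or 'sparse'."""
    samples = [performer_activity[i]
               for i in range(offset, len(performer_activity), period)]
    mean = sum(samples) / len(samples)
    return "busy" if mean > 0.5 else "sparse"

# A performer alternating dense and sparse playing (1 slot = 0.25 s).
activity = [0.9, 0.8, 0.1, 0.1, 0.9, 0.7, 0.2, 0.1] * 4

# Agents sampling at different offsets reach contradictory beliefs.
beliefs = [agent_belief(activity, offset) for offset in range(4)]
print(beliefs)  # ['busy', 'busy', 'sparse', 'sparse']
```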
3.
Paul Hession—drums, Doug
Van Nort—FILTER system
This piece presents the Freely Improvising Learning and Transforming
Evolutionary Recombination (FILTER) system, in an improvised duo
with percussionist Paul Hession.
The project explores themes such as
sonic gestural understanding, stylistic tendencies, textural shifts and
transformations of the lived episodic
memory as it develops in the moment of performance. The work was
born from a desire to reflect upon,
and perhaps model, my own human performance practice with my
Granular-feedback Expanded Instrument System (GREIS), wherein I often
capture and transform the musical
streams from other performers on the
fly.
4.
Zamyatin (software by Oliver
Bown) with Finn Peters (sax)
Zamyatin is part of an ongoing study
into software systems that act in performance contexts with autonomous
qualities. The system comprises an
audio analysis layer, an inner control
system exhibiting a form of complex dynamical behavior, and a set
of “composed” output modules that
respond to the patterned output from
the dynamical system. The inner system consists of a bespoke “Decision
Tree” that is built to feed back on
itself, maintaining both a responsive
behavior to the outside world and
a generative behavior, driven by its
own internal activity. The system
has been evolved using a database of
previous work by the performer, to
find interesting degrees of interaction between this responsivity and
internal generativity. Its output is
“sonified” through different output
modules, mini generative algorithms
composed by the author. Zamyatin’s
name derives from the Russian author whose dystopian vision included
machines for systematic composition that removed the savagery of
human performance from music.
Did he ever imagine the computer
music free-improv of the early 21st
century?
5.
Finn Peters—sax, Nick
Collins—FinnSystem
This is the second outing for FinnSystem, a live musical agent originally
born on 14 April 2012. The agent was
educated on a corpus of Finn Peters’
sax and flute playing. While Finn will
have developed new techniques in
the intervening two years, the system
remains frozen on an earlier version
of himself; so Finn will be encountering the agent at an interesting
remove via a previous iteration of
himself.
6.
Finn Peters—sax, Shlomo
Dubnov and Greg
Surges—software
This work explores a novel type of
interaction between a live musician
and a computer that was pre-trained
to improvise on a known, different
piece of music. While each partner
in the human-machine duo is free
to improvise on its own materials,
they both listen to each other, coming in and out of sync and creating
a human-machine musical dialog
in a dynamic and often unexpected
mechanically driven plot. This piece
is the next step in the development
of the Audio Oracle method that
adds a “listening” component to the
improvisation process. The Audio
Oracle analyzes repetitions in music
and uses them to create variations
in the same style. Moreover, during
the improvisation, the computer tries
to match its choice of improvisation materials to those of the live
musician. From time to time, the
computer also imitates the live musician by mirroring the ambiguity of
his or her style, thus alternating between
sections of contrasting dialog and a
machine-augmented “imitative” solo
performance.
[This work is based on research
on stylistic modeling carried out by
Gerard Assayag and Shlomo Dubnov
and on research on improvisation
with the computer by G. Assayag,
M. Chemillier, G. Bloch, and Arshia
Cont (aka the OMax Brothers) in
the Music Representations Group at
l’Institut de Recherche et Coordination Acoustique/Musique (IRCAM).]
8.
Finn Peters/Paul Hession/the
Matt Yee-King simulator
The Matthew Yee-King simulator
attempts to model and reproduce the
improvisational behavior of Matthew
Yee-King. The performance begins
with the real Matthew manipulating
two sampling machines and a set of
effects implemented in the SuperCollider environment, controlled via
an Akai MPD24 MIDI controller. A
probabilistic model of the sequence
of control data he generates is built in
real time. When Matthew is satisfied
that he has demonstrated a range of
interesting and appropriate control
data patterns to the system, he flicks
the system to “generate” mode and
steps away. The model is then used
to autonomously control the samplers and effects for the rest of the
performance.
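A minimal sketch of this idea, assuming a first-order Markov model over named control events (the event names and training sequence are hypothetical, and the real system models MIDI controller data rather than strings):

```python
# Sketch: learn a first-order Markov model of a control-event sequence,
# then switch to "generate" mode and let the model drive the instrument.
import random
from collections import defaultdict

class ControlModel:
    def __init__(self):
        # For each event, the list of events observed to follow it.
        self.transitions = defaultdict(list)

    def observe(self, sequence):
        # Build the probabilistic model from demonstrated control data.
        for a, b in zip(sequence, sequence[1:]):
            self.transitions[a].append(b)

    def generate(self, start, length, rng):
        # "Generate" mode: walk the learned transitions autonomously.
        out = [start]
        for _ in range(length - 1):
            choices = self.transitions.get(out[-1])
            if not choices:
                break
            out.append(rng.choice(choices))
        return out

model = ControlModel()
model.observe(["pad1", "fx_on", "pad2", "fx_off", "pad1", "fx_on"])
print(model.generate("pad1", 4, random.Random(0)))
# ['pad1', 'fx_on', 'pad2', 'fx_off']
```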
Finn Peters (sax, flute) has worked
with such pioneers as Frederick
Rzewski, Bill Frisell, DJ Spinna, Sam
Rivers, and Sa-Ra creative partners.
He has been involved in upwards of
200 recordings for other artists, and
has released a number of his own
recordings. In the words of Straight
No Chaser magazine, he is “the
blazing definition of a seriously heavy
player.” Awards and recognition
include the London Young Jazz
Musician Award, the BBC Jazz
Awards Best Band, the Jerwood Rising
Stars program, a nomination for the
Paul Hamlyn Composition Award,
and the Radio 1 Worldwide Awards
“Best Session” category. Throughout
2010 Peters worked on a new electroacoustic project entitled “Music of
the Mind” which deals with brain
waves in music and new forms of
algorithmic composition and improvisation. The album was described by
the Independent (London) as “nothing
like you have ever heard before.”
7.
piano prosthesis—Michael Young
This is one of a developing series of duos for a human and a machine performer. Both “musicians” adapt to each other through mutual listening (i.e., via audio only) and response as the performance develops. The human’s improvisation is encoded by the computer through statistical analysis of extracted features, catalogued in real time. Each observation made by the computer is assigned to a set of musical output behaviors. Recurring features of the player’s improvisation can then be recognized by the computer. The machine “expresses” this recognition by developing and modifying its own musical output, just as another player might.

Part B Bios
Paul Hession (drums) was born in
Leeds in 1956. He took up drumming at the age of 15 and since then
has played and broadcast in many European countries, as well as in Argentina, Mexico, Cuba, the USA, and Canada. He has
played with many of the major figures on the free music scene, such
as Peter Brötzmann, Derek Bailey,
Evan Parker, Lol Coxhill, Sunny Murray, Marshall Allen, Frode Gjerstad,
Peter Kowald, Joe McPhee, Borah
Bergman, Otomo Yoshihide, and his
old friends Alan Wilkinson, Simon
Fell, Mick Beck, Hans-Peter Hiby,
Petter Frost-Fadnes, and Rus Pearson.
Collaborators from a different scene
are Squarepusher and DJ/producer
Paul Woolford. He is known to relish
the interaction of collective music-making, but also responds to the
challenge of solo performance.
Isambard Khroustaliov is the solo
alias of electronic musician and composer Sam Britton from the groups
Icarus, Fiium Shaarrk, and Leverton
Fox. Britton trained as an architect
at the Architectural Association in
London but took up music after securing a recording contract as an
undergraduate. Since 1997 he has
recorded and released music for a series of independent electronic music
labels in the UK and the US (Output
Recordings, Temporary Residence,
Domino and The Leaf Label, among
others) and performs internationally
with his various groups, solo, and in
collaboration with numerous improvising musicians and ensembles. In
2006 he completed a master’s course in electronic music and composition at IRCAM in Paris, and in 2011
worked with the London Sinfonietta
as part of their Writing the Future
commissioning scheme.
Arne Eigenfeldt is a composer of
live electroacoustic music and a researcher into intelligent generative
music systems. His music has been
performed around the world, and his
collaborations range from Persian
tar masters to contemporary dance
companies to musical robots. He has
presented his research at conferences
and festivals such as ICMC, SMC,
ICCC, EMS, EvoMusArt, GECCO,
and NIME. He teaches music technology at Simon Fraser University
and is the co-director of the Metacreation Agent and Multi-Agent Systems
(MAMAS) lab.
Doug Van Nort is a sonic artist and
researcher whose work is concerned
with the complex and embodied nature of listening, improvisation both
with and by machines, and the phenomenology of time consciousness
and of collective co-creation. His
research takes the form of scholarly
writings on these phenomena, composed and improvised electroacoustic
music, pieces of sound-focused art,
and digital artifacts designed and
developed in these pursuits. Van
Nort’s work is a synthesis of his
background in mathematics, media
arts, music composition, and performance. Van Nort has recently joined
the School of Arts, Media, Performance and Design at York University
in Toronto, continuing his work
in digitally mediated performance.
He often performs solo as well as
with a wide array of artists spanning
musical styles and artistic media.
Regular collaborators include Pauline
Oliveros and Al Margolis, and he
also works as a member of the Composers Inside Electronics. His music
appears on several labels (e.g., Pogus,
Deep Listening, Attenuation Circuit,
and Zeromoon), and his writings on
sound/performance/electroacoustics
have been published by a number
of outlets (e.g., Organised Sound,
Leonardo Music Journal, and Journal of New Music Research). See
www.dvntsea.com.
Ollie Bown is a researcher, programmer, and electronic music maker. He
creates and performs music as one
half of the duo Icarus, and he performs
regularly as a laptop improviser in
electronic and electroacoustic ensembles. He has worked with musicians
such as Tom Arthurs and Lothar
Ohlmeier of the Not Applicable
Artists, Brigid Burke, Adem Ilhan, Peter Hollo, and Adrian Lim-Klumpes.
Bown has designed interactive sound
for installation projects by Squidsoup
and Robococo, at venues such as the
Powerhouse Museum in Sydney, Oslo
Lux, the Vivid Festival in Sydney,
and the Kinetica Art Fair in London. In
his research role he was recently local
co-chair of the 2013 International
Conference on Computational Creativity and is on the organizing
committee of the Musical Metacreation Workshop and events series.
Nick Collins is Reader in Composition at Durham University. His
research interests include live computer music, musical artificial intelligence, and computational musicology, and he is a frequent international
performer as composer-programmer-pianist, from algoraves to electronic
chamber music. He co-edited The
Cambridge Companion to Electronic
Music (Cambridge University Press,
2007) and The SuperCollider Book
(MIT Press, 2011), wrote Introduction to Computer Music (Wiley,
2009), and co-wrote Electronic Music
(Cambridge University Press Introductions series, 2013). Sometimes,
he writes in the third person about
himself, but he is trying to give it up.
Shlomo Dubnov is a Professor in
Music and Computer Science at the
University of California, San Diego
(UCSD). His main research is on
applying statistical and machine
learning techniques to the modeling
of music, stories, and entertainment
media. His work on computational
modeling of style and computer audition has led to the development of
several computer music programs for
improvisation and machine understanding of music. Dubnov studied
composition and computer science in
Jerusalem and served as a researcher
at IRCAM in Paris. He currently
directs the Center for Research in Entertainment and Learning (CREL) at
UCSD’s Qualcomm Institute (Calit2)
and serves as a lead editor of ACM
Computers in Entertainment.
Greg Surges makes electronic music,
software, and hardware. His work has
been released on various labels,
and his research and music have
been presented at multiple festivals
and conferences. He is currently a
PhD student at the University of
California, San Diego. Previously, he
earned an MM in Music Composition
and a BFA in Music Composition
and Technology at the University of
Wisconsin, Milwaukee.
Michael Young is a composer and
researcher currently based at Goldsmiths, University of London, and
will soon take up the post of Pro-Vice
Chancellor (Teaching and Learning)
at De Montfort University. He is co-founder of the EPSRC-funded “Live
Algorithms for Music” network, established in 2004,
which investigates autonomous systems for live music creation. He studied at the Universities of Oxford and
Durham. The “prosthesis” series has
been in development since 2007 and includes versions for clarinet, trio, flute,
oboe, and piano. Chris Redgate’s latest
CD release (Electrifying Oboe, Metier
Records) includes two versions of
“oboe prosthesis.” For more info and
audio, visit www.michaelyoung.info.
Matthew Yee-King is a lecturer in
creative computing at Goldsmiths
College as well as a computer music
composer, performer, and researcher.
His work covers a range of styles,
from agent-based live improvisers to more conventional electronic music.
Recent activities include chairing
the workshops at the 2012 SuperCollider Symposium, including a live
algorithm hackathon, and extensive
involvement in the Arts Council–
funded Music of the Mind project
alongside composer Finn Peters. He
has been involved in significant public engagement activities, presenting
his arts/science crossover projects at
Science Festivals around the UK, as
well as on national television and
radio. He has performed live internationally and nationally as well
as recording many sessions for BBC
Radio. In the past his solo music has
been released on electronic music
imprints such as Warp Records and
Richard D. James’s Rephlex Records. Past collaborators
include Jamie Lidell, Tom Jenkinson
(Squarepusher), Finn Peters, and Max
de Wardener.
Supplementary Audio/Video
Examples for Articles
from CMJ 38:4
1.
An Intuitive Synthesizer of
Continuous Interaction
Sounds: Rubbing, Scratching,
and Rolling—Simon Conan,
Etienne Thoret, Mitsuko
Aramaki, Olivier Derrien,
Charles Gondre, Richard
Kronland-Martinet, and
Sølvi Ystad
This video presents an intuitive,
real-time synthesizer for rubbing,
scratching, and rolling interaction
sounds. The synthesizer allows the
user to control each type of interaction
independently and to morph between
them, as well as to control the
properties of the object, such as shape
and material, and to morph between
material categories. The first part
of the video presents intuitive control
of the properties of an impacted,
resonant object. The second part
demonstrates intuitive control of the
interaction itself, with the synthesizer
driven by a graphics tablet.
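Synthesizers of this family typically follow an action-object paradigm, in which an interaction signal excites a model of a resonant object. As a rough, generic sketch of the resonant-object half (not the article's implementation; the mode frequencies, amplitudes, and decay rates below are invented for illustration), an impact can be rendered as a bank of exponentially decaying sinusoids:

```python
import numpy as np

def modal_impact(freqs, amps, decays, dur=1.0, sr=44100):
    """Sum of exponentially decaying sinusoids: a modal resonator
    struck once at t = 0. freqs in Hz, decays in 1/s."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for f, a, d in zip(freqs, amps, decays):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

# Hypothetical "wooden bar": a few inharmonic modes with fast decay.
sound = modal_impact([220, 605, 1180], [1.0, 0.5, 0.25], [8, 12, 20])
```

Morphing between materials then amounts to interpolating these modal parameters, and the interaction type is shaped by replacing the single impact with a continuous excitation signal.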
2.
Cellular Automata
Histogram Mapping
Synthesis—Jaime Serquera
and Eduardo Reck Miranda
See the article’s appendix for a description of the provided examples.
3.
Sound Synthesis of a
Gaussian Quantum Particle
in an Infinite Square Well—
Rodrigo F. Cadiz and Javier
Ramos
Audio Example 1
First 90 seconds of the spectrogram of
the full revival time. The spectrum
returns to its initial form towards the
end of the revival time, as predicted
by Equation 29, and exhibits a mirror
revival at half the revival time, as
predicted by Equation 31. (Please
refer to Figures 7 and 8.)
Audio Example 2
Spectrogram of a single bounce. The
slopes of the frequency bands change
according to the direction of the
wavepacket. When a bounce occurs,
approximately at time t = 7, a change
in slope happens. (Please refer to
Figure 9.)
Audio Example 3
Spectrogram for a linear increase
in α from 1.7 to 400. The initial
frequency band gets narrower around
the value associated with p0, in this
case approximately 3,200 Hz. (Please
refer to Figure 10.)
Audio Example 4
Spectrogram for a linear increase
in the mass from 2 to 2.06. The
phase bands get further apart in
frequency as the wave packet’s group
velocity diminishes. (Please refer to
Figure 11.)
Audio Example 5
Spectrogram for a linear increase in
the initial momentum from −0.2 to
0.43. The frequency band around p
moves along with it, from a center
frequency of 70 Hz in the case p =
−0.2 to 3,500 Hz when p = 0.43.
(Please refer to Figure 12.)
Audio Example 6
Spectrogram for a stepwise increase
in the length of the well, to values of
600, 700, and 800, at times t = 0,
t = 2, and t = 4. The frequency band
gets narrower and moves towards
the lower frequencies. As changing
the length of the well implies a
full recalculation of the quantum
particle’s dynamics, audible clicks
appear in the sound signal when the
length is changed in real time. (Please
refer to Figure 13.)
Audio Example 7
Spectrum for linear increase of N
from 0 to 60. The cutoff frequency
depends directly on N. Because N is an
integer in this implementation, clicks
are produced when this parameter is
changed continuously. (Please refer to
Figure 14.)
Audio Example 8
Spectrum for change in the mass from
0 to 1 (at time t = 2) and back to 0
(at time t = 13). The ordering of the
frequencies is affected when the
mass is varied around small values.
The behavior of the momentum
distribution is chaotic when the mass
is near zero, and it is very organized
and similar to a sawtooth signal when
the mass is near one. (Please refer to
Figure 15.)
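The exact mapping behind these examples depends on the article's equations and parameters, which are not reproduced here. As a loose illustration of the underlying idea only: the eigenfrequencies of an infinite square well scale as n squared, so a Gaussian wavepacket can be sonified as a Gaussian-weighted sum of partials at f1 * n**2. The sketch below is a toy version under that assumption; n0, sigma, and f1 are invented parameters, not the article's:

```python
import numpy as np

def well_tone(n0=10, sigma=3, n_max=40, f1=55.0, dur=2.0, sr=44100):
    """Sonify a Gaussian wavepacket in an infinite square well:
    each eigenstate n contributes a partial at f1 * n**2 (energies
    scale as n^2), weighted by a Gaussian envelope centered on n0."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for n in range(1, n_max + 1):
        w = np.exp(-0.5 * ((n - n0) / sigma) ** 2)  # Gaussian weight
        f = f1 * n ** 2
        if f < sr / 2:                              # skip aliased partials
            out += w * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

tone = well_tone()
```

Because the partial spacing grows quadratically with n, narrowing sigma (a more localized momentum distribution) concentrates energy into a narrower frequency band, which is qualitatively the behavior described in Audio Example 3.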
4.
Sound Synthesis with
Auditory Distortion
Products—Gary S. Kendall,
Christopher Haworth, and
Rodrigo F. Cadiz
Please see the article’s appendix for
a description of the provided examples.