PORTFOLIO OF ORIGINAL ELECTROACOUSTIC COMPOSITIONS
A thesis submitted to the University of Manchester for the degree of Doctor
of Philosophy in the Faculty of Humanities
2016
Daniel Saul
SCHOOL OF ARTS, LANGUAGES AND CULTURES
Contents
Written commentary
List of Figures ………………………………………………………………………….. 5
List of Tables and Diagrams ………………………………………………………… 6
Portfolio of Musical Works …………….……………………………….….………… 7
USB Content …………………………………………………….……………………… 8
Abstract ……………………………………….………………………………………… 9
Declaration/Copyright Statement ………………………………………….……… 10
Technical Information (surround works)……………..………………….……….. 11
Acknowledgements ….……………………………………………………………… 13
Introduction ..……………………………………………………………………….… 15
CHAPTER 1. FRICTIONS/STORMS: SOURCE BONDING AND GESTURAL
SURROGACY AS COMPOSITIONAL AIDS
1.1 Compositional methodology ……………………………………….……...18
1.2 Source materials: spectromorphological archetypes and variants .…. 19
1.3 Transformation, gestural surrogacy and inferred sources ….….……... 21
1.4 The function of space …………………………………………….………. 22
1.5 Structural analysis……………………………..……………….……….… 22
CHAPTER 2. RISE: STRUCTURAL FUNCTIONS, EXPECTATION AND SPACE
IN THE STEREO IMAGE
2.1 Overview …………………………………………………………………… 26
2.2 Transformation of source recordings ..…………………………….….… 26
2.3 Space-forms in Rise ……………………………………………………… 28
2.4 Spectromorphological expectation ….………….……………….……… 30
2.5 Structure and function attribution ………………………………….……. 33
2.6 Alternate 10-channel stem version ……………………………………… 34
CHAPTER 3. GLITCHES/TRAJECTORIES: BEHAVIOUR, MOTION AND
GROWTH PROCESSES, AND METHODOLOGIES FOR 8-CHANNEL SOUND
SPATIALISATION
3.1 Overview …………………………………………………….……….……. 36
3.2 Blurred sources/blurring the gesture–texture continuum …………..…. 37
3.3 Live electronics development ………………………………………….… 39
3.4 Vectorial space in the 8-channel image ..…………………………….… 41
3.5 Behaviour, and motion and growth processes ..……………………..… 43
CHAPTER 4. TRANSMISSIONS/INTERCEPTS: STRUCTURAL COHERENCE IN
LONG-FORM AND METHODOLOGY FOR CONCEPTUAL WORK
4.1 Concept ..………………………………………………………..….……… 50
4.2 Extrinsic associations/intrinsic spectromorphologies …..….………….. 51
4.3 Utterance and voice transformation ..…………………………..…….…. 52
4.4 Tonality, glissando and structural functions ..……………………..….… 53
4.5 Aleatoric development ..…………………………………………..……… 54
4.6 Typology of sounds ..……………………………………………….…..… 54
4.7 Structural analysis (spectromorphology and space-form) ……….…… 58
CHAPTER 5. REDUCTIONS/EXPANSES: SPATIAL TRANSCENDENCE AND
STRATEGIES FOR SOUND DIFFUSION PERFORMANCE
5.1 Overview …………………………………………………………..…….… 71
5.2 Source materials and transformation ……………………….……….…. 72
5.3 Structure and development ……………..…………….…………………. 73
5.4 Embracing superimposed space: considerations for
sound diffusion performance ………………….………………..……..…. 74
5.5 Diffusion strategy 1 (large-scale concert system) ………………..…… 79
5.6 Diffusion strategy 2 (small-scale concert system) …………….………. 82
CHAPTER 6. ITERATION/BANGER: COMBINING ALGORITHMIC,
GENERATIVE AND ALEATORY PROCEDURES FOR MULTIPLE
PERFORMABLE OUTCOMES
6.1 Overview ………………………………………………………..…….…… 84
6.2 Development in Max and Live/signal path overview ……………….…. 86
6.3 Genre hybridity and stylistic similitude …………………………….……. 89
6.4 Development/extraction of materials and organisation ..……………… 90
6.5 Structure ..……………………………………………………………..…… 91
6.6 Multiple performable outcomes/additional performance functionality .. 93
CHAPTER 7. CONCLUSIONS AND CONTRIBUTION TO RESEARCH
7.1 The music ………………………………………………………………….. 95
7.2 Responses to research questions ………………………………………. 96
7.3 Research contribution: harnessing aleatoric elements in
electroacoustic composition ………………………..……………………. 98
7.4 The future ..………………………………………………….…….……… 100
Bibliography …………………………………………………………………….…… 102
Discography .……………………………………………….………….……….…… 106
Appendix A: Programme notes and key performances ………………….….. 108
Appendix B: Iteration/Banger technical information ………………………… 114
Appendix C: (Audio appendix, USB drive)
Final word count: 19,114
List of Figures
Figure 1: Morphological models ………………………………………….…….….… 19
Figure 2: Space-forms in the stereo image ………………………………………… 28
Figure 3: Sonogram displaying spectral rise/fall glissando in Rise, Section 5 …. 29
Figure 4: Gesture units/spectromorphological variants .……………………….…. 30
Figure 5: Rise sonogram and waveform representation to timeline ………..…… 32
Figure 6: Three visual analogies of audio processing ..…………………………… 37
Figure 7: Proximate panoramic vectorial movement …………………………..….. 41
Figure 8: Vectorial space passing through egocentric space …………………..… 41
Figures 9 & 10: Vectorial motions ……………………………………………..….…. 41
Figure 11: Perspectival trajectories in circumspace ………………………….……. 42
Figure 12: Transmissions/Intercepts sonogram (Parts 1 and 2) ………….…..….. 56
Figure 13: Transmissions/Intercepts sonogram (Parts 3 and 4) …………………. 57
Figure 14: Morphological stringing of sound materials in
Transmissions/Intercepts Part 1 ………………………..………….… 59
Figure 15: Spectral development in Transmissions/Intercepts Part 2
(09:35 - 12:50) ………….….……….……………………….………… 64
Figure 16: Spectral development in Transmissions/Intercepts Part 3
(15:20 - 19:10) ………………………………………………….….…..… 67
Figure 17: MANTIS 48-channel performance system and four possible
8-channel groupings …………..…….……………………..…….….. 77
Figure 18: Stage 8 quasi-ring (on-stage positions and directions) ………………. 78
Figure 19: Inner 4 (8-channels to 4 loudspeakers and directions) ………………. 78
Figure 20: Reductions/Expanses structural overview/MANTIS loudspeaker
group assignments ………………………………..………..…..…… 79
Figure 21: Reductions/Expanses channel assignments for a small-scale
12 x loudspeaker diffusion system ……….……………………….. 81
Figure 22: Iteration/Banger software and hardware setup/signal path ..…..….… 87
Figure 23: Iteration/Banger structural overview…..……………………….……….. 92
Figure 24: A mutually beneficial methodology for the merging of
acousmatic composition and live electronics practices………… 99
Figure 25: Reshaping phasor~ ramp signal via log~ …………………..……..…. 114
Figure 26: Logarithmic output of the phasor~ signal (four examples)………..… 116
Figure 27: MIDI controller assignment overview …………………………….…… 120
List of Tables
Table 1: Source materials and related spectromorphologies developed for
Frictions/Storms …………………………………..……………….. 20
Table 2: Typology of sounds in Transmissions/Intercepts ……………..….………. 55
Table 3: Figure 15 pitch references ……………………………………………….… 63
Table 4: Max Patch 1, 8-channel output to Live overview ……………………….. 121
List of Diagrams
Diagram 1: 8-channel loudspeaker setup …………………………………………… 11
Diagram 2: 5-channel loudspeaker setup for Transmissions/Intercepts .…..……. 12
Portfolio of Musical Works
1. Frictions/Storms
(2013)
8-channel fixed media
12:17
2. Rise
(2013)
Stereo fixed media
12:10
3. Glitches/Trajectories
(2014)
8-channel fixed media
11:28
4. Transmissions/Intercepts
(2015)
5-channel fixed media
24:32
5. Reductions/Expanses
(2016)
8-channel fixed media
13:39
6. Iteration/Banger
(2016)
8-channel fixed media
7:51
Total duration: 81:57
USB Content
All files are provided in wav/aif/aiff format (24-bit, 48kHz) unless otherwise stated.

Folder: Electroacoustic works
 - Frictions/Storms: 8-channel and stereo versions, 12:17
 - Rise: stereo, 12:10
 - Glitches/Trajectories: 8-channel and stereo versions, 11:28
 - Transmissions/Intercepts: 5-channel and stereo versions, 24:32
 - Reductions/Expanses: 8-channel and stereo versions, 13:39
 - Iteration/Banger: 8-channel and stereo versions, 7:51

Folder: Appendix C (Audio)
 - Iteration/Banger audio samples for Max Patch 1: 8 x wav files to be placed in the Max file search path, approx. 1 sec each
 - Iteration/Banger (live electronics 8-channel version): 8-channel, 6:41
 - Iteration/Banger (live electronics stereo version): stereo, 6:40
 - Live electronics set 1, 7 January 2014: stereo, 17:09
 - Live electronics set 2, 25 May 2014: stereo, 15:19
 - Reductions/Expanses excerpt (diffusion example): 11-channel (plus read me file), 4:30
 - Rise (stem version): 10-channel (plus read me file), 12:10

Folder: Appendix C (Software)
 - Iteration_Banger_Ableton_Session: folder containing .als file
 - Iteration_Banger_Patch 1: Max patch (.maxpat file)
 - Iteration_Banger_Patch 2: Max patch (.maxpat file)
 - Technical setup read me file: pdf

Folder: Appendix C (Tutorials)
 - Iteration/Banger Max Patch 1 video tutorial: .mov file, 14:39
 - Iteration/Banger Max Patch 2 video tutorial (plus read me file): .mov file (with stereo audio) and additional 8-channel interleaved file (requires playback in a DAW), 11:46
Abstract
This commentary accompanies the portfolio of electroacoustic works realised at
the NOVARS Research Centre, and intends to provide insight into methodologies
for acousmatic composition as researched at the University of Manchester
between 2013 and 2016. Six compositions are presented in order of realisation, as
follows: Frictions/Storms, Rise, Glitches/Trajectories, Transmissions/Intercepts,
Reductions/Expanses, and Iteration/Banger. An analysis of each work in relation to
research-specific topics is provided, adopting Denis Smalley's concepts of
spectromorphology and space-form as appropriate syntax in the elaboration of
compositional methodologies and overall outcomes.
The research focuses primarily on the appropriation of transformed and
synthesised sound materials in acousmatic spatial composition. Resulting works
are intended for presentation in concert via the practice of live sound diffusion
performance. The portfolio documents an arc of development: from working in fixed
media formats, through incorporating live electronics processes into the realisation
of multi-channel compositions, to a final methodological merging of fixed media
studio composition and live electronics performance practices.
Supplementary materials in support of the portfolio and commentary are
provided, including Max patches, video tutorials, technical information and
related audio materials.
Declaration
I hereby declare that no portion of the work referred to in the thesis has been
submitted in support of an application for another degree or qualification of this or
any other university or other institute of learning.
Copyright Statement
i. The author of this thesis (including any appendices and/or schedules to this
thesis) owns certain copyright or related rights in it (the “Copyright”) and s/he has
given The University of Manchester certain rights to use such Copyright, including
for administrative purposes.
ii. Copies of this thesis, either in full or in extracts and whether in hard or electronic
copy, may be made only in accordance with the Copyright, Designs and Patents
Act 1988 (as amended) and regulations issued under it or, where appropriate, in
accordance with licensing agreements which the University has from time to time.
This page must form part of any such copies made.
iii. The ownership of certain Copyright, patents, designs, trade marks and other
intellectual property (the “Intellectual Property”) and any reproductions of copyright
works in the thesis, for example graphs and tables (“Reproductions”), which may
be described in this thesis, may not be owned by the author and may be owned by
third parties. Such Intellectual Property and Reproductions cannot and must not be
made available for use without the prior written permission of the owner(s) of the
relevant Intellectual Property and/or Reproductions.
iv. Further information on the conditions under which disclosure, publication and
commercialisation of this thesis, the Copyright and any Intellectual Property and/or
Reproductions described in it may take place is available in the University IP
Policy (see http://documents.manchester.ac.uk/DocuInfo.aspx?DocID=487), in any
relevant Thesis restriction declarations deposited in the University Library, The
University Library’s regulations (see http://www.manchester.ac.uk/library/aboutus/
regulations) and in The University’s policy on Presentation of Theses.
Technical Information (surround works)
Multi-channel audio and stereo reductions of each portfolio work are provided in
24-bit 48kHz interleaved wav, aif or aiff file formats. Channel assignments to
loudspeaker placements are displayed in the diagrams below:
8-channel setup
Diagram 1: 8-channel loudspeaker setup.
8-channel portfolio works:
Frictions/Storms
Glitches/Trajectories
Reductions/Expanses
Iteration/Banger
5-channel setup
Note: Transmissions/Intercepts may be presented in standardised 5-channel
surround configurations (as in Dolby Digital 5.1), but was composed in the
configuration shown below: channel 2 is a front centre loudspeaker output, while
channels 1, 3, 4 and 5 create a quadraphonic surround setup, i.e. loudspeakers 4
and 5 mirror the positions of loudspeakers 1 and 2. Rather than being positioned
as left and right surround, they are positioned at the rear.
Diagram 2: 5-channel loudspeaker setup for Transmissions/Intercepts.
5-channel portfolio works:
Transmissions/Intercepts
Acknowledgements
I wish to thank my supervisor Professor David Berezan, whose support, guidance
and encouragement over the past five years have been invaluable.
I thank Professor Ricardo Climent for guidance and (in conjunction with Professor
Berezan) his support in securing funding for my research.
I thank my co-supervisor Dr. Kevin Malone and my independent reviewer
Professor Camden Reeves for support, encouragement and valuable insight.
Thanks to technicians Andrew Davison and Jon Tipler for technical support in the
studios throughout my time at NOVARS.
Additionally, I thank all the NOVARS postgraduate students I have been fortunate
enough to work alongside during my time at the University of Manchester.
This research was funded by the Arts & Humanities Research Council, UK.
This work is dedicated to my parents Janice Mary and Charles Dennis Saul, and
my grandmother Brenda Deakin. For your endless support and love, I thank you.
‘The results are profoundly monotonous. Furthermore, all these
noises are identifiable. As soon as you hear them, they suggest
glass, a bell, wood, a gong, iron … I’m giving up on music.’
Pierre Schaeffer, 15 April 1948
‘Always record! Always record!’
Jack Black, 28 November 1997
INTRODUCTION
This portfolio and supporting commentary document a four-year journey of
development as an electroacoustic composer, and are intended to highlight
several core aspects of research through composition. Many of the key concepts
can be identified across the body of work, and each chapter considers one or
more research topics in relation to a specific composition. The writings of Denis
Smalley have been highly influential on my work, specifically the analytical
concepts of spectromorphology 1 and space-form. 2 Both are applied throughout the
commentary as appropriate syntax in delineating compositional methodologies,
musical structures and overall outcomes. Through practice-based research
focused on fixed media or acousmatic composition (intended for playback and/or
performance through multiple loudspeaker configurations), 3 my work addresses
the following questions:
• How can the electroacoustic composer create musical coherence when
employing predominantly abstract sound materials in non-linear musical
structures? 4
• How can relationships between studio-based composition and live electronics
performance practices be merged to develop composed musical outcomes?
• How might aleatoric processes be successfully incorporated into composition
and sound generation techniques?
1
‘The two parts of the term refer to the interaction between sound spectra (spectro-) and the ways
they change and are shaped through time (-morphology).’ Denis Smalley, ‘Spectromorphology:
explaining sound-shapes’, Organised Sound, 1997, 2(2), Cambridge University Press, 107 - 26,
107.
2
‘An approach to musical form, and its analysis, which privileges space as the primary articulator.
Time acts in the service of space.’ Denis Smalley, ‘Space-form and the acousmatic image’,
Organised Sound, 2007, 12(1), Cambridge University Press, 35 - 58, 56.
3
‘According to the definition in Larousse, the Acousmatics were initiates in the Pythagorean
brotherhood, who were required to listen, in silence, to lectures delivered from behind a curtain so
that the lecturer could not be seen. The adjective ‘acousmatic’ thus refers to the apprehension of a
sound without relation to its source.’ Trevor Wishart, ‘Sound Symbols and Landscapes’, in The
Language of Electroacoustic Music, 1986, Macmillan Press Ltd., 41 - 60, 41.
4
‘A sound could be labelled abstract simply through the inability of the listener to ascribe to it any
real or imagined provenance. Many electroacoustic musicians conceive of a continuum between
the ‘abstract’ and ‘referential’ which may function as a micro- or macro-structuring principle, or
determine the overall narrativity of the music. The pairing of terms abstract/referential is also
referred to as intrinsic/extrinsic (by the composer and theorist Denis Smalley, for example).’ Ears
ElectroAcoustic Resource Site. [online] Available at <http://ears.pierrecouprie.fr/spip.php?
article198>, accessed 24 July 2016.
• What approaches might be adopted in order to extend and embellish composed
multi-channel fixed media works through concert presentation, in relation to
contemporary sound diffusion methods?5
• How might tonality be successfully employed alongside abstract sound materials
in acousmatic works?
• What potential might the creation of multiple performable variations of composed
works hold for the composer/performer?
The commentary is structured as follows: Chapter 1, Frictions/Storms, focuses on
source bonding and gestural surrogacy as related to recorded materials and sound
transformations. Chapter 2, Rise, focuses on the role of structural functions in an
acousmatic composition, giving additional consideration to musical expectation
and space in the stereo image. Chapter 3, Glitches/Trajectories, examines
behavioural relationships between sound types and outlines methods for studio
spatialisation techniques, aleatoric development of materials, and the transference
of semi-improvised live electronics performance techniques into composed fixed
media. Chapter 4, Transmissions/Intercepts, investigates coherence in long-form
composition and through detailed musical analysis considers the application of
remote (and synthesised) sound materials within a concept-based musical
framework. Chapter 5, Reductions/Expanses, investigates the transference of a
fixed media composition into multi-channel live performance environments,
explicating methodologies for spatial reinterpretation as achievable through sound
diffusion performance. Chapter 6, Iteration/Banger, explores an alternate approach
to the development of materials, incorporating aleatoric algorithmic procedures in
the generation and organisation of sounds, resulting in a composed work that is
performable through both fixed media and live electronics formats. Additional
consideration is given to stylistic hybridity within the work. Finally Chapter 7
provides conclusions to my research and proposes a compositional methodology
as reached through the convergence of research areas outlined above.
5
Sound diffusion refers to the performance practice of redistributing sound spatially via playback of
a fixed media audio file through multiple loudspeakers, usually presented in a concert hall
environment. Discrete channels of audio (such as channels 1 and 2, or left and right, in the case of
a stereo piece) may be assigned to multiple loudspeakers, and amplitude levels of loudspeakers
are then manually adjusted by way of a mixer or purpose-built control interface, allowing the
potential for immersive or dramatic spatial listening experiences.
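The fader-based redistribution described in this footnote can be modelled as a simple mixing matrix: each source channel feeds every loudspeaker, and the performer's fader positions become per-loudspeaker gain values. The sketch below is purely illustrative (the 4-loudspeaker layout, function name and gain values are invented for demonstration, and do not describe any concert system used in the portfolio):

```python
# Minimal sketch of sound diffusion as a gain matrix: a stereo fixed
# media file is distributed to several loudspeakers, with the
# performer's fader moves modelled as per-loudspeaker gains.
import numpy as np

def diffuse(stereo: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """stereo: (n_samples, 2); gains: (n_speakers, 2) mixing matrix.
    Returns (n_samples, n_speakers) loudspeaker feeds."""
    return stereo @ gains.T

# Hypothetical example: front pair takes L/R directly, rear pair takes
# an attenuated copy (a simple "main/rear" diffusion snapshot).
stereo = np.random.default_rng(0).standard_normal((48000, 2))
gains = np.array([
    [1.0, 0.0],   # loudspeaker 1: front left
    [0.0, 1.0],   # loudspeaker 2: front right
    [0.5, 0.0],   # loudspeaker 3: rear left, attenuated
    [0.0, 0.5],   # loudspeaker 4: rear right, attenuated
])
feeds = diffuse(stereo, gains)
```

In performance the gain matrix would of course vary continuously over time; a static matrix is shown only to make the routing explicit.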
Through processes applied in the production of the portfolio, and through the post-compositional analyses and conclusions provided, I seek to strengthen the
composer–listener relationship via meaningful musical discourse and further the
ongoing development of electroacoustic music and its presentation.
CHAPTER 1. FRICTIONS/STORMS: SOURCE BONDING AND GESTURAL
SURROGACY AS COMPOSITIONAL AIDS
‘One approach to electroacoustic composition is to use source-causes which are intended to be recognised. They are used
precisely because we can recognise them, associate with them
and because they have a reality. Heard sources have a dual
identity, an intrinsic existence (within the context of the musical
work), and an extrinsic existence (in real-world experience outside
the work). In entering the musical work they carry with them their
identities and activities from the world outside. They automatically
have two contexts and are therefore transcontextual.’6
1.1 Compositional methodology
Frictions/Storms (12:17) is illustrative of my approach to acousmatic composition
during the early period of my PhD research, being a bottom-up constructed, 7 multi-channel work themed on the exploration of one or more recognisable sound
objects. 8 Featured source materials are linked through friction, as integral to the
cause behind sound generation. The work is concerned with Smalley’s concepts of
source bonding 9 and gestural surrogacy; 10 as sounds are subjected to
transformation their identities become masked, and through increasing
remoteness imagined sources and causes may be inferred.11 Transformed
materials may suggest (electronic) storm-like shifting weather patterns (hence the
work’s title), and evoke images of trains passing through distant landscapes.
Works inspiring this approach to source materials include David Berezan’s Cyclo
6
Denis Smalley, ‘Defining transformations’, Interface, 1993, 22(4), 279 - 300, 281.
7
‘[…] many composers make bottom-up works, that is, works based on materials they have
assembled which they subsequently manipulate and place in sequences to form structures.’ Leigh
Landy, Understanding the Art of Sound Organisation, 2007, MIT Press, 34.
8
Previous compositions produced during my MusM degree at the University of Manchester include
Blow (2012), a stereo fixed media work made exclusively from saxophone recordings, and Jaws
(2012), an 8-channel fixed media work themed on recordings made of my pet cat.
9
Source bonding: ‘the natural tendency to relate sounds to supposed sources and causes, and to
relate sounds to each other because they appear to have shared or associated origins.’ Smalley,
‘Spectromorphology’, 110.
10
‘The process of increasing remoteness I refer to as gestural surrogacy.’ Ibid., 112.
11
‘We should not think of the gesture process only in the one direction of cause–source–
spectromorphology, but also in reverse – spectromorphology–source– cause. When we hear
spectromorphologies we detect the humanity behind them by deducing gestural activity, referring
back through gesture to proprioceptive and psychological experience in general.’ Ibid., 111.
(2003), Adrian Moore’s Study in Ink (1997) and Manuella Blackburn’s Switched on
(2011), where sounds derived from identifiable sources are reshaped to take on
new and fantastical forms.
1.2 Source materials: spectromorphological archetypes and variants
Figure 1: Morphological models. 12
Recordings were derived from gestural play with three groupings of sound
sources: tiles (clay roofing tiles and ceramic bathroom tiles) dragged across one
another and struck, various sizes of saws and hacksaws (sawing wooden planks),
and bowed violin strings. Additional recordings were made of large plastic bin lids
being slammed shut. Through auditioning of gesture captures, relationships and
contrasts between spectromorphologies were identified in order to consider their
potential for application in a musical context. 13
12
See Denis Smalley, ‘Spectro-morphology and Structuring Processes’, in The Language of
Electroacoustic Music, 1986, Macmillan Press Ltd., 61 - 93, at 68 - 73.
13
‘In the acousmatic studio, the fixity of sounds on the medium allows us to stop and repeat sound,
inviting probing analysis of any sound object and in turn investigating the nature of our responses
to and relationships with sound.’ John Young, ‘Sound morphology and the articulation of structure in
electroacoustic music’, Organised Sound, 2004, 9(1), Cambridge University Press, 7 - 14, 7.
Source material/sound type, with agential activity and resulting spectromorphology:

Tiles (clay and ceramic) (noise-based; internal resonance)
 - Scraped together: graduated onset-closed termination (gesture)
 - Scraped together in rapid succession: stable/unstable iterative continuant (texture)
 - Struck together: resonant attack-decay (gesture)

Saws on wood (noise-based, featuring rising/falling pitch content)
 - Iterative sawing motion: stable/unstable iterative continuant (gestural or textural)
 - Single forward sawing motion: graduated onset-graduated termination (gesture) / graduated onset-closed termination (gesture)

Bowed violin (pitch-based)
 - Iterative bowing motion: stable iterative continuant (texture)

Bin lids (noise-based)
 - Slammed shut: attack-closed termination (gesture)

Table 1: Source materials and related spectromorphologies developed for Frictions/Storms.
Table 1 identifies a selection of spectromorphologies rendered from gestural play
with source materials.14 Figure 1 provides a visual representation of these
spectromorphologies. Scraping tiles together produced a variety of noise-based
gestural spectromorphologies, and striking tiles revealed inherent internal
resonances. Sawing on wood resulted in timbrally discrete noise gestures with
internal rising/falling pitch content (as audible through a single forward or
backward sawing motion: from stasis, to sawing motion, speeding up, slowing
down and terminating back to stasis). Bowing a violin produced a variety of
archetypal sound-shapes and variants, contrasting the noise-based qualities of
tiles and saw sounds due to the instrument producing pitch-based
spectromorphologies. Furthermore, iterative gestural interaction with materials
produced textural continuants. Motions of dragging, rubbing and bowing all
14
The terms graduated onset-closed termination, iterative continuant, and attack-decay, refer to
Smalley’s terminology relating to gesture units and spectromorphological expectation. See Smalley,
‘Spectromorphology’, 112 - 113.
resulted in varied degrees of tension/release gestural energy, acceleration/
deceleration behaviour, and/or textural iteration, forming behavioural links between
spectromorphologies.
1.3 Transformation, gestural surrogacy and inferred sources
In their untreated states the recordings of tiles and saws may be classified as first-order surrogates.15 The violin recordings fit within the second-order surrogate
classification. 16 Through transformation a variety of new spectromorphologies
were produced, classifiable as third-order 17 and/or remote surrogates.18 Third-order transformations open the work (0:00 - 2:13); short gestures of tiles dragged
across one another were subjected to spectral processing (FFT analysis and
resynthesis allowing for interpolation between frequency components over time,
smearing the sonic detailing inherent in the untreated sound), resulting in
extended durations of noise-based textural continuants. The layering of several
variations of this transformation created a noise-based 8-channel texture that may
suggest sandstorm-like weather patterns as a possible source and cause.
Another example is audible at 6:12 - 6:57 where several saw recordings have
been re-pitched, slowed down, filtered and layered; here short pulse-like iterations
(saw teeth dragging on wood) become slower iterative continuants, lower in pitch
and too long in duration to suggest sawing motions, masking the original source
and cause. These new textural noise-based continuants may evoke images of
trains on railway tracks or more industrial sources and causes.
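Re-pitching and slowing a recording in tandem, as described here, is closely related to tape-style varispeed playback: resampling a file at a new rate changes duration and pitch together. Whether this exact method was used is not stated, so the NumPy sketch below (function name and playback ratio are invented) is only an illustration of the general technique:

```python
# Tape-style varispeed: resample by reading the source at a
# fractional rate, so ratio < 1 lengthens the sound and lowers its
# pitch together, as with a slowed tape.
import numpy as np

def varispeed(x, ratio):
    """Play x back at `ratio` times the original speed."""
    idx = np.arange(0, len(x) - 1, ratio)
    base = idx.astype(int)
    frac = idx - base
    # Linear interpolation between neighbouring samples
    return (1 - frac) * x[base] + frac * x[base + 1]

x = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # 440 Hz, 1 s
y = varispeed(x, 0.5)   # one octave down, twice as long
```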
Elsewhere transformation removes all trace of source-cause associations, and
remote surrogacy is achieved; an audible example of this is between 7:37 - 8:00,
where unstable, noise-based and behaviourally active granulations suggest
15
‘First-order surrogacy projects the primal level into sound, and is concerned with sonic object use
in work and play prior to any ‘instrumentalisation’ or incorporation into a musical activity or
structure.’ Ibid., 112.
16
‘Second-order surrogacy is traditional instrumental gesture, a stage removed from the first order,
where recognisable performance skill has been used to develop an extensive registral articulatory
play.’ Ibid.
17
‘Third-order surrogacy is where a gesture is inferred or imagined in the music. The nature of the
spectromorphology makes us unsure about the reality of either the source or the cause, or both.’
Ibid.
18
‘Remote surrogacy is concerned with gestural vestiges.’ Ibid.
!21
neither implied nor ascertainable sources or causes. Spectromorphological
relations and contrasts between the untreated and transformed materials provided
the basis for compositional exploration. 19
1.4 The function of space
Spatialisation of materials in 8-channels augments the potential for alternate
sources and causes to be suggested. The immersive distribution of third-order
surrogates – treated with filters (reducing high frequency content) and amplitude
envelopes – allows for the illusion of proximate and distal20 sound events occurring
in circumspace.21 The 8-channel image may produce an immersive and
transformed storm-like surrounding weather pattern experience for the listener
(example at 7:54 - 8:32).
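The proximate/distal illusion described above rests on two familiar distance cues: amplitude attenuation and high-frequency loss. A crude illustrative sketch follows; the one-pole filter, cutoff law and constants are invented for demonstration and are not the filter settings actually used in the work:

```python
# Crude distal cue: attenuate with distance and darken the spectrum
# with a one-pole lowpass whose cutoff falls as the source recedes.
import numpy as np

def distance_cue(x, distance, sr=48000):
    cutoff = max(200.0, 8000.0 / distance)      # Hz (arbitrary law)
    a = np.exp(-2 * np.pi * cutoff / sr)        # one-pole coefficient
    y = np.empty_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        prev = (1 - a) * s + a * prev           # lowpass
        y[i] = prev
    return y / distance                          # inverse-distance gain

noise = np.random.default_rng(2).standard_normal(4800)
near = distance_cue(noise, 1.0)
far = distance_cue(noise, 8.0)
```

Distributing such "near" and "far" versions across an 8-channel array is one way the depth-of-image effects described here can arise.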
1.5 Structural analysis
The work is formed of three sections, each focusing on one of the three primary
sound sources:
Section 1 (0:00 - 4:56): clay and ceramic tiles
Section 1 establishes an increasingly remote soundworld predominantly defined
by third-order surrogates while briefly introducing sound materials (violins and
saws) to be further developed in Sections 2 and 3.
The work opens with a slowly evolving noise-based graduated continuant texture,
punctuated by gentle attack-decay gestures (resonant tile strikes audible at 1:27 -
19
‘[…] in the time domain one might use granular methods to synthesise new timbres with spectral
signatures that are not perceptually related to the original sound structures. Those are processes
which generate new sound signals. On the other hand, the process of repeated audition itself
enables us to listen ‘into’ the sound ever more acutely, which can alter the perception of the
sound’s musical potential during the compositional process, as listening contexts evolve through
generation of new materials and the process of testing of these against each other.’ Young, ‘Sound
morphology’, 10.
20
‘I use the term ‘proximate’ to designate space nearest to the listener, and ‘distal’ for space
furthest from the listener. The relationship between proximate and distal space creates depth of
image.’ Smalley, ‘Space-form’, 36.
21
‘‘Circumspace’, which incorporates the Latin preposition for ‘around’ or ‘about’ as a prefix, seems
appropriate to represent the aesthetic notion of relations of position, movement and scale in this,
the most comprehensive type of perspectival space.’ Ibid., 51.
1:44). Amplitude modulation applied to the noise continuants produces reciprocal
internal texture motion,22 and varying the frequency of oscillations suggests
accelerating/decelerating notions of time (1:47 - 1:57). 23 Granular transformations
of tile strikes are introduced (1:59) leading to the first major contrast of materials at
2:13, where all sounds terminate to reveal layered, pitch-based violin loops. 24 A
dominating gesture concludes the violins at 2:33, where a brief section of subtle
tile transformations (re-pitched and reversed) retains clear spectromorphological
relationships with first-order tile sounds. The introduction of increasingly third-order
transformation types (2:59) leads into a behaviourally active section of noise-based granulations (from 3:10). At 3:32 resonant granulations extracted from tile
strikes are briefly introduced, creating a contrast of stable iterative pitched
material. At 3:48 remote noise gestures briefly force out all other sound materials,
leading at 4:00 to the climax of the section – the return/establishment of granular,
resonant and stable pitched iterations, gradually terminating while crossfaded with
a new granular texture (transformations of saw sounds), hinting at materials to be
established in Section 2 and concluding with a closed termination to silence (an
untreated, closing door).
Section 2 (4:56 - 8:42): saws and hacksaws
Section 2 explores contrasting timbres and behaviours to those featured in Section
1 and functions to develop the work’s temporal pacing and spectral content.
Tension initially builds via two opening crescendos, progressing to a slower,
sparse train-like sequence. As activity and spectral density gradually increase,
materials become progressively remote, leading to a dense return passage of saw
transformations, in turn leading to spectral clearing.
22
‘In reciprocal motion, movement in one direction is balanced by a return movement. Oscillation
and undulation, which are contour variations, could apply to internal, textural motions, as well as
being descriptions of external contour.’ Smalley, ‘Spectromorphology’, 116.
23
Amplitude modulation was achieved using the Auto Pan Ableton Live plug-in. For more
information visit Ableton. [online] Available at <https://www.ableton.com/en/>, accessed 27 July
2016.
24
Granular transformations featured were created using the BEASTtools Granul8 module, part of
the BEASTtools modular multi-channel transformation environment (for Max). [online] Available at
<http://www.birmingham.ac.uk/facilities/ea-studios/research/beasttools.aspx>, accessed 5 July
2016.
Section 2 opens with two successive crescendo variations created from
transformations of saws on wood (4:57 - 5:26 and 5:28 - 6:05). Stable granulations
(applied to lower-pitched saw material) emerge and maintain a pulse. These are
layered with unstable continuant granulations (randomised grain lengths of
hacksaw recordings, higher in pitch). Low-pass filtering is applied to some of the
noise-based hacksaw material, gradually revealing higher spectral content in
combination with applied amplitude envelopes, resulting in increasing spectral
density. Considering both crescendos as individual spectromorphological events,
the overall result is two texture-carried variations evocative of hurricane-like
weather patterns. 25 A final low-pitched saw gesture (5:58) processed with reverb
creates a graduated termination transitioning into the next stage of development.
Several layers of saw recordings are introduced (slowed/pitched down and filtered)
resulting in material evocative of trains passing in both proximate and distal space.
From 6:25 a new violin sequence emerges and departs. From 6:56 filtered and re-pitched variations of a repeating saw transformation sequence lead into a gesture
suggestive of a wave crashing against rocks (7:15 - 7:21). The saw sequence is
then given prominence through less filtering and proximate spatial positioning,
complemented by emerging and underlying filtered noise continuants. All materials
suddenly terminate at 7:36 revealing new remote surrogate granulations. From
7:49 - 8:38 the saw transformation sequence returns alongside dominant granular
noise continuants; this passage may evoke source-causes of (electrical/
transformed) storm-like weather patterns through a gradually increasing spectral
density and activity. High-pass filtering applied to background noise materials
(example at 8:08 - 8:16) produces a sense of ascension/termination, as sounds
fade out clearing the spectral image to reveal violin material, opening Section 3.
Section 3 (08:42 - 12:17): violins and bin gestures
Section 3 functions as a gradual crescendo, contrasting the previous sections by
establishing pitched violin materials as dominant, following brief appearances in
the preceding sections.
25
‘Where one or the other dominates in a work or part of a work, we can refer to the context as
gesture-carried or texture-carried.’ Smalley, ‘Spectromorphology’, 114.
Section 3 comprises pitch-based violin transformations created primarily from
repetitions of two discrete edits (re-pitched and spatialised in circumspace). The
most prominent of these two loop-based edits is briefly introduced in Section 2.
These are layered with a third granular violin edit (creating a stable pitched drone,
audible from 8:42), and a fourth, lower-pitched transformation (a graduated onset-graduated termination, audible example at 9:19 - 9:24). Through applied amplitude
envelopes, a slow crescendo of amassed violin loops is spatialised, emerging from
distal space in the front loudspeaker pair of the 8-channel image (from 8:42),
eventually defining proximate circumspace. Dynamic content is enhanced through
the introduction of gestural bin lid transformations (processed with multiple delays
through amplitude envelopes) resulting in iterative drum roll-like graduated onset-closed termination gestures. The reintroduction of train-like saw transformations as
featured in Section 2 provides further structural coherence (audible from 9:48). Bin
lid iterations eventually increase in speed to propel the pacing of events forward to
the final climax, where spectral density peaks and a repetitive pitch-based
spectromorphology (evocative of a railroad crossing bell, audible from 10:56 11:14) arrives and departs, functioning as an agent to clear spectral density. The
work concludes with a brief spectrally sparse section (11:20 - 12:17) focusing on
variations of bin transformations combined with a filtered drone that emerges (at
11:17) and gradually terminates.
Conclusion
Frictions/Storms explores the potential of transcontextual sound materials in
composition; through electroacoustic processes (sound transformation), sound
objects of a relatively mundane first-order nature are reshaped, and via concepts
of source bonding and gestural surrogacy, reveal the potential for fantastical
abstractions of themselves, able to evoke unreal and imagined extrinsic
associations.
CHAPTER 2. RISE: STRUCTURAL FUNCTIONS, EXPECTATION AND SPACE
IN THE STEREO IMAGE
‘Our acquired knowledge of the contexts of spectral change
provides an almost ‘natural’ reference-base not only for developing
the wider, more imaginative spectromorphological repertory into
the third-order surrogacy of electroacoustic music, but for
decoding patterns of expectation in musical form. We predict or try
to predict the expected tendencies of spectral change.
Electroacoustic music, even when deprived of known instrumental
spectromorphologies and tonal harmonic language, still relies on
culturally acquired expectation patterns.’26
2.1 Overview
Rise (12:10) marks a departure from my work concerned with notions of masking
and revealing recognisable sources, and highlights a growing interest in working
with predominantly abstract sound materials. As such, source bonding in Rise is
almost exclusively achieved via aurally perceptual relationships between
spectromorphologies. The title refers to the work’s exploration of rising and falling
movement in spectral space,27 and was in part influenced by Bernard Parmegiani’s
Géologie Sonore (1975) and Denis Smalley’s Pentes (1974) – both works focusing
primarily on abstract and/or synthesised sound materials and featuring
explorations of spectral space through glissando.
2.2 Transformation of source recordings
Source materials featured include recordings of gestural play with tea-towels
(mainly noise-based, attack-closed termination gestures), the closing of a sliding
wardrobe door (a graduated onset-closed termination), and the strike of a resonant
bowl-shaped sink (an attack-decay). All three sound types are featured in their
untransformed states; however, it can be strongly argued that actual source-causes are not possible to ascertain in the context of the composition; source-causes are neither alluded to in the work’s title, nor via any emphasis on extrinsic
26
Ibid., 113.
27
‘Put crudely, spectral space is concerned with space and spaciousness in the vertical dimension
– up, down, height, depth, along with infill and clearing.’ Smalley, ‘Space-form’, 45.
identities. Recordings produced for Rise were minimal, being no more than two or
three captures of each sound type. From a palette of approximately ten short edits,
multiple related third-order and remote surrogate spectromorphologies were
developed (both gestures and textures). Sections 1 to 3 of Rise each focus on
transformations derived from one of the sources:
Section 1 (0:00 - 2:48) was created primarily from transformations of a single
recording of a sliding wardrobe door closing, treated with two instances of the
GRM Delays plug-in, using multiple rapid delays (under 500ms).28 The overlapping
of delay lines produced stable pitch (similar in effect to comb filtering, audible
examples from 0:00). Further application of freeze tools to these gestures resulted
in a texture drone (introduced at 0:59) that is source bonded to the gestures
through both pitch and internal iterative content (an audible variation of the delay
processed gestures).29
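The pitch-forming effect of overlapping rapid delays can be sketched as a simple feedback comb filter, whose resonance sits at sample_rate / delay_samples. This is a minimal pure-Python illustration of the acoustic principle only, not the GRM Delays algorithm; all parameter values are assumed for the demonstration.

```python
import math

def comb_filter(x, delay_samples, feedback=0.85):
    """Feedback delay line: each copy of the signal re-enters the line
    delay_samples later, reinforcing a pitch at sr / delay_samples
    (the comb-filter-like effect of many overlapping rapid delays)."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + feedback * delayed
    return y

def dft_mag(signal, k):
    """Magnitude of DFT bin k (slow pure-Python version, demo only)."""
    N = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    return math.hypot(re, im)

# Impulse response of a 100-sample delay: at 44.1 kHz this reinforces
# a stable pitch of 44100 / 100 = 441 Hz.
impulse = [1.0] + [0.0] * 1999
response = comb_filter(impulse, delay_samples=100)
```

In the 2000-point DFT of this response, bin 20 corresponds to the 441 Hz resonance (and bin 40 to its second harmonic), while frequencies midway between the comb's harmonics are strongly attenuated; this is the "stable pitch" the overlapping delays impose.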
Section 2 (2:49 - 4:32) focuses on transformations produced from a selection of
recordings of tea-towels. Source materials were time-stretched using Ableton
Live’s Warp tool. One particular algorithm (the Beats warp mode) staggers the
warping process to result in granular iteration (repetition of micro-segments of
audio); this allowed the creation of varied spectromorphologically-linked gestures.
Both these and the wardrobe door gestures in Section 1 were further processed
with distortion and filtering to create spectrally dense variations (audible wardrobe
door variations at 0:46 - 0:51 and tea-towel variations combined with wardrobe
door transformations from 3:45 - 4:00).
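The staggered-playback behaviour described above, repetition of micro-segments producing granular iteration, can be approximated by a naive slice-and-repeat stretch. This is a sketch of the general principle, not Ableton's actual Beats warp algorithm; the function name and parameter values are illustrative.

```python
def granular_stretch(samples, grain=4, repeats=2):
    """Time-stretch by repeating each micro-segment (grain) of the
    source `repeats` times; the seams between repeated grains are
    what read aurally as granular iteration."""
    out = []
    for start in range(0, len(samples), grain):
        out.extend(samples[start:start + grain] * repeats)
    return out

# A doubled-length output whose grain boundaries produce a pulse:
stretched = granular_stretch([1, 2, 3, 4, 5, 6], grain=3, repeats=2)
# stretched == [1, 2, 3, 1, 2, 3, 4, 5, 6, 4, 5, 6]
```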
Section 3 (4:33 - 7:23) focuses on pitched attack-decay sink strikes. Pitch-shifted
variations of a single untreated strike were layered and aligned to create tonally
rich gestural events (example at 5:06). Further transformations were created by
applying delay lines to a reversed sink strike resulting in decelerating iterations
(audible at 4:34 - 4:56).
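The decelerating iterations described can be modelled as delay taps spaced at successively wider intervals with decaying gain. This is a rough analogue of the delay-line treatment, not the plug-in chain actually used; all parameter names and values are illustrative.

```python
def decelerating_iterations(strike, taps=6, first_gap=10, growth=1.6, decay=0.8):
    """Repeat a short attack at successively wider offsets: the widening
    gap slows the perceived pulse while the decaying gain lets each
    iteration recede, yielding decelerating iterations."""
    out = list(strike)
    pos, gap, gain = 0, first_gap, 1.0
    for _ in range(taps):
        pos += gap
        gain *= decay
        gap = int(gap * growth)          # widening gap = slowing pulse
        end = pos + len(strike)
        while len(out) < end:            # grow the output buffer
            out.append(0.0)
        for i, s in enumerate(strike):   # mix in the attenuated copy
            out[pos + i] += s * gain
    return out

# A unit 'strike' repeated at gaps of 2, 4 and 8 samples:
demo = decelerating_iterations([1.0], taps=3, first_gap=2, growth=2.0, decay=0.5)
```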
28
GRM Delays is part of the GRM Tools Classic bundle. [online] Available at <http://
www.inagrm.com/delays>, accessed 27 July 2016.
29
Freeze tools used included GRM Freeze (Classic bundle); Jean-Francois Charles’ Live spectral
processing patches. [online] Available at <https://cycling74.com/toolbox/live-spectral-processing-patches-for-expo-74-nyc-2011/#.V719SLUllE4>, accessed 24 August 2016.
Rise features several other sound types that were developed through extensive
transformation processes, and the resulting remote spectromorphologies have no
direct associations to recurring sound materials (an example being the sequence
of remote iterative noise variations moving from right to left in the stereo image at
7:07 - 7:22). Further examples include high-pitched drone continuants featured in
Section 2 (audible at 3:30 - 3:57) and the dense glissando texture dominating
Section 5 (audible from 10:00).
2.3 Space-forms in Rise
Figure 2: Space-forms in the stereo image.
Space-forms explored within the work were informed by the choice of spatial
format: perspectival space, 30 panoramic space 31 and spectral space were
appropriate for developing composed space32 in stereo. Combined filtering and
amplitude envelope editing of sounds may imply distal and proximate spatial
positions; proximate sound events are achieved by placing materials in prominent
positions in the mix, whereas distal events appear quieter, with reduced high
30
‘I define the ‘perspectival space’ of the acousmatic image as the relations of position, movement
and scale among spectromorphologies, viewed from the listener’s vantage point.’ Smalley, ‘Space-form’, 48.
31
Panoramic space: ‘The breadth of prospective space extending to the limits of the listener’s
peripheral view.’ Ibid., 55.
32
Composed space: ‘[…] the space as composed on to recorded media.’ Smalley,
‘Spectromorphology’, 122.
frequency content, as if perceived from a distal vantage point (an example of
perspectival layering of sounds is audible at 5:15 - 5:28). Furthermore, movement
in panoramic space (panning and/or processing with doppler effect plug-ins), may
result in vectorial space (example gesture at 7:41 - 7:42). 33 Vectorial space and
spectral space may be combined through panning and band-pass filtering, creating
the illusion of gravitational trajectories of sound ascending or descending from left
to right or vice versa (an example gesture at 3:24 - 3:27 appears to move from the
top left of the stereo image across to the bottom right). Through subtle amplitude
envelope processing and additional attenuation of high frequency content, it
becomes possible to create vectorial spatial trajectories occurring within the
combined frames of panoramic, spectral and perspectival space (an example
gestural passage audible at 5:41 - 5:48).
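The distal/proximate cues described here, reduced amplitude and attenuated high-frequency content, can be sketched as a pair of small functions. This is a minimal sketch of the general principle in pure Python, not the plug-in processing actually used; range values are assumed for illustration.

```python
import math

def distance_cues(distance):
    """Map a normalised distance (0 = proximate, 1 = distal) to the two
    cues described: lower amplitude and a lower low-pass cutoff.
    The specific ranges are illustrative, not taken from the thesis."""
    gain = 1.0 - 0.8 * distance
    cutoff_hz = 500.0 + 11500.0 * (1.0 - distance)   # 12 kHz down to 500 Hz
    return gain, cutoff_hz

def one_pole_lowpass(samples, cutoff_hz, sr=44100):
    """First-order low-pass filter: attenuates content above cutoff_hz,
    standing in for the filtering that pushes a sound into distal space."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)    # standard one-pole coefficient
    y, out = 0.0, []
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out
```

Sweeping `distance` from 0 to 1 while panning would combine the perspectival and panoramic frames in the way the vectorial trajectories above suggest.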
Figure 3: Sonogram displaying spectral rise/fall glissando material in Rise, Section 5.
Spectral space is explored via sound materials occupying (or transitioning
between) discrete spectral ranges: for example, at 0:58 - 2:11 where a dominant
textural continuant drone – defining a grounded root position – is complemented
by high-pitched continuant materials, seeming to occupy a higher/canopy
33
Vectorial space: ’The space traversed by the trajectory of a sound, whether beyond or around the
listener, or crossing through egocentric space.’ Smalley, ‘Space-form’, 56.
position.34 Spectral density, achieved via an amassing of sounds or processing
with distortion, may also imply proximate space, as materials appear to block out other
sound events or fill spectral regions. Equally, reduced spectral content and activity
results in spectral clearing (for example, the transition from the spectral density of
Section 2’s conclusion, into Section 3 at 4:33). Glissando material (Section 5, from
10:00) emphasises gravitational notions of spectral space through rising/falling
motion (see Figure 3).
2.4 Spectromorphological expectation
‘The ideas of onset (how something starts), continuant (how it
continues) and termination (how it ends) can be expanded into a
list of terms, some of them technical, some more metaphorical,
which can be used to interpret the function-significance of an
event or context. These functions can be applied at both higher
and lower levels of musical structure, referring, for example, to a
note, an object, a gesture, a texture, or a type of motion or growth
process, depending on our focus of attention.’35
Figure 4: Gesture units/spectromorphological variants.
34
‘Canopies and roots can be regarded as boundary markers which may have functions. For
example, textures can be hung from canopies and use them as goals or departure points, while we
already know that the drone can act as a root-reference.’ Smalley, ‘Spectromorphology’, 121.
35
Ibid., 115.
Rise explores musical expectation through structure and development; gesture
and texture types are established, reprised and reestablished throughout the work,
forming source bonded links between materials and musical sections, creating
familiarity for the listener as the work unfolds. Lower-level spectromorphological
expectation is addressed through variations on constructed sound units, 36 and
morphological stringing of events.37 Examples of sound unit variations are audible
in Section 3 (from 5:15 - 6:06); a sequence of gestures comprising several
component parts occur and the components are assigned structural functions
(onset, continuant and termination roles). Each returning function features
reshaped materials, resulting in successive variations on an established sound
unit structure. Example variations may include extended duration of the onset,
tonally re-pitched resonant components, or reshaping to result in a sound unit of
compressed duration. Figure 4 provides four visual analogies of possible sound
units/spectromorphological variations.
36
Manuella Blackburn identifies sound units: ‘I view sound unit construction within my work as a
fundamental compositional strategy built entirely on the premise that every sound event has a start,
a middle and an end. Construction possibilities are vast and particularly well suited for dealing with
shorter sounds that yield gestural shapes through this combination process.’ Manuella Blackburn,
‘The Visual Sound-Shapes of Spectromorphology: an illustrative guide to composition’, Organised
Sound, 2011, 16(1), Cambridge University Press, 5 - 13, 6.
37
‘[…] morphologies are not just isolated objects. They may be linked or merged in strings to
create hybrids.’ Smalley, ‘Spectro-morphology and Structuring Processes’, 71.
Figure 5: Rise sonogram and waveform representation to timeline.
2.5 Structure and function attribution
Rise is structured as follows (see Figure 5):
Section 1 (0:00 - 2:48): Introduces gestural materials that play a recurring role
throughout the work, and establishes a statement texture continuant 38 that also
functions as a disappearance termination. 39 (See Figure 5, Section 1, highlighted
in red.)
Section 2 (2:49 - 4:32): A gradual noise-based crescendo passage develops from
sparse beginnings to achieve dense spectral occupancy, climaxing to reveal
pitched sink-strike iterative transformations.
Section 3 (4:33 - 7:23): A spectrally sparse and tonal passage centred around a
sequence of gesture units (created from component materials) results in
morphological stringing. Sounds emerge from distal space to converge with
prominent tonal attack-decay gestures in proximate space, then retreat into distal
space. Tonal cadence provides suspension and resolve.
Section 4 (7:24 - 9:14): Establishes remote materials briefly featured in Section 1.
Sounds are organised to imply one event causes the onset or termination of
another (causality) through dominant and subordinate behaviour types (7:45 - 8:21).40 Eventually, stable iterative noise-based ‘tick-tock’ pulses are introduced
(from 8:32), functioning as an anacrusis onset leading into Section 5. (See Figure
5, Section 4, highlighted in purple.)
Section 5 (9:15 - 12:10): Reestablishes the statement continuant and variations
on associated gestural materials from Section 1 (see Figure 5, Section 5,
highlighted in red). The functional role of the statement continuant develops into a
38
Statement, disappearance, prolongation and anacrusis are taken from Smalley’s table of function
descriptors. See Smalley, ‘Spectromorphology’, 115.
39
‘Function attribution may be double or ambiguous. A context may have different, simultaneous
functions. This is particularly so when events are overlapped or motion is continuous. For example,
a contour which seems to resolve a motion could also form part of the anacrusis to a following peak
– in this case the termination function is also an onset function on the same level.’ Ibid.
40
‘Causality, where one event seems to cause the onset of a successor, or alter a concurrent event
in some way, is an important feature of acousmatic behaviour.’ Ibid., 118.
disappearance termination, and via morphological stringing, merges with the onset
of a new dense prolongation continuant texture, featuring internal glissando (rising/
falling) motion through spectral space.41 Eventually the anacrusis onset of ‘tick-tock’ pulses is reintroduced, growing in presence to occupy proximate space.
The implication of this material (as a previous anacrusis function) is to suggest this
to be a passage preceding further musical development. Expectation is then
denied, as the work is concluded with a surprise closed termination of all sound
events. (See Figure 5, Section 5, highlighted in purple.)
2.6 Alternate 10-channel stem version42
While the stereo version of Rise is the definitive version, an alternate 10-channel
version was also produced in order to explore further possibilities for live diffusion
of the work on large-scale multiple loudspeaker performance systems (see
Appendix_C/Audio/Rise_10_Channel_Version/Rise_10_Channel.aiff). Here two
channels contain the gestural materials and higher spectral content, and the
additional eight channels contain isolated textural continuant materials (four stereo
pairs of variations developed from the original stereo continuant passages from
Sections 1 and 5 of the work). The intention was to explore spatialisation of
gestural content independent of the more grounded textural material (fixing the
textural materials onto a ring of eight loudspeakers surrounding the audience).
This allows, for example, an extended exploration of spectral space in the concert
hall by placing higher-pitch materials in roof loudspeakers. Alternatively it may allow
for a more defined exploration of vectorial spatial movement to occur without
expanding/contracting or repositioning a fixed circumspatial texture-setting.43 After
41
‘Merged correspondences may also occur through the cross-fading of termination and onset, or
more rapidly as a consequence of a reversed onset-termination.’ Smalley, ‘Spectro-morphology
and Structuring Processes’, 71.
42
‘Stems constitute the submixes or – more generally speaking – discretely controllable elements
which mastering engineers use to create their final mixes. In a similar fashion, one can compose in
stems, separating out elements that need to be treated discretely in a final spatialisation, which in
itself may vary to a small or great extent from one performance to another.’ Scott Wilson and Jonty
Harrison, ‘Rethinking the BEAST: Recent developments in multichannel composition at Birmingham
ElectroAcoustic Sound Theatre’, Organised Sound, 2010, 15(3), Cambridge University Press, 239 - 250, 245.
43
‘This is an example of texture-setting – texture provides a basic framework within which
individual gestures act.’ Smalley, ‘Spectromorphology’, 114.
diffusing both versions on the MANTIS44 large-scale loudspeaker system, I
concluded that the stereo version yielded a more convincing spatiality in
performance; in separating gestural and textural content, the act of diffusion
considerably altered the balance of materials (in relation to one another), resulting
in a performance where spatial development was somewhat lost in favour of
seeking a balance closer to that rendered onto stereo fixed media. 45
Conclusion
Rise is given coherence through three main processes: use of contrasting gesture-carried/texture-carried and noise-centric/pitch-centric passages, the establishing
(and later reintroduction) of a statement continuant passage, and finally, the
forging of relationships between sound types through use of gesture units,
spectromorphological variation and considerations of musical expectation. The
assigning of structural functions aids musical development in order that abstract
sound materials may maintain source bonded relations, aurally guiding the listener
through the work’s development, before intentionally misleading the listener via a
final, unexpected closed termination.
44
MANTIS (Manchester Theatre In Sound) features a 48-loudspeaker sound diffusion
performance system, initially designed by Professor David Berezan, currently maintained by staff
and postgraduate students at the Novars Research Centre, University of Manchester.
45
Moore highlights related issues: ‘Increased timbral separation can be achieved by multichannel
tape where, should the composer require, separate sounds can be recorded to separate channels.
This kind of information is often difficult to perceive precisely because it breaks the listener's
perception of an integrated space. This compositional problem stems from the ill-defined notions of
the boundaries between monophony and polyphony that can arise from mixing.’ Adrian Moore,
‘Sound diffusion and performance: new methods – new music.’ Proceedings of the Music Without
Walls? Music Without Instruments? conference, De Montfort University, June 21 - 23, 2001. [online
article] Available at <http://www.dmu.ac.uk/documents/technology-documents/research/mtirc/
nowalls/mww-moorea.pdf>, accessed 28 July 2016.
CHAPTER 3. GLITCHES/TRAJECTORIES: BEHAVIOUR, MOTION AND
GROWTH PROCESSES, AND METHODOLOGIES FOR 8-CHANNEL SOUND
SPATIALISATION
‘The metaphor of behaviour is used to elaborate relationships
among the varied spectromorphologies acting within a musical
context. I believe that listeners can intuitively diagnose behavioural
relationships (or a lack of them) in electroacoustic music contexts
and that this diagnosis affects the listener’s interpretation of and
reactions to the music. In this respect, behaviour is archetypal.’46
3.1 Overview
Glitches/Trajectories (11:29) is, as the title suggests, themed on both sound
representative of defective audio (glitches) and the exploration of vectorial space
(trajectories) in the 8-channel format. The work is partly inspired by Bernard
Parmegiani’s Capture éphémère (1967), where imitative and reactionary sound
behaviours are developed through interactions between third-order and remote
surrogates;47 in Glitches/Trajectories, behavioural relationships between sounds,
and motion and growth processes direct the work’s structural development. 48
Developed glitch materials were inspired by a live recording of modular analogue
synthesist Keith Fullerton Whitman, entitled Occlusion (Rue De Bitche), (2012).
The performance features metallic-sounding attack-closed termination gestures
(possibly generated using FM synthesis), resulting in fragmented glitch-like
variations of spectromorphologies. Glitches/Trajectories, however, makes no use
of analogue synthesis; third-order and remote surrogates featured were
predominantly derived from transformed recorded materials.49
46
Smalley, ‘Spectromorphology’, 117.
47
It is safe to assume that the synthesised sounds featured in Capture éphémère would be
unknown sources and causes for many listeners, particularly at the time of the work's realisation in
1967.
48
‘Motion and growth have directional tendencies which lead us to expect possible outcomes, and
they are helpful guides in attributing structural functions.’ Smalley, ‘Spectromorphology’, 116.
49
The work features some subtractive synthesis. At this time I would not have associated sounds in
either Capture éphémère, or Occlusion (Rue De Bitche) specifically with analogue synthesis
techniques. The transformation of recorded sound materials was of primary interest in developing
my own sound-shaping techniques during this period.
3.2 Blurred sources/blurring the gesture–texture continuum
Sound materials featured in Glitches/Trajectories were developed through
extensive transformation and re-rendering of previously rendered transformations
– third-order surrogate sounds that had been intentionally labelled without
reference to sources and causes. With the exception of synthesised sounds
included in the work, it is impossible (for me as the composer) to determine
specific sources and causes from which featured materials were derived.
A series of continuant spectromorphologies with varied dynamic contours and
timbres were rendered in stereo. Some featured internal spectral development –
harmonic up/down shifts of pitch, similar to that achievable when controlling guitar
feedback (by holding a string to create resonance, producing a stable feedback
pitch and adjusting the angle of the guitar to the amplifier to find positions that
produce alternate feedback pitches – audible example at 5:48 - 6:12). Others were
spectrally dense drones (layered examples at 9:00 - 9:30). Some featured gritty-sounding and unstable noise with less spectral occupancy (thinner frequency
ranges – an edited/fragmented passage transformed from these continuant types
and layered with additional materials is audible at 0:09 - 0:15). From the creation
of these sequences two processes were identified in order to fragment and further
shape the materials prior to musical incorporation, as illustrated below.
Figure 6: Three visual analogies of audio processing.
Figure 6, Shape 1 is analogous to one of the aforementioned sequences; a
continuant spectromorphology featuring dynamic contouring (through, for example,
amplitude envelope processing and/or timbral morphing). Shape 2 visualises the
application of amplitude modulation to the Shape 1 sequence, resulting in a new
continuant spectromorphology with relatively stable fragmentations. Material of this
nature proves highly suitable for deployment in vectorial space; internal reciprocal
motion as illustrated in Shape 2 produces a sense of pulse (derived from regular
amplitude modulations) and helps the listener to perceive vectorial motion through
its grain-like iterations moving through a vectorial spatial setting. Additionally,
temporal expansion and compression can be suggested (also visualised in Shape
2) through applied acceleration/deceleration of amplitude oscillations (hence its
relative stability, audible example at 5:48 - 6:12). Shape 3 visualises a possible
spectromorphological outcome derived from the second key transformation
process used in developing materials; a transformation of the Shape 1 sequence
as achieved through simple subversion of MaxMSP playback tools. 50 Randomly
dragging a playback timeline marker back and forth when play mode is engaged
results in a sequence of unstable and fragmented glitch material similar to the
playback of a faulty compact disc (random jumps between points along a timeline,
audible example layered with additional materials at 10:20 - 10:30). 51
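The 'dragging the playhead' transformation of Shape 3 can be sketched as reading short slices from random positions along the source, emulating a faulty compact disc jumping between timeline points. This is a stand-in for the MaxMSP patch described, not its actual implementation; the function name and parameters are illustrative.

```python
import random

def glitch_playback(samples, n_slices=8, slice_len=4, seed=1):
    """Read short slices from random positions along the source,
    concatenating them into an unstable, fragmented glitch sequence
    (random jumps between points along a timeline)."""
    rng = random.Random(seed)    # seeded so the 'improvisation' is repeatable
    out = []
    for _ in range(n_slices):
        start = rng.randrange(0, max(1, len(samples) - slice_len))
        out.extend(samples[start:start + slice_len])
    return out
```

Abrupt joins between slices produce the clicks mentioned in footnote 51; smoothing them would amount to applying short crossfades at each slice boundary.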
As musical materials came into focus, the lines between gesture and texture
became increasingly blurred. The fragmentation of continuants into iterative
continuants may blur the gesture–texture continuum; continuant
spectromorphologies such as those visualised in Shape 2 may be perceived as a
sequence of aurally source bonded, attack-closed terminations, assuming gestural
roles (the lower the frequency rate of amplitude oscillations, the more gesture may
be implied). Equally, Shape 3 fragmentations may be considered gestural or –
through multiple layering – adopt the role of unstable texture. Amassing variations
50
MaxMSP is a visual coding language, originally developed by Miller Puckette, maintained by
software company Cycling ’74. [online] Available at <https://cycling74.com>, accessed 29 June
2016.
51
This form of processing regularly required additional treatment to remove undesirable clicks
created by the randomised playback of continuous audio. A few clicks, however, were intentionally
incorporated into the final piece, where sequences of materials featuring prominent clicks were
found to be of spectromorphological interest (as glitch material), and not deemed to be problematic
with regards to the technical execution of the work.
of these materials allowed further gesture–texture blurring through the creation of
multidirectional motions featuring vectorial movement. 52
3.3 Live electronics development
During the developmental stages of sound-shaping for Glitches/Trajectories I
found myself regularly performing semi-structured/semi-improvised live electronics
events using Ableton Live and MIDI controllers to manipulate and transform both
live-inputted and rendered audio. Spontaneous montaging/layering of sounds, and
real-time transformation in performance environments informed my fixed media
compositional methodology from this point. 53
Parameters of transformation tools including filters, waveshapers, amplitude
attenuators, delays and reverbs were assigned MIDI control. Some physical
controls were mapped to multiple audio plug-in parameters simultaneously (each within a set minimum/maximum range). For example, a single MIDI controller dial adjusted the amplitude level, low-pass filtering and reverb decay time applied to a single channel of audio, allowing one dial turn to suggest sound movement from proximate to distal spatial positions. As amplitude decreases, higher-frequency content is subtracted and reverb is added; conversely, as amplitude increases, full resolution of frequency content is restored and reverb is reduced, resulting in a drier sound, suggesting closer proximity.
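The coupled one-dial mapping described above can be sketched in code. The following Python function is illustrative only: the parameter ranges and the exponential cutoff sweep are assumptions for the purpose of demonstration, not the values used in the actual Ableton Live set.

```python
def proximity_map(dial):
    """Map one MIDI dial value (0.0 = distal, 1.0 = proximate) onto three
    plug-in parameters at once. All ranges are hypothetical, chosen only
    to illustrate the coupled proximity mapping described in the text."""
    dial = max(0.0, min(1.0, dial))
    gain = dial                                   # amplitude falls with distance
    # Sweep the low-pass cutoff exponentially (500 Hz .. 18 kHz) so the
    # spectral loss sounds roughly even across the dial's travel.
    cutoff_hz = 500.0 * (18000.0 / 500.0) ** dial
    # Reverb decay moves the opposite way: longer decay when distal.
    decay_s = 4.0 - dial * (4.0 - 0.5)
    return gain, cutoff_hz, decay_s
```

Because all three parameters are driven by the same control value, a single gesture of the hand produces the correlated spectral, amplitude and reverberant changes that together suggest distance.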
Adjustment of the frequency rate of (stereo) amplitude modulation was another process assigned MIDI control. This allowed manipulations expressive of accelerating/decelerating texture motion (as visualised in Figure 6, Shape 2), and at lower frequency rates resulted in an audible separation of the stereo signal into two spectromorphologically linked mono sources working in tandem (switching sequentially between outputs, first the left channel and then the right).
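The channel-separating behaviour at low modulation rates can be modelled as antiphase amplitude modulation of the two channels. This NumPy sketch is a simplification of the actual Live processing chain; the sinusoidal LFO shape and sample rate are assumptions.

```python
import numpy as np

def alternating_am(stereo, rate_hz, sr=44100):
    """Antiphase amplitude modulation of a stereo pair (shape (n, 2)).
    At low rates the channels audibly alternate as two linked mono
    sources (left, then right); at higher rates the result reads as
    accelerating/decelerating texture motion. Illustrative sketch only."""
    n = stereo.shape[0]
    t = np.arange(n) / sr
    lfo = 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))  # 0..1
    out = np.empty_like(stereo)
    out[:, 0] = stereo[:, 0] * lfo          # left channel gain
    out[:, 1] = stereo[:, 1] * (1.0 - lfo)  # right channel, in antiphase
    return out
```

Mapping `rate_hz` to a MIDI dial reproduces the behaviour described: sub-audio rates (below roughly 1 Hz) yield audible left/right alternation, while faster rates fuse into modulated texture.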
52. ‘Bi/multidirectional motions create expectations, and most have a sense of directed motion. They can be regarded as having both gestural and textural tendencies, and could be large structures in themselves.’ Smalley, ‘Spectromorphology’, 116.
53. Performances were semi-structured in the sense that some combinations of materials to be explored (and approximate structural points at which to introduce materials) had been decided in advance. I also predetermined certain transformation processes that would be explored in relation to specific sound materials.
Live Set 1 is a capture of a live performance from this period (see USB-drive,
Appendix_C/Audio/Live_Electronics_Sets/Live_Set_1.wav). The performance
opens with an improvised passage of materials that would later form part of
Section 2 of Glitches/Trajectories (Live Set 1, 0:00 - 1:47; see 3.5 Behaviour, and motion and growth processes for a structural analysis of the composition).
Materials present from 9:57 were later developed for the transition section of
Glitches/Trajectories. From 10:42 low frequency drone materials emerge, similar to
those developed for Section 2 of Glitches/Trajectories. 11:42 - 12:44 features a
sequence of glitch-like live transformations – this material was further developed
for inclusion in Section 2 of Glitches/Trajectories. From 12:45 to the conclusion of
the performance, structural development (an extended crescendo) is also similar
to that of Section 2 in the final fixed media composition.
Improvised live performance requires the performer to persist with, and respond to, spontaneous musical developments as they occur in real time – events that might be dismissed as fruitless and abandoned sooner had they arisen in the studio. This opens up the possibility of improvised and aleatoric outcomes that differ from
those developed in non-performance environments. In performance I found my
listening drawn to the immediacy of musical interactions as they unfolded through
a combination of spontaneous (and semi-planned) structural shifts, and aleatoric
triggering/processing of audio. Behavioural relationships between sound materials,
potential growth processes and structural possibilities identified through
performance were further explored in the studio and remapped to 8-channels,
seeking to achieve dramatic vectorial spatial development in circumspace.
3.4 Vectorial space in the 8-channel image
Figure 7 illustrates possible left to right vectorial movement in proximate (stereo)
panoramic space.54 In seeking a more immersive deployment of vectorial space,
several stereo to multi-channel remapping techniques were employed.
Figure 7 (above left): Proximate panoramic vectorial movement.
Figure 8 (above right): Vectorial space passing through egocentric space.
Figures 9 & 10: Vectorial motions.
54. As discussed in Chapter 2.3, perspectival space and spectral space provide an enhanced framework for vectorial spatial development to occur within the stereo image. However, the following examples presume the initial stereo vectorial movements to be occurring in proximate panoramic space.
Figure 8 illustrates a possible reassignment, where the initial separation of 45°
between channels (from a centrally located listening position) is widened to 180°.
Loudspeaker 1 retains the left channel output and loudspeaker 8 is assigned the
right channel output. Vectorial movements originally in proximate panoramic space
are now positioned in circumspace and encroach on egocentric space. 55
If clockwise and counterclockwise rotational motion is added, vectorial movement
is no longer in a straight line and passes through egocentric space, defining
trajectories within circumspace (within the 8-channel image); Figures 9 and 10
show two possible distributions (indicated by the dotted arrows), initially separated
by 180° (as in Figure 8, highlighted in red), rotating clockwise as left to right
channel vectorial movement occurs. As rotation progresses, the 180° separation is
gradually reduced to 135° separation by the arrival point (outlined in blue) at
loudspeakers 5 (right channel, initially positioned in loudspeaker 8) and 6 (left
channel, initially in loudspeaker 1). Variations of vectorial motions as shown in
Figures 9 and 10 are determined by both the speed of left to right vectorial
movement as rendered in the original stereo file, and the speed of the clockwise/
counterclockwise rotation.
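The remapping described above can be sketched as code: each channel of the stereo pair is equal-power panned to an angle on the 8-speaker ring, and both angles are rotated over time. This Python/NumPy sketch only illustrates the geometry; the actual remappings were produced with Orbit 2D and BEASTtools, and the speaker numbering, 45° spacing and pairwise cosine/sine panning law here are assumptions.

```python
import numpy as np

SPEAKER_SPACING = 45.0  # ring of 8 loudspeakers; loudspeaker 1 at 0 degrees

def ring_gains(angle_deg):
    """Equal-power pan a mono source to an angle on the 8-speaker ring.
    Returns 8 gains; only the pair of speakers flanking the angle is
    non-zero, so total power stays constant during rotation."""
    a = angle_deg % 360.0
    lo = int(a // SPEAKER_SPACING) % 8        # speaker at or below the angle
    hi = (lo + 1) % 8                          # next speaker clockwise
    frac = (a - lo * SPEAKER_SPACING) / SPEAKER_SPACING
    gains = np.zeros(8)
    gains[lo] = np.cos(frac * np.pi / 2.0)
    gains[hi] = np.sin(frac * np.pi / 2.0)
    return gains

def rotate_stereo(left_angle, rotation_deg, separation_deg=180.0):
    """Place the left channel at left_angle plus rotation_deg of clockwise
    rotation, and the right channel separation_deg further round the ring.
    Narrowing separation_deg over time (e.g. 180 to 135 degrees) reproduces
    the converging trajectories of Figures 9 and 10."""
    left = ring_gains(left_angle + rotation_deg)
    right = ring_gains(left_angle + rotation_deg + separation_deg)
    return left, right
```

Driving `rotation_deg` from a ramp while the source stereo file carries its own left-to-right vectorial movement yields the compound trajectories through circumspace described above.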
Figure 11: Perspectival trajectories in circumspace.
55. Egocentric space: ‘The personal space (within arm’s reach) surrounding the listener.’ Smalley, ‘Space-form’, 55.
Figure 11 shows two possible trajectories of perspectival movement. Here left to
right materials function independently (as two discrete mono spectromorphologies
in tandem, separated by low frequency amplitude modulation applied to the stereo
file). Clockwise rotation is combined with a reduction of amplitude and high-frequency content (achieving the illusion of circumspatial motions withdrawing into distal space, beyond the ring of loudspeakers, indicated by the dotted arrows).
Vectorial movement concludes with both channels returning to a proximate
circumspatial position through a return to full amplitude and full spectral resolution.
Both spectromorphologies have shifted 135° clockwise while maintaining 180°
separation. 56
3.5 Behaviour, and motion and growth processes
Glitches/Trajectories is structured in two halves, with a brief transition section separating the two. The following analysis identifies behavioural activity featured
in the work as related to structural development.
Section 1 (0:00 - 3:27)
The piece opens with low-level noise activity; two identical, brief, unstable continuants are introduced, terminating to silence, separated by 180° panning in the 8-channel image, establishing fragmented, distorted noise moving
through proximate circumspace. Activity starts again with the introduction of
multiple unstable noise-based continuant spectromorphologies, establishing a
discontinuous multidirectional texture motion; 57 vectorial movement passing
through egocentric space is also first introduced here at 0:07 - 0:08.
Spectromorphological coherence is achieved through the use of related terminations; a closed termination (comprising two low-pitched thuds) is used at the first silence (0:01) and again leading into the second silence (0:16).
56. Spatialisation tools applied in developing these transformations include Orbit 2D (for Max). [online] Available at <http://www.peterbatchelor.com/software>, accessed 29 July 2016, and the BEASTtools modular multi-channel transformation environment.
57. ‘Texture motion may vary in internal consistency. Continuous motion is sustained while discontinuous motion may be more or less fragmented.’ Smalley, ‘Spectromorphology’, 117.
From 0:21 variations on these established spectromorphologies return, introducing
further fragmented noise continuants and prominent noise gestures featuring
reciprocal oscillating internal motion (0:26 - 0:28). Between 0:36 - 0:41 these
gesture types highlight examples of descending non-rooted motion.58 Materials
briefly dissipate (0:42 - 0:45) maintaining low-level activity, to then return to full
prominence (0:46 - 1:02), establishing a distorted, continuous and erratic texture
motion terminating to silence. 59 Overall, the first minute of the piece functions to
introduce a distorted noise-based erratic soundworld where activity is punctuated
by moments of silence, creating anticipation for the work’s next phase.
From 1:04 a new set of noise-based spectromorphologies (functioning as equal
parts gesture and texture) are introduced, mimicking the arrival/departure
behaviours of the previous sound materials; a series of four multi-channel sound
units (comprising component parts), each leading to a passage of silence. The
departure of the first sound unit is graduated (1:06 - 1:07), and the following three
variations use closed terminations (1:12, 1:15 and 1:20-1:21), followed by an
extended silence. Mimicking behaviours present in the first minute of the piece,
silence is followed by a continuous erratic texture motion of materials (1:26
onwards), where complex noise spectromorphologies are punctuated with three
graduated onset-closed terminations, located in the upper spectral regions (1:42 - 1:45, 1:46 - 1:50 and 1:53 - 1:56). The third of these – another descending motion
– arrives at a grounded root position in spectral space, establishing the next stage
of activity. 60
1:56 - 3:23 is a passage of flocking, continuous and granular texture motion in
circumspace.61 Materials appear spectrally grounded owing to the inclusion of
lower frequency content, in comparison to the descending spectral motions of the
58. ‘Thus analogies with flight, drift and floating can be common. Motion towards a root could be implied in a spectral descent towards termination, but a root may not be achieved if the motion fades ‘in the air’.’ Ibid.
59. ‘Both continuity and discontinuity can move in a more or less periodic–aperiodic/erratic manner, with internal fluctuations in tempi. Continuous/discontinuous texture motion may need to be considered as a totality, or may follow grouping patterns if contours, fluctuations or discontinuities are subject to repetitions, cycles or pauses which imply higher-level groupings.’ Ibid.
60. ‘Motion rootedness. Some are more likely to be ‘earthbound’ (push, drag) while others are not rooted to a solid plane.’ Ibid., 116 - 117.
61. ‘Flocking describes the loose but collective motion of micro- or small object elements whose activity and changes in density need to be considered as a whole, as if moving in a flock.’ Ibid., 117.
immediately preceding material. Gestural closed terminations (example at 2:17) imply causality by terminating the activity of surrounding materials, leading to spectral
clearing and revealing higher spectral content. Repeated use of closed
terminations creates spectromorphological associations (source bonding) and
musical expectation (example passage between 2:17 - 2:32). Variations on
materials featured in the first minute of the work are reintroduced (2:27 - 2:42:
unstable, fragmented noise continuants and internally oscillating gestures), leading
to an increase in upper-region spectral density (from 2:41). Additional new
materials are embedded within the dense texture, including noise granulations and
unstable pitched gestures, highlighting movement in spectral space (2:43 - 3:23).
The passage concludes with a gradual dissipation of materials, crossfaded with a
brief passage of new, unstable pitched material featuring amplitude oscillation
(3:20 - 3:30), leading into the transition section.
In conclusion, Section 1 combines loose motion-coordination behaviours with pressured motion passages, through flocking behaviours, the amassing of spectral density, and the use of causal terminations leading to spectral clearing and/or the further emergence of sound materials.62
Transition section (3:27 - 4:27)
This brief segue focuses on new sound materials exclusive to this section; sounds
featured are noise-based continuants with random internal (stepped) pitch content,
comparable to sample-and-hold processing. Behavioural connections to passages
featured in Section 1 are established; materials are introduced gradually leading to
a closed termination and to silence (3:27 - 3:38). Continuous flocking texture
motion with internal iteration is then established (3:39 - 4:05). From 3:58 unstable
pitched oscillations – as introduced/crossfaded at the end of Section 1 – re-emerge, this time with the addition of a low-pitched stable drone, functioning as an
emergence onset/disappearance termination. At 4:05 this material causes the
termination of other sound behaviours, leading to a moment of spectral clearing.
Trajectorial materials move through circumspace and egocentric space, and
suggest more perspectival motions than the previous behaviourally active
62. ‘The vertical dimension is concerned with motion coordination (concurrence or simultaneity), while the horizontal dimension is concerned with motion passage (passing between successive contexts).’ Ibid., 118.
passages. Proximate to distal spatial movement occurs (from 4:08 onwards) and
iterations accelerate and decelerate, as several streams of continuants overlap in
perspectival circumspace, hinting towards divergence and convergence patterns
of motion and growth to be developed in Section 2. A second spectrally-rooted
drone (again, an emergence onset/disappearance termination, at 4:10 - 4:28),
arrives, peaks (4:21), and decays almost to silence as Section 2 begins.
Section 2 sound types and behaviours
Section 2 features three primary sound types, all of which retain their
spectromorphological identities throughout its development. It is appropriate to
identify each before considering their interactions:
Oscillations: Pitched continuants with internal iteration (amplitude oscillation).
Iterations are stable with accelerating/decelerating tendencies, designed to
emphasise reciprocal texture motion. Pitch is unstable and has tendencies
towards harmonic-like spectral shifts (similar to guitar feedback), occupying the
upper spectral regions. These sounds function as both gestural (discrete
sequential events arriving and departing in perspectival space) and textural
(continuant in nature and once amassed, producing circumspatial texture).
Oscillation sounds define vectorial space within circumspace, perspectival space
and egocentric space. Audible example: 4:32 - 4:55.
Glitches: Unstable glitch noise materials comparable to remote transformations
being played back through a faulty compact disc player. These materials also
evade specific gesture or texture classification due to their fragmented and
unstable continuant nature. They mainly occupy a presence in proximate
circumspace, occasionally retreating into distal regions. Audible at: 5:37 - 5:41.
Drones: Low-pitched drones. These occupy spectrally grounded root positions and
gradually arrive and depart to fill/clear lower spectral regions. Drones range from
shorter (graduated onset-graduated termination) spectromorphologies (example:
6:42 - 6:46) to longer graduated continuants,63 with some featuring turbulent and
63. ‘The onset starts gradually as if faded in, and the note terminates gradually as if faded out. In between, the note is sustained for a time.’ Ibid., 113.
iterative internal texture motion (unpredictable shifts of internal oscillations,
example at 9:23 - 9:34). Due to their strong spectral occupancy, drones are the
most dominant of the three spectromorphologies featured.
Drones are behaviourally coexistent alongside the oscillations and both sound
types function together to develop spectral space (adding and subtracting spectral
density). Glitch materials function as argumentative, working against the loose
motion coordination established by the oscillations and drones.
Section 2 (4:28 - 11:28)
A dense and digitally-overloaded noise gesture announces the arrival of Section 2,
decaying to reveal oscillations and establishing sparse activity; acceleration and
deceleration behaviours aid the propulsion of sounds through space. As
materials retreat into distal circumspace the first glitch materials are revealed at
5:01 in proximate circumspace. 5:12 - 5:38 employs an increasing dominance of
layered oscillations working in loose motion coordination. Glitches gradually return
and create a pressured motion passage at 5:38, immediately terminating the
oscillations.
From 5:41 the oscillations return to then recede into distal space. Oscillations
gradually increase in proximate presence and activity, arriving and departing in
perspectival circumspace and passing through egocentric space while maintaining
loose motion coordination. At approximately 6:14 drone materials begin to emerge
and from here to 7:12 spectral density and activity are developed. The passage
peaks at 7:08 and activity begins to subside – oscillations decelerate and spectral
density clears. 7:12 - 8:18 is a subtle extension of the preceding passage, focusing
on established behaviour patterns applied to variations of oscillation and drone
spectromorphologies. Motion is again achieved through the amassing/clearing of
spectral density and via dominant arrival/departures of drone materials.
From 8:18 layers of low drones are gradually introduced, featuring turbulent
internal oscillating behaviour. Between 8:48 - 9:01 we hear the reintroduction of
graduated onset-closed terminations, descending from upper spectral space to
lower positions, propelling motion and growth forward (as established in Section 1,
examples at 1:46 - 1:50). As in Section 1 where these precede a structural
development, here they function as onsets towards the spectrally dense and
behaviourally active climax of the work.
Multidirectional growth is achieved via a convergence 64 of materials through
exogeny.65 A dense drone-led passage is established where drones amass,
resulting in turbulent texture motion and agglomeration, featuring internal
oscillations (embedded within drone material). 66 At 9:41 drone materials begin to
retreat into distal space, resulting in spectral clearing, revealing at 10:06 glitch
materials in proximate circumspace. A passage of behaviourally agitated and
unstable oscillations and glitches follows, exploiting vectorial movement in
circumspace and passing through egocentric space as the two sound types
appear to battle for dominance. In this final passage the spectromorphological
relationship between the oscillation and glitch materials becomes audibly clearer;
through inclusion of glitches of longer durations the listener may identify that
glitches are, in part, developed fragmentations/transformations of oscillation sound
types. The two sound types featured (oscillations and glitches) begin to merge,
losing discrete qualities and resulting in a sequence of unstable glitch oscillation
spectromorphologies in which all sound events vie for prominence. From
11:10 intensity of activity is marginally reduced, and the work concludes with a final
passage of fragmented audio leading to a closed termination, with remnants of
reverb decay falling to silence.
64. ‘Divergence and convergence are strongly directional and could be gestures or texture growths, or a simultaneous linear descent/ascent.’ Ibid., 116.
65. ‘Exogeny (growth by adding to the exterior) could be allied to dilation and agglomeration, while endogeny (growing from inside) implies some kind of frame which becomes filled, or texture which becomes thickened.’ Ibid.
66. ‘Agglomeration (accumulating into a mass) and dissipation (dispersing or disintegrating) are textural processes.’ Ibid.
Conclusion
Glitches/Trajectories employs behaviourally imitative, interactive and reactive
sound types within a structure that unfolds through spectromorphological
mimicking of preceding passages, leading eventually to the development of an
extended passage of convergent and divergent motion and growth. The
identification and shaping of suitable materials, the assigning of behavioural roles
to those materials and their deployment within space, as outlined, allows for a
dramatic multi-channel musical experience to be achieved.
CHAPTER 4. TRANSMISSIONS/INTERCEPTS: STRUCTURAL COHERENCE IN
LONG-FORM AND METHODOLOGY FOR CONCEPTUAL WORK
‘Here we reach remote surrogacy. But the links with gesture need
not be entirely lost. The gesture-field operates in the psychological
domain, and in remote surrogacy the indicative link can be forged
through the energy-motion trajectory alone, without reference to
real or surmised physical gesture or an identifiable source. The
listener is thus called upon to exercise and enjoy maximum
gestural imagination.’67
4.1 Concept
Transmissions/Intercepts (24:32) is a large-scale 5-channel work themed on the
mysterious undisclosed soundworld of government shortwave radio broadcasts
known as number stations. 68 These broadcasts may be intercepted by anyone in
possession of a shortwave radio and generally take the form of a brief tune-in tone
or melody, followed by several minutes of Morse code, or a voice relaying a
sequence of numbers, concluding with a signifying ‘end’ or ‘out’ message.
There is an eerie lifeless quality to the broadcasts; the voice relays are clearly
technologically and/or mechanically automated, and it is in the merging of
utterance space,69 mechanised space70 and mediatic space71 that a basis for sonic
exploration is found. The piece therefore focuses on the source-bonded qualities
67. Denis Smalley, ‘The Listening Imagination: Listening in the Electroacoustic Era’, Contemporary Music Review, 1996, 13(2), Routledge, 77 - 107, 85.
68. ‘Number stations are shortwave transmissions from foreign intelligence agencies to spies in the field of foreign countries.’ Priyom.org, 2010. [online] Available at <http://priyom.org/numberstations>, accessed 21 March 2016.
69. A type of enacted space: ‘utterance spaces, which are articulated by vocal sound,’ Smalley, ‘Space-form’, 38.
70. ‘Although all these are human creations, and although they may sometimes be triggered or controlled by human agency, they can emit sound independently of us, thereby, in part at least, producing their own space. We can call these mechanised spaces, and they can be nested in broader enacted spaces.’ Ibid., 39.
71. ‘This is mediatic space, which comprises an amalgam of spaces associated with communications and mass media, as represented in sound by radio and the telephone, and sonic aspects of film and television.’ Ibid.
of the voice in conjunction with remote and synthesised sound materials, in an attempt to produce a work rich in electroacoustic musical language.72
The theme for Transmissions/Intercepts stems from the discovery several years
ago of The Conet Project (1997), a 4xCD collection of number station recordings. 73
Additionally, Andrew Lewis’ audio-visual work Lexicon (2012) was a particular inspiration, specifically his strikingly transparent handling of voice transformations.
4.2 Extrinsic associations/intrinsic spectromorphologies
The shortwave radio soundworld suggested a variety of spectromorphologies that
provided a basis for the development of sound materials. 74 The notion of constant
streams of broadcast sound suggested textural continuant materials. Radio
frequencies and tuning suggested pitched content (both relative and intervallic). 75
Radio interference suggested noise, spectral reduction, spectral density and low-fidelity distorted sound. Morse code suggested gestural iteration, stable pitch and
internal texture detailing (the amassing of Morse code iterations to create texture).
Additionally the variety of sound transformations achievable through exploration of
the radio spectrum – when tuning between random frequencies – seemed to allow
for all manner of generated (synthesised or transformed recorded sound) third-order surrogate and remote surrogate sound materials to be considered for
72. This analysis focuses primarily on the electroacoustic sound detailing within the work (over a detailed transcription of the work’s tonal content). Tonality within the piece was developed aurally, and its development over time does not follow a set metre. It can be argued that transcription is of less value than, for example, a discussion of spectral development, or of sound organisation and behaviours featured in the work.
73. Irdial Discs have made The Conet Project complete 4xCD recordings available as a free download from Archive.org. [online] Available at <https://archive.org/details/ird059>, accessed 25 July 2016.
74. ‘Music is a cultural construct, and an extrinsic foundation in culture is necessary so that the intrinsic can have meaning. The intrinsic and extrinsic are interactive.’ Smalley, ‘Spectromorphology’, 110.
75. ‘In intervallic pitch we can hear pitch-intervals, and therefore their relationship to cultural, tonal usage will become important. In relative pitch contexts we hear with much less precision the distance between pitches and can no longer hear exact pitches or intervals in spectral space.’ Ibid., 119.
inclusion. Notions of masking and revealing the voice were also suggested (as
occurs when broadcasts are intercepted with little regard for accurate tuning-in). 76
4.3 Utterance and voice transformation
The voice, as the recorded and automated relayer of number sequences, was
explored in both natural and transformed states, ranging from recordings edited
and sequenced with minimal treatment (suggesting a listening experience
occurring outside of broadcast space within a real-world setting), to heavily
transformed repetitions of number sequences (iteration resulting in mechanised
utterance space). Different languages provided a further avenue of metaphorical
exploration (global broadcast space). Opting to recreate the number station
soundworld in the studio, I turned to NOVARS postgraduate students to provide
me with voice recordings in varied languages.77
Primarily, the voice was considered a source-bonded sound object for musical
exploration; the words spoken (the phonetic alphabet), and numbers relayed in
different languages, have no inherent meaning beyond that of sounding words, or
words associated with broadcast. As number station broadcasts make no sense to
anyone other than (presumably) the broadcasting agent and the intended
recipient, it seemed appropriate to focus on musical coherence as opposed to
deeper levels of conceptual investigation, or other processes that may inform the
electroacoustic composer when exploring a conceptual theme. 78 It is then, through
montage (repetition) and sound transformation, that the voice becomes a signifier
for combined mediatic and mechanised space.
From the variety of voice transformation processes explored, granulation proved
effective in highlighting internal timbral properties of words spoken (phonemes),
and allowed for the exploitation of internal pitch content. The application of a
76. ‘The human voice, however, can be recognized even when its specific spectral characteristics have been utterly changed and it is projected through a noisy or independently articulated channel; it is also notoriously difficult to imitate electronically.’ Wishart, ‘Sound Symbols and Landscapes’, 50.
77. With grateful thanks, Transmissions/Intercepts features the voices of NOVARS postgraduate students Haruka Hirayama, Constantin Popp, Rosalia Soria Luz, Ignacio Pecino, and composer Daniel Barreiro.
78. Referring here to processes such as data collection/analysis and sonification.
(reverse sawtooth ramp) control signal to modulate the grain size, while also adjusting the playback position of the file being transformed, resulted in a granular time-stretch effect incorporating a high-to-low glissando pitch sweep (a result of rapid playback of layered grains gradually increasing in grain length and slowing down – for example, at 15:08).
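A minimal sketch of such a granular time-stretch, in Python with NumPy: grain length grows across the output (standing in for the ramp control signal) while the read position advances more slowly than output time. Grain density, window type and all parameter values are assumptions, and the glissando component of the actual effect (which depends on grain playback rate) is not modelled here.

```python
import numpy as np

def granular_stretch(x, sr=44100, out_dur=4.0, stretch=4.0,
                     size_start=0.01, size_end=0.12, density=50.0):
    """Minimal granulator: overlap-add Hann-windowed grains whose length
    grows over the output (a ramp control signal, size_start..size_end
    seconds) while the read position advances `stretch` times slower
    than output time. Parameter values are illustrative only."""
    n_out = int(out_dur * sr)
    out = np.zeros(n_out)
    hop = int(sr / density)                 # one grain per hop
    for start in range(0, n_out - 1, hop):
        frac = start / n_out                # 0..1 across the output
        g_len = int(sr * (size_start + frac * (size_end - size_start)))
        read = int(start / stretch)         # time-stretched read position
        grain = x[read:read + g_len]
        if len(grain) < 2:                  # source exhausted
            break
        win = np.hanning(len(grain))
        end = min(start + len(grain), n_out)
        out[start:end] += (grain * win)[: end - start]
    return out
```

Short grains at a high density at the start give way to longer, slower-evolving grains towards the end, which is the grain-growth behaviour the ramp control signal produced in the studio process.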
4.4 Tonality, glissando and structural functions
In contrast to the predominantly noise-based soundworld of shortwave radio, the
decision was made to develop tonal and pitch-centric material; stable intervallic
pitch provides a degree of aural grounding and accessibility for the listener
(Landy’s ‘something to hold onto’ factor).79 Additionally, the extraction of pitches
inherent in the spoken voice (through granular transformations), further suggested
the development of synthesised pitch content to support and enrich the vocal
material (through tonal layering). The application of glissando to different sound
materials formed behavioural relationships between voice materials (heard in the
previous example at 15:08), low frequency material (a sine wave, example at
10:25) and continuant synthesised drone materials (at 4:21, later reintroduced at
19:20). The latter glissando drone continuants provide a structural function within a
motion and growth process; glissando introduced towards the end of Part 1 (at
4:21) creates a divergence of tonality, shifting to relative pitch from the stability of
the preceding intervallic pitch material. This disruption in spectral space signifies
the onset of a musical transition, leading to a climactic gesture (at 4:40), from
which spectral density dissipates, concluding the introductory musical passage. A
variation on this growth process occurs later (at 19:20) where the reintroduction of
relative pitch glissando material leads into a returning explosive gesture, once more allowing density to dissipate and holding the listener in anticipation of climax and
release, transitioning from Part 3 to Part 4 of the work (see 4.7 Structural
analysis).
79. ‘There are works of sound-based music that concentrate on a single parameter of sound to a large extent, whether it be loudness (the extremely quiet and the extremely loud come to mind), spatial projection of sounds or more traditional aspects including pitch (e.g., tuning, but anything that is focused on audible pitch relationships is relevant), and/or rhythm.’ Landy, Art of Sound Organisation, 29.
4.5 Aleatoric development
Following initial generation and editing of sound materials, coherent musical combinations were identified through improvisation (spontaneous montaging and sound transformation with real-time processing) within a live electronics (laptop computer) performance environment. An Ableton Live session
was set up, with edited audio clips of generated sounds placed in discrete
channels containing a variety of real-time signal processing effects chains
(including, for example, filters, waveshapers and reverb plug-ins). Parameters to
be modified were assigned a physical control via MIDI. Through the spontaneity of
semi-improvised aleatoric exploration (and the later auditioning of these sound
events as captured through a stereo recording of the performance), successful
sound combinations and potential structural ideas were identified. The live
recording (see USB-drive, Appendix_C/Audio/Live_Electronics_Sets/
Live_Set_2.wav) provides insight into the process of semi-improvised performance
with fixed materials feeding directly into the compositional process; the
performance follows a similar development to Part 1 of Transmissions/Intercepts
(Live Set 2, 0:00 - 2:54) in amassing spectral density with noise-based materials
(here noticeably including drone materials from Section 2 of Glitches/Trajectories).
The performance then transitions into an early variation on Transmissions/
Intercepts’ tonal material (Live Set 2, 2:54 - 5:27). A timbral variation on this tonal
material occurs further into the performance (Live Set 2, 9:45 - 12:41, materials
later incorporated into Part 3 of Transmissions/Intercepts), following near identical
structure to Part 3 of the final composition, leading (Live Set 2, at 12:41) into an
antecedent shorter version of Part 4 of Transmissions/Intercepts, concluding the
live performance.
4.6 Typology of sounds
Transmissions/Intercepts features four primary classifications of sound types:
utterance, pitch-centric, noise-based and environmental sounds. These groupings,
however, contain several sub-classifications of spectromorphologies, some of
which link to other primary classifications (iterative granular voices, for example, fit
within utterance as a primary classification but are also pitch-centric). Additionally,
within each subcategory, the identified transformations and behaviours are
non-exclusive (for example, spoken iterative utterance is present in the work as both
untreated and filtered variations). Table 2 presents a typology of the most common
sound types, each with an audible example (time reference), and an indication of
where the sound types feature in the work.
Primary grouping | Sub-category of behaviour/transformed state | Audible at | Prominent in
Utterance | Spoken, untreated | 21:00 | Parts 2 and 4
Utterance | Spoken, iterative (repetition of number sequences) | 8:26 | Parts 2, 3 and 4
Utterance | Iterative, granular (phonemes, stable pitch) | 9:45 | Part 2
Utterance | Time-stretched/time-compressed (glissando pitch) | 15:08 | Part 3
Utterance | Filtered (spectrally reduced) | 8:26 | Parts 1, 2 and 3
Utterance | Morse code (iterative, stable pitch) | 9:23 | Parts 1 and 2
Pitch-centric | Synthesised graduated continuants (intervallic pitch) | 16:25 | Parts 1, 2, 3 and 4
Pitch-centric | Synthesised graduated continuants (glissando relative pitch) | 19:22 | (End of) Parts 1 and 3
Noise-centric | Radio interference (untreated and granulated versions) | 2:48 | Part 1 and (end of) Part 3
Noise-centric | Synthesised trajectorial iterative gestures | 2:00 | Parts 1, 2 and 3
Noise-centric | Synthesised oscillating trajectorial continuants | 6:05 | (End of) Part 1
Noise-centric | Synthesised, abstract gestures (stable, both relative and intervallic pitch, some iterative) | 6:19 / 9:43 | (End of) Part 1, Part 3
Noise-centric | Abstract gestural noise, unstable, trajectorial | 13:37 | Part 3
Noise-centric | Abstract textural noise (granular) | 12:46 | (End of) Part 2
Noise-centric | Continuant noise (two types: transformed noise/unstable electrical hum) | 14:01 / 22:51 | Parts 3 (transformed noise) and 4 (electrical hum)
Environmental | Continuant textural outdoor ambience (includes birdsong, footsteps and footsteps on leaves) | 20:28 | Part 4
Table 2: Typology of sounds in Transmissions/Intercepts
Of the four sound groupings identified, Table 2 highlights utterance, pitch-centric
and noise-centric materials as being present throughout the work, whereas
environmental sounds feature in Part 4 only. Noise-centric materials are both
gestural and textural, and feature more diverse spectromorphologies than the
three other groupings. Pitch-centric materials featured are predominantly
continuant in nature. Iteration and granulation are also notable as common
processes applied across utterance, pitch-centric and noise-centric materials.
Figure 12: Transmissions/Intercepts sonogram (Parts 1 and 2).
Figure 13: Transmissions/Intercepts sonogram (Parts 3 and 4).
4.7 Structural analysis (spectromorphology and space-form)
Transmissions/Intercepts takes the form of four movements. The following is a
brief overview and detailed analysis of each movement (see Figures 12 and 13 for
an overview of the work’s development: a sonogram analysis and waveform
representation of the complete work, with brief comments).
Part 1 overview (0:00 - 7:07)
An exposition laying the tonal groundwork and introducing noise-based gestural
materials featured in later movements; an emergent and increasingly dense
graduated continuant, texture-carried section intended to evoke claustrophobia as a
metaphor for the multitude of frequencies continually streaming broadcast sound.
Part 1 analysis: emergence/establishment
Part 1 of Transmissions/Intercepts is underpinned by two graduated continuant
drones (rendered as stereo pairs: one to the front left and right loudspeaker pair,
the other to the rear left and right loudspeakers).80 Both are centred around the
pitch of D (D0 and D1 in both pairs) and both subjected to subtle frequency
modulation and low-pass filtering, gradually revealing upper spectral content and
increasing the harshness of the (predominantly) sawtooth wave-like timbre, filling
spectral space. The continuants provide a root for spectral content, and establish a
root for intervallic pitch content yet to come. The resulting circumspatial sound
image creates a state of gradual emergence and expanse through coexistence
and motion rootedness. 81
This texture-setting is disrupted at 2:00 by the introduction of iterative and spatially
trajectorial gestures (synthesised sound materials treated with delay to create an
echoing effect), moving through the 5-channel image and defining circumspace.
80. For consistency with my composed spatial intentions, I use the term rear loudspeakers as
opposed to surround loudspeakers. Please refer to page 12 for the intended loudspeaker positions
i.e. a quadrophonic setup with a front centre loudspeaker.
81. ‘Many spectromorphologies are inherently non-rooted because there is no bass anchor
(fundamental note) to secure the texture.’ Smalley, ‘Spectromorphology’, 117.
Figure 14: Morphological stringing of sound materials in Transmissions/Intercepts Part 1.
As further variations of noise-based gestural content are introduced, the overall
spatial image becomes increasingly spectrally dense. Noise materials are
organised to imply that one sound event triggers another through convergence
and divergence within motion and growth. An example of this can be heard at 2:32
- 2:37 (see Figure 14), where remote granulated textural noise and granulated
radio broadcast noise are combined to produce a graduated onset-closed
termination gesture. The immediacy of the termination reveals the first vocal
material (an attack-decay of heavily distorted radio utterance, positioned in the
centre loudspeaker only), immediately fading out while being engulfed in a
spectrally dense, graduated onset-closed termination of noise material.
Iterative noise materials are introduced creating front to rear spatial movement in
proximate circumspace (from 2:18, with accelerating/decelerating speed of
iterations, suggesting expansion/contraction of time, and the winding-up/winding-down
of mechanised activity). Additional active noise materials move around the 5-channel
image (2:21 - 2:26). Individual grains of noise are randomly assigned to
different channels, adding further detail to the noise texture (example at 2:38 -
2:40). The interaction of sound behaviours and increased density creates a sense
of pressured motion passage, audible at 2:51 - 3:34; the accumulation of noise
materials forces other sound materials out, and (at 2:56) the continuant root D drones
are high-pass filtered to remove low-frequency content. The drones gradually
disappear to reveal the emergence of Morse code material in the centre
loudspeaker and a voice repeating ‘atenção’ (‘attention’, Portuguese), located in
the front left and right loudspeakers. Eventually the D drone texture is reintroduced
at 3:22, filtered in to reestablish low-end pitched content, seemingly forcing out all
other sound materials at 3:30.
From this point the 5-channel sound image becomes increasingly spectrally
dense, amassing continuant drones and noise materials previously introduced,
combined with an increasingly prominent Morse code relay and heavily distorted
broadcast utterance, maintaining a tannoy-like centre loudspeaker position. The
packed nature of the spectral density fills proximate circumspace. Several high-pitched drones are gradually introduced, their frequencies modulated (initially
centred around the root, minor third and fifth notes – a D minor triad), creating
note-based intervallic to relative pitch variation, resulting in instability and tension,
adding a further layer of density to the upper spectral regions. Two explosive
gestures at 4:40 and 5:05 mark the climax of the introduction and trigger the
dissipation of sound materials, leaving the D continuant drones to gradually
terminate.
Part 1 concludes with a sparse noise-based section (5:23 - 7:07), focusing on
abstract (synthesised) iterative and oscillating sound trajectories occurring in
perspectival circumspace. Contrasting the density of the preceding soundworld,
this passage reintroduces materials previously established (such as the low
frequency iterative gestures from 2:00), presenting them alongside new,
synthesised arrival-departure continuant sounds. The reduced spectral density is
intended to draw the listener's focus towards individual sound behaviours as they
emerge and disappear in circumspace. A final arrival-departure continuant at 6:52
gradually terminates to reveal the textural onset of Part 2.
Part 2 overview (7:05 - 13:43)
A texture-led movement reimagining the number station soundworld through
electroacoustic musical language. The spoken voice relaying number sequences
is presented in varied states of transformation, while retaining its causal identity
(human utterance).82 Tonality in the voice and synthesised materials create
moments of tension and resolve, finally leading into a graduated termination
texture of granular noise to conclude.
Part 2 analysis: reimagining the number station soundworld
Voice transformations are spatialised to produce an immersive texture-carried
opening passage, focusing on internal texture activity and intended to create a
sense of stasis. This is interrupted (at 7:47) by the graduated onset of a granulated
voice continuant texture. Pitch content inherent in the spoken word (‘niner’, female
voice) is emphasised (through granulation), producing pitch stability that is then exploited
for musical effect (at 7:47 - 8:08, B2 and C3). A synthesised gesture combined with
a male voice (‘niner’ at 8:08) triggers an increase in texture activity; pitch content
embedded within this new textural material is derived from Morse code recordings
and transformations. The male voice emerges again, positioned in the centre
loudspeaker and intended to suggest a consistent, fixed (broadcast) spatial
position within the 5-channel image. The reduction of frequency content and
additional bit reduction applied to the voice recording are intended to emulate
82. ‘Once we can grasp the relationship between the sounding body and the cause of the sound we
feel we have captured a certain understanding: intuitive knowledge of the human physical gesture
involved is inextricably bound up with our knowledge of music as an activity.’ Ibid., 109.
mediatic space. Further, the repetition of a number sequence (‘Zero, one, two, two,
niner’) evokes combined mechanised space and utterance space. Additional
musical development occurs at 8:32 where a transitional remote gesture triggers
new synthesised sound materials, including a low-pitched drone texture (B0),
providing a root for the tonal content inherent in both the voice granulations and
the Morse code transformations. A second number sequence (‘Zero, one, five,
four, niner’) is repeated, spatialised in circumspace and transformed to further
develop the notion of combined mechanised, utterance and mediatic space. 83 The
centre loudspeaker number relay becomes increasingly blurred through filtering
and applied reverb as the voice recedes into distal space. At 9:20, a 5-channel
abstract noise gesture appears to drive out most of the current sound activity,
leaving remnants of the texture-carried activity introduced at 8:08, while the low B0
drone fades away; spectral space clears to reveal an emergent Morse code relay
positioned in the centre loudspeaker, recalling the Morse code revealed in Part 1
(at 3:15).
The next development occurs at 9:35, where synthesised gestural sounds
(positioned in the rear left and right loudspeakers) trigger a further texture-carried
section (9:42), reintroducing iterations of phonemes (extracted from the female
voice ‘niner’) moving through circumspace as the centrally fixed Morse code relay
maintains stability. The focus here is primarily on tonal relations between pitched
content inherent in the voice (also re-pitched through transposition) and tonal
synthesised gestural material, establishing at 9:43 a G major second inversion
(D0, G0, D1, G1 in synthesised sound materials and B2 in granular vocal).
Emulating behaviours further strengthen the relationships across sound types;
granular voice transformations change pitch with glissando – this rising/falling
glissando is mirrored in the lower spectral range with sine wave tones, suggesting
vertical (spectral) spatial movement. Table 3 details tonal development from 9:43
to its resolve on a C minor-major chord (10:41) and then to C minor (10:46). Figure 15
highlights the points on the timeline where tonality changes (vertical blue lines a to
g as referenced in Table 3) and also highlights spectral development as visualised
by sonogram analysis.
83. Combined utterance, mechanised and mediatic space is achieved through the sound material
(the recorded voice), its behaviour (repetition), and its broadcast-like timbral qualities (achieved
through transformation).
Figure 15 reference | Time from | Synthesised tonality/bowed glass transformations | Voice granulations
a) | 9:43 | G major tonality, D0, G0, D1, G1 (synthesised reverse attack-decay gestures). | B2.
b) | 9:53 | Repeat gesture. | Shifts to C3 (at 9:59).
c) | 10:02 | Repeat gesture. | Maintains C3.
d) | 10:12 | Re-pitched gesture (G#1, E♭2). | Maintains C3 with additional introduction of D3, moving to E♭3 at 10:15. At 10:19 C3 is pitched down to B2.
e) | 10:20 | Transposed gesture (G1, D2). | Maintains B2 and E♭3. At 10:26 E♭3 is pitched down to D3 then back up to E♭3, as B2 is pitched back up to C3.
f) | 10:31 | Re-pitched gesture (G#1, E♭2). | Maintains E♭3 and C3. At 10:35 emergence and disappearance of ‘row, row, row’ vocal at G2 pitch.
g) | 10:41 | Gradual introduction of bowed glass transformations (G3, E♭4). | Maintains E♭3 while C3 is pitched back down to B2, then gradually back up to C3 by 10:46 (resolving to C minor).
Table 3: Figure 15 pitch references.
At 10:48 a noise-based granular texture emerges (previously introduced at 9:20 as
a short gesture), alongside a graduated termination of the granular vocals. New
third-order surrogate pitched continuants (bowed glass drones) are introduced to
vary timbre. As spectral content begins to clear (low frequency content exits at
11:54), we hear two transformed variations on the utterance ‘zero’, time-stretched
and treated with delay (referencing the number sequence earlier in the section).
Two untreated voices (one male, one female) emerge – recalling in turn
the number sequence ‘zero, one, five, fo-wer, niner’, followed by ‘zero’ (male),
‘zero’ (female), ’zero’ (female) – and gradually retreat into distal space; the untreated
voices are intended to imply a degree of real-world sound after journeying through
the reimagined broadcast soundworld.
Figure 15: Spectral development in Transmissions/Intercepts Part 2 (09:35 - 12:50).
At 12:27 high-pitched granular continuants enter as the bowed glass drones exit.
The noise texture grows more dominant and active, rising at 12:46 to engulf the
circumspatial image, driving out all other materials. This texture motion gradually
fragments into audible variations of the same granular-based coexistent
behaviour, 84 as variations on established sound materials (gestures introduced in
Part 1 at 2:00) are reintroduced (13:01 - 13:13). These gestures depart and the
section concludes at 13:40 with a graduated termination of the noise texture.
84. ‘The continuity–discontinuity continuum runs from sustained motion at one extreme to iterative
motion at the other. If iterative repetitions become too widely spaced then separate objects will be
heard. This tendency is possible with some of the multidirectional growth processes if the internal
texture becomes sparser during fragmentation in the growth process.’ Smalley,
‘Spectromorphology’, 117.
Part 3 overview (13:43 - 20:14)
A dramatic gesture-led noise-based passage intended to mimic notions of radio
tuning and interference. The first voice relay has been extracted from an actual
number station broadcast. As the section develops, noise material and extensive
voice transformations lead into a radical shift towards streaming tonality, evoking
an imagined euphoric broadcast space.
Part 3 analysis: (first half, 13:43 - 16:08) contrast and morphological
stringing
Intended as a contrast to the preceding sections, Part 3 initially sets out to explore
both (the illusion of) time manipulation (example at 15:02 - 15:30; rapid shifts
between gestural events that propel time forwards, and sustained granular time-stretched voice transformations intended to suggest suspended time), and the
organisation of materials resulting in morphological stringing and timbral
metamorphosis. 85
A brief period of silence is harshly interrupted by a timbrally abrasive (centre-loudspeaker positioned) edited recording of an actual number station broadcast.
Moments of silence are contrasted with short unstable bursts of noise in between
the number relays. The noise materials make fast unpredictable movements within
5-channels, contrasting with the stability of the voice material and number
sequences explored in Part 2.
In a development at 13:58, two transformed white noise continuants (triggered by
the immediate termination of previous noise material) pan left to right (positioned
in the front left and right, and rear left and right, loudspeakers respectively). The
voice begins to multiply and is distributed outwards from the centre loudspeaker to
the 5-channel image. The unstable noise materials maintain, and then increase,
presence until (at 14:27) the word ‘niner’ immediately terminates all activity, falling
85. ‘Sound transformations may be defined as a timbral metamorphosis (i.e., from point 'A'
seemingly naturally to point 'B') within one single sound event or sonorous gesture. In the latter
case this can either take place within a sound continuum or by way of a discrete sound's repetition
being transformed into that of a second [, third, etc.] sound, again as 'naturally' as possible.’ Leigh
Landy, ‘Sound Transformations in Electroacoustic Music’, 2001. [online article] Available at <http://
www.composersdesktop.com/landyeam.html>, accessed 3 April 2016.
to silence. ‘Niner’ repeats in isolation a second time at 14:30, followed by a further,
extended silence. Pitch is reintroduced with the arrival of a third transformed ‘niner’
(preceded by a short noise gesture); granular layers of utterance are re-pitched to
form a G minor tonality, and layered with a low sine wave tone; both then begin a
graduated descending glissando.
From 15:00 a combination of time-stretch transformations of spoken numbers and
various noise-based remote materials is organised to infer causality from one
sound event to the next. Voice transformations here suggest a speeding up and
slowing down of time, with ascending and descending glissandi becoming
suggestive of powering up/down-type behaviour (mechanised utterance). Further
utterances (both stretched and time-compressed – examples of which are audible
at 15:20, 15:36 and 15:48) decay or terminate to reveal (or appear to morph into) a
variety of behaviourally active noise-based remote sound materials. Synchronicity
and morphological stringing of events work in conjunction with spectral rise and fall
inherent in the voice granulations, until the final (freeze transformation) ‘tr-ee’ (at
16:08) is layered with an ascending glissando granular voice. This ascension
merges with the reintroduction of stable pitch content, marking the onset of Part
3’s climax.
Part 3 analysis: (second half, 16:08 - 20:14) spectral space and pitch space
At 16:08 a D4-pitched graduated continuant emerges. From here
Part 3 becomes texture-carried through the use of layered intervallic pitched
continuant sound materials (created from a single synthesised tone, re-pitched,
stretched and layered multiple times). Use of sustain and intervallic pitch are
combined with an increasing density of graduated onset-closed termination noise-based gestures and utterance (a metaphor for radio interference), creating an
interplay between stable and unstable content and gradually amassing spectral
density in circumspace. While it was not a conscious compositional choice, on
reflection, the inclusion of numbers throughout this section seems to imply some
form of countdown to an approaching event or development, further building
anticipation. The function of spectral space within the texture-setting is a key part
of the development (specifically tonal pitch space); 86 tension is built through the
introduction, removal and reintroduction of higher and lower frequency content, as
tonality and gestural noise amass (see Figure 16). Examples of this can be heard
between 17:46 to 19:00, where low frequency content is subtracted and
reintroduced.
Figure 16: Spectral development in Transmissions/Intercepts Part 3 (15:20 - 19:10).
Tonality (organised around a tonic of D) shifts from major third to suspended fourth
(16:26 to 17:00), then back to the root note. D minor tonality is next introduced and
gradually developed before finally making a dramatic shift to G minor at the
section’s climax (18:20). The arrival of this climax is marked by the graduated
introduction (via the opening of a high-pass filter) of a G1 tone, transformed to
contain audible distortion; the combination of tonality and noise results in a
dominant continuant, intended to evoke an imagined high energy broadcast
stream. Sustained (and increased) activity in the upper spectral regions further
adds to the dense and tonally rich spectral image.
86. Tonal pitch space: ‘The subdivision of spectral space into incremental steps that are deployed in
intervallic combinations – a sub-category of spectral space.’ Smalley, ‘Space-form’, 56.
The section concludes as spectral space clears (through gradual high-pass
filtering of content), to reveal once more the initial D4 tone, as a voice is heard
reading through the phonetic alphabet. Voice transformations are introduced and
layered, intending to suggest connections between natural (untreated voice),
mechanised (granulated) and mediatic (filtered/frequency shifted) forms of
utterance. Sound behaviours and interactions present in Part 1’s climax are then
emulated; the reintroduction of relative pitch (glissando) continuants marks the
transitional segue into a final explosive gesture (at 19:36), from which noise
materials gradually dissipate, leading into the work’s final movement.
Part 4 overview (19:58 - 24:32)
A timbrally softer extended variation on Part 3’s tonal content, introducing field
recordings. Tonal materials are combined with untransformed voices and source
bonded outdoor sounds. The listener is taken out of the imagined broadcast
soundworld and repositioned within an imagined real-world environment (the
space in which the broadcasts are received), producing enacted space.87 Tonality
and timbre in this section are expressive of melancholy as something the listener
may associate with reflection (as the work’s concluding movement), and as a
musical contrast to the predominantly lively preceding movements.
Part 4 analysis: from an abstract aural discourse, towards the abstracted
and mimetic
Referring to Emmerson’s language grid,88 the majority of Transmissions/Intercepts’
preceding sound materials are abstract in nature. Spectromorphologies included
throughout the work were developed and selected for musical effect (intrinsic
qualities), yet may be found to suggest extrinsic associations related to the work’s
theme (as previously discussed, see 4.2 Extrinsic associations/intrinsic
87. ‘Spaces produced by human activity I refer to as enacted spaces, and they can be divided into
two primary types – utterance spaces, which are articulated by vocal sound, and agential spaces,
where space is produced by human movement and (inter)action with objects, surfaces,
substances, and built structures; we can also include human intervention in the landscape.’ Ibid.,
38.
88. Emmerson is concerned with ‘[…] the possible relation of the sounds to associated or evoked
images in the mind of the listener.’ Simon Emmerson, ‘The Relation of Language to Materials’, in
The Language of Electroacoustic Music, 1986, Macmillan Press Ltd., 17 - 39, 17.
!68
spectromorphologies). Abstract content, therefore, is intended to produce a combined
aural and mimetic musical discourse.89
The emergence of outdoor/field-recorded materials results in a spatiomorphological
development,90 shifting the listening experience towards the mimetic via this new
abstracted material. 91 Source bonded sounds include birdsong, general outdoor
ambience and physical movement (footsteps and footsteps on leaves), producing
enacted space and agential space, and contrasting the work’s prior focus on
mediatic and mechanised space-forms while providing additional aural grounding
for the listener.92 The transportation of the listener between spatial settings, and
the combining of abstract and abstracted materials, creates an interplay between
the real (source bonded nature) and the (preceding) imaginary spaces.
From 20:00 multiple variations of a short synthesised tone (timbrally softer than
that used in Part 3’s climax, containing less high-frequency content) are re-pitched, stretched and layered according to aural preference, resulting in tonal
streaming texture motion expressive of melancholy.93 This variation reprise is
intended to mirror the tonality of Part 3 while contrasting the preceding spectral
density and climax. A sense of musical rise and fall is achieved through the
spectromorphologies themselves (graduated onset-graduated terminations),
applied amplitude envelopes and moments of near silence where tonal materials
all but disappear. Several untransformed voices relaying numbers emerge and are
89. See Emmerson, ‘4. Combination of aural and mimetic discourse: Abstract syntax.’ in ‘The
Relation of Language’, 30 - 31.
90. ‘I use the term spatiomorphology to highlight this special concentration on exploring spatial
properties and spatial change, such that they constitute a different, even separate category of sonic
experience. In this case spectromorphology becomes the medium through which space can be
explored and experienced. Space, heard through spectromorphology, becomes a new type of
‘source’ bonding.’ Smalley, ‘Spectromorphology’, 122.
91. See Emmerson, ‘5. Combination of aural and mimetic discourse: Combination of abstract and
abstracted syntax’, in ‘The Relation of Language’, 31 - 33.
92. ‘The recorded ‘scene’ provides a low-level reference – a window on a real event which has a
documentary connection with lived experience that in a sense cannot be reduced, although it can
be influenced by, for example, details of recording focus determining the rhythm of presentation
and perspectives on how we are being offered the scene. So then if a more abstracted sound world
is developed around this using electroacoustic transformation tools, we have in the real-world
sound event a groundwork for meaning.’ Young, ‘Sound morphology’, 9.
93. ‘For we describe music emotively even when it is perfectly clear that the music is not (and cannot
be) expressing the emotions we ascribe to it, or when we have no way of knowing whether it
expresses those emotions because we have no way of knowing what emotive state the composer
was in when he wrote it.’ Peter Kivy, The Corded Shell: Reflections on Musical Expression, 1980,
Princeton University Press, 14.
spatialised to create continuous texture motion in circumspace; mediatic space is
no longer present.
At 22:28 transformed voice material (spectromorphologically linked to the first
voice texture in Part 2) emerges low in the mix. At 22:50 remote noise material is
superimposed on the environmental space (electrical buzz and hum referencing
electrical activity), combined with untransformed vocal texture – one voice
repeating the word ‘fin’ (‘end’, Spanish) layered and spatialised in circumspace.
There is a subtle climax (22:50 – 23:27) that concludes with a closed termination.
A final swell of material is led by the (graduated) introduction of a pulsing bass
tone (in D), building tension one final time before its sudden closed termination,
leaving the remaining tonal content to gradually terminate and concluding the
piece.
Conclusion
Transmissions/Intercepts achieves musical coherence in long-form composition
through several applied methodologies, including the assimilation and sonic
interpretation of conceptually linked space-forms, the merging of tonally based
textural material with noise-based gestural content, and the exploration of
reciprocity between the abstract (aural discourse) and the abstracted (mimetic
discourse) via concepts of source bonding and spatiomorphology.
CHAPTER 5. REDUCTIONS/EXPANSES: SPATIAL TRANSCENDENCE AND
STRATEGIES FOR SOUND DIFFUSION PERFORMANCE
When a composer is interested in grain, in the internal evolution of
sound events, in spectro-morphology (Smalley 1986), in the
textural flux and gestural articulation of time which grows out of a
consideration (via ‘reduced listening’, perhaps?) of the unique
sound object, in the event itself rather than the intervals between
events, then sculpting sound into a performance space is not a
contradiction of the composer’s intentions – it is a continuation of
them.94
5.1 Overview
Reductions/Expanses (13:39) explores (the illusion of) spatial transcendence, 95
and notions of temporal prolongation and suspension through (primarily) third-order surrogate sounds.96 The title refers to the result of spectrally reduced
materials (at very low amplitudes), creating a horizontal perspectival expanse, able
to suggest a listening experience originating from the edges of distal circumspace.
Composed space may then suggest events occurring beyond the boundaries of a
superimposed space.97 Temporal extension and suspension are achieved through
slowly evolving continuant spectromorphologies, and an extended passage of
centric (pericentral) texture motion. As a predominantly texture-led work, notions of
emergence and disappearance play functional roles within musical structuring and
transition. As all my electroacoustic works are intended to be experienced live,
94. Jonty Harrison, ‘Sound, space, sculpture: some thoughts on the ‘what’, ‘how’ and ‘why’ of sound
diffusion’, Organised Sound, 1998, 3(2), Cambridge University Press, 117 - 127, 125.
95. ‘Containment and transcendence are experiential qualities associated with the image, and can
be regarded as a companion concept to enclosure/ouverture.’ Smalley, ‘Space-form’, 53.
96. Spatial transcendence is often considered in reference to (environmental) soundscapes. Smalley
writes: ‘A circumspatial or purely prospective image that suggests ‘environmental’ dimensions,
through whatever combination of spectral, source-bonded and perspectival means, is liable
psychologically to transcend the boundaries of the listening space. This is because, firstly, we know
that environment is more expansive than any concert hall or domestic setting, secondly, because
the suggestion of the openness of environmental space tends to eradicate consciousness of
boundary walls, and thirdly because transmodal perception transports our imagination into
environmental settings.’ Ibid.
97. ‘The (indoors) listening space encloses and may either confine or expand the composed space.
This ultimate space where the listener perceives is therefore a superimposed space, a nesting of
the composed spaces within a listening space.’ Denis Smalley, ‘Spatial experience in electroacoustic music’, in L’Espace du Son II. Special Edition of Lien: revue d’esthetique musicale, 1991,
Ohain: Editions Musique et Recherches, 123 - 126, 123.
analysis of two approaches to performing the piece provides appropriate examples
of my personal methodology towards sound diffusion, seeking to avoid an absolute
recreation of composed space and favouring instead the exploration of alternate
assignments of channel stems to loudspeaker groupings in attempts to exploit
superimposed space for musical effect.98
5.2 Source materials and transformation
Studio recordings were made of metallic sound sources including sheet metal, iron
rods, cymbals and resonant U-shaped iron ground hooks. Materials were captured
using both conventional microphones and contact microphones in order to extract
internal resonances from the sound objects.99 Sounds were transformed to
obscure the original source-causes while retaining some original
spectromorphological elements; combined granulation and reverb processes, for
example, were applied to extend the resonances of gestural attack-decays into
sustained continuants. I sought to merge the resonant internal spatial properties of
these objects with tonal materials developed from source recordings of (attack-decay) acoustic guitar notes and chords. Granulation of guitar chords extended
the gestural source recordings into third-order continuants, producing material that
may or may not suggest an acoustic guitar as a possible source.100 Additionally,
through transformation and organisation of component materials I sought to
achieve a sense of distal spatial detailing through texture with blurred (spectrally
reduced) image definition featuring internal spectral activity, and complexity in
gesture unit construction.101
98
‘There is certainly a widespread belief among many composers that in performance, the aim
should be to attempt to reconstruct exactly the spatial image the composer put on the tape.’
Harrison, 'Sound, space, sculpture’, 124.
99
‘Internal space occurs when a spectromorphology itself seems to enclose a space. Resonances
internal to objects (hollow wooden resonance, metallic resonance, stringed instrument pizzicato
resonance, etc.) can give the impression that their vibrations are enclosed by some kind of solid
material. Internal space is therefore source bonded in that one needs this sense of an actual or
imagined sounding body.’ Smalley, ‘Spectromorphology’, 122.
100
As with Glitches/Trajectories and Transmissions/Intercepts, recorded source materials and
transformations – here specifically metallic sound sources and granular processing – were initially
explored via semi-improvised live electronics performance, prompting further studio exploration and
development.
101
‘We should note that a distant image could be blurred or clear, as could a close image.’ Smalley,
‘Spectromorphology’, 124.
5.3 Structure and development
Structurally in two halves, Reductions/Expanses first establishes a shifting
soundworld moving between passages of spectral density and activity in more
proximate spatial positions – where gesture and texture play relatively equal roles
(0:00 - 1:27) – to texture-led spectral reduction and temporal stasis (1:27 - 3:17)
occurring in distal space (intending to achieve a sense of spatial transcendence).
Texture detailing is explored through transformations of materials resulting in
spectrally reduced continuants featuring internal spectral development and motion
(example audible at 1:56 - 2:08). Constructed gesture units emerge from distal
space to dominate as they pass through egocentric space, departing once more
into distal space, punctuating the less active texture motions and assisting forward
temporal motion (2:38 - 3:10). From 3:10 a spectral rising motion emerges from
the preceding gesture unit, leading into a highly active passage of spectral density
featuring movement in proximate circumspace and through egocentric space.
Spectromorphologies in this passage (introduced previously at 0:23 and 0:54 and
reintroduced at 3:47, 4:03 and 4:24) provide an example of gesture resulting in a
form of contiguous spatial texture; 102 the gestures imply a high velocity movement
through vectorial space leaving a trail of spectromorphological remnants behind.103
Tonality begins to emerge at 4:37 as density clears, leading into a second passage
of spectral reduction and temporal stasis. As the spatially transcendent, tonal
continuants appear to recede further into distal space, temporal stasis is harshly
interrupted by a domineering attack-decay gesture in proximate space comprising
transformations of (struck) iron rods, layered, re-pitched and spatialised (at 6:23).
This gesture reintroduces tonal textural materials (granular guitar transformations)
leading – via a second iron rod strike gesture – into the second half of the work at
7:05.
102
‘Spatial texture is concerned with how the spatial perspective is revealed through time. This is a
question of contiguity. Space is contiguous when revealed, for example, in continuous motion
through space (such as in a left–right gestural sweep), or when a spectromorphology occupies a
spread setting (without spatial gaps).’ Ibid.
103
‘A trajectory is not necessarily a concentrated point-source. As the head or bulk of a gesture
moves through space it can leave residues behind. Trajectories can therefore leave trails, can be
smeared across space or be spread in a more egalitarian way through space. It may be that the
establishing of a residue is part of a transformation of a gesture into a spread setting – the spread
setting is introduced by a trajectory.’ Ibid.
The second half focuses on a combination of centric motion and growth, spectral
amassing and clearing, and the illusion of temporal prolongation. 104 This is
achieved via several processes, including the creation and gradual layering of
multiple granular continuant spectral resonances (internal spatial qualities
captured from recordings of metallic sound objects) with tonal granular guitar
transformations; spatial deployment results in textural pericentral motion. 105 Guitar
recordings have in part been (aurally) selected to mirror component spectral and
tonal content found within the metal resonances, allowing the possibility of timbral
metamorphosis to play a compositional role (metal resonances and guitar
granulations – as discrete spectromorphologies – may appear to morph between
one another). Exogenous growth allows the texture motion to gradually amass and
clear, while never achieving a state of full (packed) spectral occupancy.106
Spectromorphological recycling of materials enhances notions of temporal
extension and suspension within the growth process (from approximately 8:00 - 9:17).107 A final rise and fall of gestural and textural activity arrives and gradually
terminates (10:08 - 10:42), leading to a sparse passage of tonally-based granular
pericentral texture motion. Activity and density subtly grow with the reintroduction
of established noise-based spectromorphologies (from 11:55) before transitioning
into a less active set of continuant drones, gradually terminating to conclude the
piece.
5.4 Embracing superimposed space: considerations for sound diffusion
performance
My works are intended to be experienced through live performance – diffused in
performance spaces – using available loudspeaker configurations to further
104
‘Generally in music, centric motion is expressed by spectromorphological recycling, giving an
impression of motion related to a central point. This can be achieved through spectromorphological
variation alone, but is frequently aided by spatial motion.’ Ibid., 116.
105
‘Centric motions can also be associated with growth. For example, I can think of rotating
motions which gather textural materials to them as they expand spectrally – a combination of
rotation and exogenous or endogenous growth. The spin, spiral and vortex are rotational variations.
Centrifugal (flung out) and pericentral (merely moving around a centre) are also a related group.’
Ibid.
106
‘Thus a packed or compressed spectral space is compacted so that it suffocates and blots out
other spectromorphologies.’ Ibid., 121.
107
‘Continuing recycling, like other forms of repetition, can give an impression of structural stasis,
but centric motions can also be strongly directional – vortical and spiral motions have this
possibility, for example.’ Ibid., 116.
enhance their composed spatial qualities, and where possible, seeking satisfactory
reinterpretations of composed space. My methodology for sound diffusion
performance is a combined semi-improvised/semi-planned strategy wherein
several structural arrival points in the composition are identified. These are then
assigned a particular distribution of channel stems to loudspeakers, providing a
framework for improvised diffusion to be undertaken with confidence during
performance. It is often the case that time to rehearse on loudspeaker concert
systems is extremely limited; in my experience, composers generally tend to be
assigned a soundcheck allocation that is anywhere between one and two and a
half times the duration of the work they intend to perform. With such limited access
to a given performance system the improvisational aspects of diffusion
performance require (for me personally) an acceptance and an embracing of
superimposed space.108
Superimposed space in performance space is primarily defined by two factors: the
acoustic properties of the space itself and the configuration (groupings) of
loudspeakers and their positions within the performance space. 109 My involvement
in the design and implementation of the University of Manchester’s MANTIS 48-loudspeaker sound diffusion system (working alongside Professor David Berezan
and NOVARS PhD student Constantin Popp) provided a richly rewarding
experience of experimentation with loudspeaker positions and groupings; alternate
layouts were often explored alongside more standardised speaker placements. 110
Loudspeaker groupings
Figure 17 is a top-down plan of the MANTIS 48-loudspeaker diffusion system,
highlighting four specific groupings employed for the concert premiere of
108
Despite limited rehearsal times I often find by the end of a soundcheck that I am confident in my
intentions for the diffusion of the work. I do not create diffusion scores, instead preferring to make
performance decisions based on aural response. I have, however, found it beneficial to make brief
notes during sound check to refer to if necessary during performance; these notes always include
initial diffusion desk level settings for the start of the performance, and generally refer to specific
loudspeaker groupings that should be emphasised at the arrival points identified.
109
Superimposed space often changes considerably once the audience is seated within the
performance space. The audience often absorb an indeterminable amount of sound reflections,
thus altering the acoustic properties of the space. The audience therefore becomes an additional
contributor to superimposed space.
110
The MANTIS (Manchester Theatre In Sound) system always has fixed configurations to
accommodate playback of stereo, quadrophonic, 5-channel and 8-channel works, providing a
standardised spatial framework for diffusion performance.
Reductions/Expanses. 111 A Main 8 ring of Genelec 8050 loudspeakers surrounding
the audience is highlighted in red.112 A Distant 8 ring of Genelec 8040
loudspeakers is highlighted in blue. The Distant 8 are at floor height, angled
approximately 45° (backwards) from an upright position, directed away from the
concert hall’s central spot. 113 Highlighted in green is a Stage 8 grouping, a quasi-ring of loudspeakers (mixed models and sizes). Figure 18 shows the on-stage
positions of the Stage 8, the loudspeaker directions, and my personal channel
assignments.114 By assigning all 8 channels to this formation, the entire spatial
image may be positioned in front of the audience: channels 1 and 2 outputting
from a stage high pair (on stands at the rear), channels 3 and 4 outputting from
stage wide loudspeakers, channels 5 and 6 assigned to loudspeakers centrally
placed, facing inwards towards one another, creating a stage fill effect, and finally
channels 7 and 8 outputting at the front of the stage, lower in position relative to
loudspeakers 3 to 6, in close proximity to each other, creating a mono-type
output and functioning as a stage solo pair.
Highlighted in orange (Figure 17) is an Inner 4 ring, positioned around the
performer, within the audience seating area; channel stems are grouped in pairs to
output from each loudspeaker as shown, drawing the 8-channel image into a more
intimate position nesting within egocentric space. Directional positions of
111
Reductions/Expanses was premiered at the MANTIS Festival, University of Manchester, UK, 17
October 2015.
112
Note the MANTIS Main 8 is based on four stereo pairings of loudspeakers surrounding the
audience from front to rear in a ring formation (positioned as shown in Figure 17) and differs in
configuration from Harrison’s original notion of a Main 8, as employed in the design of the
University of Birmingham’s BEAST system (this being a main stage pair, a wide pair, a distant pair
– at the rear of the stage – and a rear pair behind the audience). See Jonty Harrison, ‘Diffusion:
Theories and Practices, with Particular Reference to the BEAST System’, eContact 2.4 on the
website of the Canadian Electroacoustic Community/ Communauté electroacoustique canadienne,
2000. [online article] Available at <http://econtact.ca/2_4/Beast.htm>, accessed 28 August 2016.
113
‘In short halls, it can sometimes be difficult to achieve a real sense of distance, but if the wall at
the back of the stage is brick or stone, very distant speakers facing away from the audience and
reflecting off the wall can be effective (the high-frequency attenuation and general reduction in
source location mimicking remarkably well the sensation of the sound being further away).’
Harrison, ‘Sound, space, sculpture’, 122 - 123.
114
On-stage loudspeaker positions and directions were decided in conjunction with Professor
Berezan and Constantin Popp. Composers performing multi-channel works, however, generally only
assign front-oriented composed channels to these loudspeakers. Considering an 8-channel work
where stems are assigned to the Main 8 as shown on page 11, channels 1 to 4 would be assigned
to stage loudspeakers, whereas channels 5 to 8 would not be assigned to any on-stage
loudspeakers; instead they would be assigned to loudspeakers positioned at the sides and rear of
the concert hall, maintaining a surround position to the centrally-located audience. The assigning of
all 8 channels to a frontal, on-stage position (and directional choices applied to the Inner 4 desk
loudspeakers), were personal preferences of mine.
loudspeakers mimic a carousel-like configuration, and (as with the Stage 8) result
in a diffused space considerably different to that of the composed space. Figure 19
is an in-studio recreation of this configuration to further clarify.
Figure 17: MANTIS 48-channel performance system and four possible 8-channel groupings. 115
115
This diagram was developed from an initial template created by Professor David Berezan, as a
standard template used in visualising loudspeaker layouts for any given MANTIS concert system,
as would fit in the John Thaw Theatre (located in the University of Manchester’s Martin Harris
Centre for Music and Drama, Bridgeford Street, Manchester M13 9PL).
Figure 18: Stage 8 quasi-ring (on-stage positions and directions).
Figure 19: Inner 4 (8-channels to 4 loudspeakers and directions).
Figure 20: Reductions/Expanses structural overview/MANTIS loudspeaker group assignments.
5.5 Diffusion strategy 1 (large-scale concert system)
Figure 20 provides a concise overview of Reductions/Expanses’ structure,
identified as nine passages; right-hand boxes detail the primary choices of spatial
distribution in relation to groupings discussed above (for clarity, colour coding of
right-hand boxes in Figure 20 relates to colour coding applied to loudspeaker
groupings in Figure 17). The nine passages highlight examples of structural arrival
points identified during rehearsal. While these may be subject to change during
performance (hence the semi-improvised aspects), the final four passages (6 to 9)
and their loudspeaker groupings were identified as core to the successful
embracing of superimposed space.
Throughout performance, prominence is given to the Main 8 group. Each of these
loudspeakers is allocated an individual fader on the diffusion desk for controlling
output volumes; during performance I would alter these to emphasise/extend
dynamic highs and lows, also making subtle but randomised attenuation
movements during more active passages such as Passage 3 (active passage in
proximate space, Figure 20). 116 Passages 2, 4 and 9 (see Figure 20, white
descriptor boxes) are focused on distal spatial activity and temporal stasis; here
volume attenuation in diffusion performance emphasises distal activity in
composed space, seeking a sense of spatial transcendence within the
performance space. Moving from Passage 3 into Passage 4 I gradually shift the
entire 8-channel image to the Stage 8; here morphing from a circumspatial output
to a panoramic position located exclusively in front of the audience, featuring
perspectival depth and height variation (see Figure 18). The Main 8 are
reintroduced in time for the abrupt gesture at the start of Passage 5 (at 6:23) to
surround the audience once more, following the stasis-like graduated termination
passage.
Passages 6 to 9 (outlined in black, Figure 20) employ a planned sequence of
spatiomorphological developments. From the circumspatial amassing of
pericentral textural movements (Passage 6), as texture becomes more sparse,
transition occurs to the Inner 4 group (Passage 7), creating a sparse intimacy of
spatial distribution nesting within egocentric space. As spectral occupancy and
activity rises once more (Passage 8) the sound is redistributed to the Stage 8,
creating a panoramic perspectival image. A gradual transition back to the Main 8 is
undertaken, leading into the concluding stasis continuants (Passage 9). A final
transition is gradually made from the Main 8 to the Distant 8 seeking spatial
transcendence once more as the performance concludes.
Where spectromorphological recycling of materials and sustained pericentral
texture motion through these final passages produces a degree of subtlety in the
composed 8-channel structural development, the diffusion outlined above results
in augmented structural development via spatiomorphology. Reinterpretation of the
composed space-forms via a large-scale loudspeaker performance system is
therefore used to produce a more immersive and expansive spatial listening
experience.
116
‘In performance I would, at the very least, advocate enhancing these dynamic strata – making
the loud material louder and the quiet material quieter–and thus stretching out the dynamic range
to be something nearer what the ear expects in a concert situation.’ Harrison, ‘Sound, space,
sculpture’, 121.
Figure 21: Reductions/Expanses channel assignments for a small-scale 12 x loudspeaker diffusion
system.
5.6 Diffusion strategy 2 (small-scale concert system)
A performance of Reductions/Expanses at the International Festival for
Innovations in Music Production and Composition 2016 (iFIMPaC) provides
evidence of an alternate diffusion strategy, designed to exploit a small-scale
system. 117 Figure 21 presents an approximation of the available performance
system and channel assignments. The limitations of a system based on a single
Main 8 ring of loudspeakers, an additional stereo Stage Wide pair and a Stage
Centre loudspeaker (alongside a sub loudspeaker) presented an opportunity to
explore an alternate spatial distribution. In order to achieve this I opted, at short
notice (just prior to sound check), to create an 11-stem version of the piece.
Stems 1 to 8 remained identical to the fixed media version submitted in the
portfolio. Stems 9 and 10 were a stereo mix-down of the 8-channel version
(previously created), assigned to the Stage Wide stereo pair. Separation of left and
right side composed space in the 8-channel image was retained in the stereo mix;
the Stage Wide left loudspeaker handled channels 1, 3, 5 and 7, and the Stage Wide
right loudspeaker handled channels 2, 4, 6 and 8. Stem 11 was a mono mix of the stereo
mix-down (therefore combining all eight composed channels into a single channel
stem). This stem was assigned to the Stage Centre loudspeaker.
Referring to the structural breakdown outlined in Figure 20, Passages 1 to 6
maintained a circumspatial position, distributed through the Main 8 loudspeaker
ring, with appropriate fader variation intended to enhance dynamic range and
composed space. Transition into Passage 7 was achieved through the introduction
of the Stage Centre loudspeaker mono mix, and the graduated departure of the 8-channel image to reduce this sparse passage down to a single position in
panoramic space. As activity increases (leading into Passage 8), a transition is
gradually made from the Stage Centre loudspeaker output to the Stage Wide
stereo pair, broadening the panoramic spatial image. Finally the gradual
reintroduction of the Main 8 ring coupled with the gradual fade-out of the Stage
Wide pair allowed Passages 8 and 9 to reestablish circumspace, relying on
composed space to convey a final receding motion into distal space, concluding
the performance. See USB drive Appendix_C/Audio/Reductions_Expanses_11_
117
Leeds College of Music, UK, 11 March 2016.
Channel_Diffusion_Excerpt/Reductions_Expanses_11_Channel_Excerpt.aiff (and
the accompanying readme.pdf file for channel/loudspeaker assignments), for an
11-channel studio rendered example/excerpt of the diffusion strategy outlined from
Passage 6 to Passage 9.
Conclusion
Reductions/Expanses combines predominantly texture-led passages with
perspectival space-forms (proximate to distal/spatial transcendence), pericentral
motion with amassing/dispersing growth processes, and temporal prolongation/
suspension contrasted with passages of increased activity propelling time
forwards. The nature of the piece allows for radical spatial reassignments to be
explored in live performance environments. Through the embracing of
superimposed space as determined by given concert halls and performance
systems, the methods outlined for the transference of fixed media into
performance spaces highlight the potential for unique reinterpretations of fixed
media works to be achieved; here composed space may be further augmented in
the performance space, via diffused space.
CHAPTER 6. ITERATION/BANGER: COMBINING ALGORITHMIC, GENERATIVE AND ALEATORY PROCEDURES FOR MULTIPLE PERFORMABLE OUTCOMES
‘First, the computer offers powerful possibilities for constructing a
new sound world (far exceeding those by traditional instruments or
analogue electronic means) and for controlling with the greatest
care and precision the minutiae, the atomic structure, of sounds
themselves. Second, the computer suggests new ways to think
about musical structure because of the unprecedented facility for
unifying macro- and micro levels of a composition. The machine
gives the composer the capability of applying analytical and
theoretical concepts expressed as compositional algorithms or
programs, prompted by the necessity of organizing the new sound
world that has become available. Thirdly, by establishing an
interaction between the composer and technology, the computer
stimulates thought about the compositional process itself and
suggests a new relationship between creator and material with the
computer functioning as a more or less active intermediary.’118
6.1 Overview
Concluding the portfolio, Iteration/Banger (7:51) is an 8-channel fixed media,
gesture-led work exploring an alternate soundworld to the preceding compositions,
and hinting at likely future directions to be explored in my work. The work is
electroacoustic in that it is a composed multi-channel piece realised through
technological means with an emphasis on spatial exploration. Stylistically,
however, it lends itself to areas of contemporary electronic music affiliated with
rave and techno music cultures, specifically the genre termed post-rave. As such,
the work makes use of synthesised sound materials, some remote in
spectromorphological nature, others adopting the roles of second-order surrogates
associated with musical gesture (synthesised kick drums and electronic
percussive sounds).119
118
Tod Machover, ‘Thoughts on computer music composition’, Composers and the
Computer, 1985, William Kauffman, Inc., 89 - 111, 90.
119
‘Much music which uses simulation of instrumental sounds can also be regarded as
second order since, although the instrument may not be real, it is perceived as the
equivalent of the real. Commercial synthesizer usage is of this type when we recognise
both the gesture involved and the instrumental source simulated.’ Smalley,
‘Spectromorphology’, 112.
Simon Reynolds writes:
‘By 1996 a new zone of music making had emerged out of the
ruins of “electronic listening music”; a sort of post-rave omni-genre
wherein techno’s purity was “contaminated” by an influx of ideas
from jungle, trip-hop and other scenes. Not particularly danceable,
yet too restlessly rhythmic and texturally startling to be ambient
chill-out, this music might be dubbed art-techno, since the only
appropriate listener response is a sort of fascinated contemplation.
Imagine a museum dedicated not to the past, but to the future,
where you can marvel at the bizarre audio sculptures.’120
Inspiration for the piece was drawn primarily from contemporary electronic artists
whose use of rhythm stems from chance procedures and generative computer
processes, resulting in unpredictable rhythmic and spectromorphological
developments; Autechre’s later works, including the Confield (2001) and Exai
(2013) albums, explore the terrain of generative computer music through
development of Max coding patches. One album of particular influence during this
period was Mark Fell and Gábor Lázár’s collaborative release The Neurobiology
Of Moral Decision Making (2015) – a set of rhythmically complex pieces produced
with a minimal palette of sounds (a kick drum, a synthesised hand clap and what
appears to be FM synthesis with minimal additional processing).
Iteration/Banger is performable as either diffused from fixed media, or alternately
as a live electronics performance. The 8-channel fixed version is the definitive
version; however, Appendix C includes (along with a stereo reduction of the 8-channel fixed media version) recordings of two alternate real-time performed
versions (see USB drive Appendix_C/Audio/Iteration_Banger_Live_Electronics_
Versions). One is a stereo version for a single laptop and MIDI controller; the other is an 8-channel version for two networked laptops and MIDI controller. 121
120
Simon Reynolds, Generation Ecstasy: into the world of techno and rave culture, 1998,
Little, Brown and Company, 359.
121
To clarify, this work is not intended as a piece for other composers to perform or
reinterpret through performance.
6.2 Development in Max and Live/signal path overview
Iteration/Banger resulted from experiments combining Max programming and
Ableton Live. The creation of a computer instrument capable of producing rapid,
synchronised and unpredictable rhythmic output with timbral variation became the
starting point for developing a fixed media multi-channel composition focused on
rhythm. As the Max patch became increasingly complex in both coding and
potential musical output I opted to separate its elements into two patches, each
handling different processes. Using classifications of compositional algorithms as
identified by Rowe, the performance system for Iteration/Banger incorporates
transformative, 122 generative 123 and sequenced124 components, and adopts an
instrument paradigm. 125 The signal path for the generation of 8-channel audio can
be split into three discrete stages. Stages 1 and 2 are handled on a MacBook Pro
and Stage 3 occurs on a second MacBook Pro – both MacBooks are connected
via Ethernet:
• Stage 1: Real-time sound shaping (generative) and sequential/chance triggering
of pre-rendered audio (sequenced), occurring on MacBook 1 via Max Patch 1.
• Stage 2: Real-time processing (transformative) of audio output from Patch 1,
occurring on MacBook 1 via Ableton Live.
• Stage 3: Real-time spatialisation (transformative) of the stereo output from Live
to 8-channels, occurring on MacBook 2 via Max Patch 2.
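As a loose sketch of how the sequenced and chance elements of Stage 1 might interact, the following Python fragment triggers pre-rendered samples per step of a grid; the sample names, step grid and density parameter are hypothetical illustrations, not taken from Patch 1:

```python
import random

# Hypothetical pool of pre-rendered samples available to the sequencer.
SAMPLES = ["kick_a", "kick_b", "noise_burst"]

def step_triggers(rng, density):
    """Decide, by chance, which samples fire on one step of the grid."""
    return [name for name in SAMPLES if rng.random() < density]

# One bar of a 16-step grid; a fixed seed keeps the sketch reproducible,
# whereas in performance an unseeded generator yields unpredictable output.
rng = random.Random(1)
bar = [step_triggers(rng, density=0.4) for _ in range(16)]
```

Raising or lowering `density` in real time would shift the output between sparse punctuation and saturated rhythmic activity.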
122
‘Transformative methods take some existing musical material and apply
transformations to it to produce variants. According to the technique, these variants may
or may not be recognizably related to the original. For transformative algorithms, the
source material is complete musical input. This material need not be stored, however—
often such transformations are applied to live input as it arrives.’ Robert Rowe, Interactive
Music Systems: Machine Listening and Composing, 1993, MIT Press, 7.
123
‘For generative algorithms, on the other hand, what source material there is will be
elementary or fragmentary—for example, stored scales of duration sets. Generative
methods use sets of rules to produce complete musical output from the stored
fundamental material, taking pitch structures from basic scalar patterns according to
random distributions, for instance, or applying serial procedures to sets of allowed
duration values.’ Ibid.
124
‘Sequenced techniques use prerecorded music fragments in response to some real-time input. Some aspects of these fragments may be varied in performance, such as the
tempo of playback, dynamic shape, slight rhythmic variations, etc.’ Ibid.
125
‘Instrument paradigm systems are concerned with constructing an extended musical
instrument: performance gestures from a human player are analyzed by the computer and
guide an elaborated output exceeding normal instrumental response. Imagining such a
system being played by a single performer, the musical result would be thought of as a
solo.’ Ibid., 8.
Figure 22: Iteration/Banger software and hardware setup/signal path.
Figure 22 is an overview of the signal path and hardware setup used in the
creation of the 8-channel fixed media version. Max Patch 1 manipulates a primary
sine wave in real-time, several synthesised (pre-rendered) kick drums, a pre-rendered (snare-like) gestural noise burst and some simple FM synthesis (see
Appendix_C/Software/Iteration_Banger_Patch_1.maxpat). The audio output of
Patch 1 is sent out grouped as four discrete stereo pairs (eight channels), via the
ReWire software protocol into Ableton Live 9. In Live, the four stereo channels are
processed further with a variety of audio plug-ins, using multiple stereo channels.
For example, three stereo channels in Live may be receiving the stereo signal
from outputs 1 and 2 from Max Patch 1, each applying different processing to that
stereo signal, then summed to the master output channel of Live, resulting in
further timbral shaping in stereo. Some plug-in parameters are modulated in real-time using Max for Live LFOs.126 These channels are then summed to a stereo
master output and sent to an audio interface (Figure 22, MOTU 1). The analogue
stereo output (L/R) of MOTU 1 is sent to the analogue in of MOTU interface 2. This
stereo signal is received via FireWire out from MOTU 2, by a second MacBook Pro
(running Max Patch 2). Max Patch 2 (see Appendix_C/Software/Iteration_
Banger_Patch_2.maxpat) redistributes the stereo signal to an 8-channel audio
output, and sends it back via FireWire to MOTU 2. The eight analogue audio
outputs from MOTU 2 are finally sent to a mixer and from there direct to eight
loudspeakers. The Ethernet connection between MacBooks 1 and 2 allows
udpsend and udpreceive Max objects to pass data (in this case, bang or trigger
messages) from Patch 1 to Patch 2, allowing performed changes in the output
state of Patch 1 to affect spatial distribution settings in Patch 2, enhancing control
when improvising in real-time with the performance system.
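The inter-machine triggering can be pictured with plain UDP sockets. The following Python sketch runs over the loopback interface; Max's udpsend and udpreceive objects use their own message format, and the port number here is an arbitrary assumption:

```python
import socket

RECV_ADDR = ("127.0.0.1", 7400)  # hypothetical port; loopback stands in
                                 # for the Ethernet link between MacBooks

# Patch 2 side: a socket waiting for trigger messages that will update
# spatial distribution settings.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(RECV_ADDR)
receiver.settimeout(2.0)

# Patch 1 side: send a "bang" whenever the output state changes.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"bang", RECV_ADDR)

data, _ = receiver.recvfrom(64)
message = data.decode()

sender.close()
receiver.close()
```

Because UDP is connectionless and fire-and-forget, a dropped trigger simply leaves the spatialisation patch in its previous state, which suits transient bang messages better than a stream protocol would.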
See Appendix B for further detail on both Max patches and the Ableton Live setup.
Appendix C (USB drive) features:
• both Max patches used in the creation of the piece (all sub-patchers feature
written explanations of signal shaping, audio triggering, spatialisation, etc., as
related to that part of the patch),
• audio samples for Patch 1 (place these in the Max file search path),
• an example Ableton Live session (using only plug-ins native to Live version 9), to
emulate Stage 2 of the signal path,127
• two audio/video walk-through guides demonstrating the functionality of the Max
patches,
• stereo and 8-channel recordings of live-electronics performances of the piece.
126 For more information see <https://www.ableton.com/en/live/max-for-live/>, accessed 21 July 2016.

127 It was not possible to include the original (Stage 2) Ableton Live session, as it uses third-party plug-ins including software by Waves and iZotope.
6.3 Genre hybridity and stylistic similitude
Musical hybridity is of less concern to my compositional interests than the
creation of musically coherent, fixed media multi-channel outcomes.
However, in the interest of analysis it is of value to consider some of the inherent
stylistic traits in Iteration/Banger that derive from rave and techno music genres.
The most obvious of these is rhythm itself; while rave and techno styles generally
favour stability over rhythmic complexity (for example, 4/4 time at a fixed tempo),
Iteration/Banger treats rhythm as a primary compositional element and explores
complexity within – for the most part – a stable metre.
Sound materials incorporated are directly related to rave and techno; synthesised
kick drums and simple repetitive tonality are closely related to rave music's
instrumentation and its use of synthesised one-chord stabs. In Iteration/Banger
tonality is created by applying a tuned comb filter effect to an input signal of a
rapid high-to-low sine wave sweep; the sweep shapes the timbre (it is fast enough
to produce a percussive sound rather than an audible descending glissando),
while the tuned comb filter resonates intervallic pitches based on the Dorian
mode, from a set fundamental base frequency. Constraint of sound material is also common to (minimal) techno
and rave; Iteration/Banger employs a minimum of core materials. Certain
processing techniques applied in the composition are also derivative of rave and
techno, in particular the use of side-chain compression to force the prominence of
kick drums and/or to result in compacted, spectrally dense textures (example at
3:55 - 4:10). Furthermore, the application of long-decay reverb and the low-pass
filtering of continuant materials are commonplace in techno music production.
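The tuned comb filter tonality described above can be sketched numerically. This is an illustrative reconstruction, not the actual Max implementation: the sample rate, feedback amount and function names are all assumptions, and only the fundamental tuning relation (resonance at sample rate / delay) is modelled.

```python
SR = 44100  # assumed sample rate

# Semitone steps of the Dorian mode within one octave.
DORIAN_SEMITONES = [0, 2, 3, 5, 7, 9, 10]

def dorian_frequencies(fundamental_hz):
    """Equal-tempered pitches of the Dorian mode above a fundamental."""
    return [fundamental_hz * 2 ** (st / 12) for st in DORIAN_SEMITONES]

def comb_delay_samples(freq_hz, sr=SR):
    """Delay length (samples) tuning a feedback comb filter so its lowest
    resonance sits at freq_hz (further resonances at its harmonics)."""
    return round(sr / freq_hz)

def feedback_comb(signal, delay, feedback=0.95):
    """y[n] = x[n] + feedback * y[n - delay]: rings at roughly sr/delay Hz."""
    out = list(signal)
    for n in range(delay, len(out)):
        out[n] += feedback * out[n - delay]
    return out
```

Feeding a rapid sine sweep (or, for the drones described later, white noise) through combs tuned this way would impose the modal pitch set on whatever excitation passes through.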
Iteration/Banger’s musical form is comparable to techno and rave form; repetition
occurs on both micro (rapid gestural iteration) and macro levels (returning/
developed variations of established musical sections, related to popular music
form). High energy gestural passages in the work are contrasted with brief
passages of less active textural material, followed by returning gestural and
rhythmic passages; this also mirrors aspects of techno music form, where
contrasting textural (ambient) passages often lead to the reestablishment of the
work’s primary (rhythmic) passages (for example, reintroducing a regulated kick
drum and repeating motif or phrase).
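The side-chain ducking mentioned above (kick-triggered gain reduction forcing the prominence of the kick) can be modelled minimally as follows. This is a toy, sample-by-sample sketch under assumed parameter names, not the behaviour of the commercial compressor plug-ins actually used, which add attack/release envelope smoothing that is omitted here.

```python
def sidechain_duck(target, trigger, threshold=0.2, depth=0.8):
    """Crude model of side-chain ducking: wherever the trigger signal
    (e.g. a kick drum) exceeds the threshold, the target's gain is
    reduced by `depth`; elsewhere the target passes unchanged."""
    out = []
    for t, k in zip(target, trigger):
        gain = (1.0 - depth) if abs(k) > threshold else 1.0
        out.append(t * gain)
    return out
```

Applied to a dense continuant texture with a (possibly muted) kick as the trigger, this produces the rhythmically pumping, compacted textures the section describes.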
6.4 Development/extraction of materials and organisation
Experiments with Patch 1 and the Ableton Live session via live electronics
performances (undertaken while the code was still in development) led to a refinement of the
system's potential responses – for example, varying and setting the input value ranges
being fed to parts of the patch controlling the timbral shaping of the sine wave
output. Modifications were added to control sound behaviours, tempo and
transformative settings. Through continued refinement of both patches and the
Live session’s performance functionality (allowing hands-on control over defined
pre-set states of aleatoric timbral and rhythmic output), and through the later
rendering of several studio improvisations in eight channels, discrete and contrasting
musical passages were generated. Once identified, the most musically potent
recorded passages were extracted, edited and sequenced to define the overall
structural framework of the fixed media version.
Additional materials were then developed – some created using the Max patches,
others created with alternate tools and plug-ins – to add further contrast to
materials featured and to enhance spectromorphological detailing. The 8-channel
audio output of the signal path (Figure 22) primarily produces sound activity
located in proximate circumspace (from generated materials in Max Patch 1 and
Live sent to Patch 2, then distributed to eight channels). As such, additional space-forms were incorporated through the development of remote materials that may be
considered equal parts gesture and texture (continuants with internal
spectromorphological development treated with a Doppler effect), producing vectorial
movements in stereo. Stereo renders were then reassigned to eight channels via a
matrix~ object in Patch 2 (random redistribution of the stereo signal to any two of
eight channels), resulting in vectorial spatial motions, passing through egocentric
space within circumspace (examples between 0:33 - 0:53).
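A frame-level sketch of this random stereo-to-eight-channel reassignment follows; it is a loose analogue of the matrix~ routing in Patch 2, with illustrative function and parameter names rather than anything taken from the patch itself.

```python
import random

def redistribute_stereo(stereo_frame, n_channels=8, channels=None, rng=random):
    """Place one stereo (L, R) frame onto two randomly chosen loudspeaker
    channels of an n-channel output frame, leaving the rest silent —
    loosely modelling the matrix~ reassignment performed in Patch 2."""
    if channels is None:
        channels = rng.sample(range(n_channels), 2)
    frame = [0.0] * n_channels
    left, right = stereo_frame
    frame[channels[0]] = left
    frame[channels[1]] = right
    return frame, channels
```

Re-running the random selection at structural points (rather than per sample) yields the vectorial motions described, as the stereo image jumps between loudspeaker pairs around the listener.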
As the core audio output of Patch 1 consists of short, rapid variations of gestural material, I
sought to create complementary continuant textures; high-pitched drones were
developed by passing continuous white noise through one of the tuned comb filter
settings (intervallic pitch), then applying a high-pass filter to reject lower frequency
content. Later the continuants were processed through a low-pass filter to
gradually reduce them to silence (audible example at 5:00 - 5:52).
Varied spectromorphologies are linked through shared transformation processes;
tonality provided by intervallic pitch comb filter processing, for example, forms
bonds between the pitched gestural content and pitched continuant drone
materials (both sound types feature the same inherent intervallic pitches). The
synchronicity of gestural events (sine wave output, kicks, noise burst and FM
synthesised gestures) forges behavioural links between sound types.
6.5 Structure
Figure 23 provides a structural overview of the piece. Section 1 was predominantly
generated without using the Max patches and seeks to create emerging active
behaviour and spectral density through continuant materials positioned in the mix
to suggest a layering of sounds located in different perspectival spatial locations.
This texture-setting is disrupted by attack-decay gestures (1:07), introducing the
dominant gestural content to come, causing a shift towards a spectrally-cleared
circumspatial image. Section 2 establishes the primary soundworld of the piece:
rapid iterative variations, density, synchronised gestural events and random spatial
assignments. Events occur in proximate circumspace suggesting spatial
containment, surrounding the listener. Transitions between musical sections
fragment the iterative, metred nature of the material; temporal pacing switches
between forward-propelled (gestural) sections and less active (textural) transitional
passages, creating a pause in gestural activity. Section 3 reestablishes the
soundworld introduced in Section 2 with more complexity through inclusion of
additional gestural materials, leading into the next transition passage of spectrally
dense continuants (from 3:26, regularly interrupted by side-chain compression
triggered by a muted kick drum), gradually receding into distal circumspace. A
dominant gesture at 4:11 introduces a variation on Sections 2 and 3, featuring
transposed tonality and fragmentation of metre, where synchronised events
accelerate and decelerate, suggesting expanding and contracting notions of time.
Figure 23: Iteration/Banger structural overview.

Section 5 provides further contrast, as sparse activity and temporal stasis are
established, only to be interrupted by unpredictable, prominent attack-decay
gestures with long reverb tails, located in proximate circumspace, exploiting
minimum to maximum dynamic ranges between materials. Finally, Section 6
combines variations on materials from all five preceding passages; iterations here
create more syncopated rhythmic outcomes (synchronised gestures follow metre,
but may be triggered independent of one another). In its final moments the work
grows increasingly spectrally dense leading (through a textural, rising glissando
motion), to a final attack-decay termination gesture, concluding the piece.
6.6 Multiple performable outcomes/additional performance functionality
As outlined, this research resulted in multiple performable versions of Iteration/
Banger.128 The fixed media 8-channel version features more materials, greater
detailing in featured spectromorphologies and more varied spatial development
than the live electronics versions. The live versions alter the work's
structure; both begin and conclude with variations on Section 5 of the
fixed media work, as a way to start and end performances with a degree of
subtlety, substituting Section 1 sound materials.129 The stereo live version in
particular is intended for performance possibilities in less controlled spaces, such
as club environments.130
Functionality in Patch 2 has been designed with the capability to incorporate additional
fixed media materials into a real-time live electronics performance; up to two 8-channel fixed media files may be randomly redistributed spatially (randomly
reassigning/crossfading channel stems, with the possibility to adjust crossfade
times). Patch 2 also contains an additional sfplay~ object allowing for playback of
a fixed stereo file that may be randomly reassigned to outputs in the 8-channel
image, featuring time-stretching and pitch-shifting functionality. Depending on the
stability of the performance setup, the aforementioned tools (all of which played
128 The 8-channel fixed media version is intended for performance via small or large-scale loudspeaker sound diffusion systems. Performance of the stereo live electronics version requires a laptop running Max Patch 1 and Ableton Live (Stages 1 and 2 of the signal path only, see 6.2 Development in Max and Live, and signal path overview). Performance of the 8-channel live electronics version requires the complete setup as outlined in Figure 22 (Stages 1 to 3 of the signal path), as originally used in the production of the fixed media 8-channel version.
129 Sound materials in Section 1 of the fixed media version were not generated in real time from Max and would therefore require playback of a pre-rendered file in order to be included in live electronics performances.
130 The stereo live electronics version of Iteration/Banger was debuted at the Off the Beaten Track event curated by Matthew Bourne, as part of the proceedings of iFIMPaC, Belgrave Music Hall, Leeds, 11 March 2016.
roles in the development of materials for the fixed 8-channel version) hold potential
for further embellishment during live performances.
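One plausible way to realise the crossfaded stem reassignment described above is an equal-power crossfade between the outgoing and incoming stems. The sketch below is illustrative only: the function name, the cosine/sine curve choice and the parameters are assumptions, not details taken from Patch 2.

```python
import math

def equal_power_crossfade(stem_out, stem_in, fade_len):
    """Crossfade between two equal-length channel stems over the first
    fade_len samples using equal-power (cosine/sine) gain curves, so the
    perceived level stays roughly constant through the transition."""
    mixed = []
    for n, (a, b) in enumerate(zip(stem_out, stem_in)):
        t = min(n / fade_len, 1.0)          # fade position, 0..1
        g_out = math.cos(t * math.pi / 2)   # outgoing stem gain
        g_in = math.sin(t * math.pi / 2)    # incoming stem gain
        mixed.append(a * g_out + b * g_in)
    return mixed
```

Randomly choosing which pair of the eight output channels each stem crossfades to, and varying fade_len, would reproduce the adjustable random reassignment behaviour the text outlines.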
While the composition clearly holds potential for extending aspects of liveness in
fixed acousmatic performance (specifically referring to the live 8-channel version),
my intention with Iteration/Banger was to explore the production of discrete
performable versions, as opposed to researching ways to merge and extend
diffusion performance practices in acousmatic listening situations.131 Berezan
(2007) and Moore (2007, 2008), among other composers, have contributed
research into strategies for fracturing the fixed nature of acousmatic
performance. In exploring multiple alternate formats, the intention was to produce
a work that can accommodate a broad range of technical setups, allowing an
increased possibility of programmed performances of the work not restricted
exclusively to acousmatic listening situations. The diffusion of fixed works remains
a primary research interest.
Conclusion
Iteration/Banger explores an alternate (related) compositional methodology to
those employed in previous works, through the creation of an instrument paradigm
and the development of a soundworld inspired by alternate genres of popular
electronic music. Here the process of generating sound materials differs from
previous works, but a bottom-up approach to sound organisation (on a structural
level) is retained. In addition, the work provides further evidence of the potential
forging of relationships between live electronics performance and fixed media
compositional practices, resulting in multiple performable variations of a composed
work.
131 The live electronics versions were a secondary outcome of compositional research. Production of a fixed media work through the development of generative and aleatoric processes in Max coding is the primary research outcome.
CHAPTER 7. CONCLUSIONS AND CONTRIBUTION TO RESEARCH
‘The composer can never forget, however, that the most important
process is always one of intuition and judgement (often based on
“insufficient evidence”). No matter how extensively the composer
engages in rigorous research, confidence should never be lost in
the power of simple musical thinking.’132
7.1 The music
The work presented here covers a broad exploration of possible approaches to
spatial acousmatic composition, unified by qualities inherent in my personal
approach to composing with sound and given coherence through the
methodologies adopted and developed. An exploration of spectromorphology and
space-form analytical concepts (initially undertaken during the period of study for
my MusM degree) has, in turn, considerably influenced my approach to
composition.133 When applied to analysis of the works, these concepts reveal a variety of
defining characteristics, providing clear insight into my compositional processes.
My composed music is predominantly gesture-led, and focused on transformed
and synthesised sound, deployed in composed space for dramatic musical
development. Where sound sources – either real or generated – are masked, as
they are across most of my compositional output, gestural shaping proposes an energy
and physicality intended to enhance the listener's chances of forming
source-bonded links between sounding materials, suggesting possible extrinsic
associations via transmodal perceptual experience.134 I am concerned with sonic
detailing in both the development of spectromorphologies and space-forms. My
methods are informed by techniques of improvisation and aleatoric development,
132 Machover, 'Thoughts on computer music', 91.

133 Composers have previously identified the potential of applying analytical concepts of spectromorphology and space-form to the compositional process. See Blackburn, 2009.

134 'Transmodal linking occurs automatically when the sonic materials seem to evoke what we imagine to be the experience of the world outside the music, and in acousmatic listening (not just acousmatic music) transmodal responses occur even though these senses are not directly activated in order only to listen. In listening to acousmatic music, rather than suffering some kind of sensory deprivation, I am led spontaneously to contemplate the, possibly unique or unfamiliar, virtual transmodal richness afforded by the aesthetic configurations of the music.' Smalley, 'Space-form', 39.
and prioritise intuition and aural response over concept-driven or predetermined
methodologies for composition. My influences are derived from electroacoustic
repertoire, published research, contemporary electronic music forms and popular
music forms. Methods adopted and developed have been shaped in part by the
composers who have mentored my progression and through engagement with the
broader research community, concert and conference attendance and meaningful
discourse with other composers and electronic music practitioners.
7.2 Responses to research questions
The following responses reflect my current thoughts and findings on the research
topics outlined in this commentary, arrived at through practice-based research, and
propose practical uses for the areas highlighted.
• How does the electroacoustic composer create musical coherence when
employing predominantly abstract sound materials in non-linear musical
structures?
As is the case with the portfolio works presented, attention to aurally
perceived relationships between spectromorphologies, and the use of structural
functions applied to recurring sound types, allows compositions to be structured
in unpredictable ways while maintaining a coherence that may be perceivable by
the listener. Abstract sound materials are not devoid of source bonding, and
through use of related variations of sound types the listener may – consciously or
otherwise – identify spectromorphological bonds and find a sense of musical
grounding through familiarity and predicted directionality, as a work unfolds over
time.
• What approaches might be adopted in order to extend and embellish composed
multi-channel fixed media works through concert presentation, in relation to
contemporary sound diffusion methods?
The diffusion strategies outlined in Chapter 5 highlight two approaches that have
proved suitable for further exploration in my work: the first is a willingness to
detach from the notion that all multi-channel works must be presented maintaining
a central sweet-spot position within the audience. This potentially allows
for radical spatial reinterpretations of fixed surround works (an example of this
being to transition from an 8-channel circumspatial image to an exclusively
panoramic spatial image featuring all eight composed channels). Secondly (where
possible given the wide variety of acousmatic performance technical setups), by
incorporating reduced stem mixes of multi-channel works into performance
alongside the full multi-channel versions of compositions, further possibilities are
proposed for greater performer interaction to occur in the diffusion of multi-channel
works.135 Not all works will be suited to these approaches, but as I continue to
explore their potential within my own practice, I would argue that the highlighted
strategies propose greater possibilities for spatial development in performance
through bold fragmentations of composed space, in pursuit of a more gratifying
performed concert experience.
• How might tonality be successfully employed alongside abstract sound materials
in acousmatic works?
I employ tonality to create an additional layer of musical contrast in my pieces and
to provide a sense of grounding for the listener that may enhance a work’s
accessibility. This is certainly the case with longer works such as Transmissions/
Intercepts where an appreciation of the piece requires extended concentration on
the part of the audience and where the majority of materials featured are remote in
nature. Tonality and second-order surrogate sounds are applied intuitively in my
works and function as musical components that may aid the listener in
appreciation of potentially challenging electroacoustic works.
• What potential might the creation of multiple performable variations of composed
works hold for the composer/performer?
As highlighted in Chapter 6, this holds potential for broader performance
possibilities and a widening of exposure for composed and performed
works. It also invites further investigation (see 7.3 Research contribution), as
135 Stansbie highlights issues around multi-channel diffusion: '[…] the presentation of multichannel works often involves corrective agential acts that present the music as heard during the compositional process. With this in mind, one is often dealing with multichannel playbacks rather than performances.' Adam Stansbie, 'The Acousmatic Musical Performance: An Ontological Investigation', unpublished doctoral thesis, City University London, 2013.
certain types of fixed media works (an example being Iteration/Banger) may prove
suitable for reinterpretation through live electronics performance formats, further
developing links between acousmatic compositional processes and live electronics
practices.
• How might aleatoric processes be successfully incorporated into composition
and sound generation techniques?
• How can relationships between studio-based composition and live electronics
performance practices be merged to strengthen composed musical outcomes?
As highlighted in Chapters 3 and 4, the aleatoric development of materials via live
electronics performance holds potential to produce alternative, contributory
elements to compositional outcomes – elements differing from those arrived at in
the studio environment. Further possibilities are elaborated in the following
section.
7.3 Research contribution: harnessing aleatoric elements in electroacoustic
composition
Analysis of my work leads me to identify a personal compositional methodological
model, cultivated during my time at the Novars Research Centre and outlined here
as my contribution to electroacoustic research:
Figure 24 outlines a strategy for forging a reciprocal, mutually beneficial
interactive system between studio composition and live electronics
performance practices, incorporating aleatoric elements of sound montaging,
shaping, generating and transforming (I include spatial transformation here),
resulting in a feedback loop where one practice directly influences and shapes
potential outcomes of the other. As previously illustrated (see 6.6 Multiple
performable outcomes), outcomes may include multiple performable versions of
composed works, but the model also suggests possibilities for compositions and
live electronics performances featuring identical or related sound materials, where
musical developments may differ considerably.
Figure 24: A mutually beneficial methodology for the merging of acousmatic composition and live
electronics practices.
From the initial recording and/or synthesis of materials, a process of rendering and
transforming is undertaken; in my own work materials are often transformed into
third-order or remote surrogates as the immediate stage after recording or
synthesis (Figure 24, Stage 1). Studio experimentation leads on to Stage 2
rendering, resulting in extended files of spectromorphologically varied sound
materials. Potent musical materials are identified, as are potential combinations of
spectromorphologies and initial structuring ideas. Materials are then explored
through (recorded) semi-improvised live performance, following a prototype
structural plan. Post-performance auditioning leads to further identification of
structural coherences, specifically via aleatoric moments achieved in live
performance. This leads to rendering Stage 3; post-performance extraction,
refinement and/or recreation of materials (stereo or mono). From here, materials
may be re-spatialised into a multi-channel format. Alternatively, materials may be
reinserted into the performance situation, further exploring aleatoric development
with newly refined spectromorphologies; here a feedback loop is established
between the live performance stage and rendering Stage 3. Stage 4 (following the
possible re-spatialisation stage) is the composing stage, being the organisation of
combined extracted, refined, multi-channel, stereo and mono renders to produce a
fixed media outcome. As seen in Figure 24, this outcome (or elements of the final
fixed media work) may later be extracted or dismantled and inserted back into the
performance stage, either compacted down to stereo or, depending on the
environment, as multi-channel renders for application in semi-improvised multichannel performance.136
Fixed media and live performance outcomes may be structurally related, or
structurally disassociated, while being potentially linked by spectromorphological
association between materials developed and deployed. The final diffusion
performance stage may appear isolated in the model, as an end point to the
process – this may or may not be the case. I have often reworked aspects of
compositions after first exploring them in the concert hall performance
environment; diffusion may therefore inform further development of composed
(and/or live electronics performed) space-forms, among other compositional
concerns.
In conclusion, the proposed model highlights the potential for live semi-improvised
performance practices to directly contribute to composed outcomes; processes of
montaging, transformation and wilful subversion of prototype structural frameworks
through spontaneity and persistence in performance, allow the harnessing of
aleatoric musical events for featured inclusion in composed works and future
performances.
7.4 The future
My five-year immersion in acousmatic electroacoustic music has provided me with the
time to develop and refine technical processes and aesthetic thinking, and will
continue to inform my ongoing musical development. As I conclude my work at the
Novars Research Centre, I consider the end of my time here to mark the beginning
136 To date I have performed two semi-improvised live electronics sets in multi-channel (quadraphonic) environments, utilising mono, stereo and 4-channel rendered files for spatial effect. The performances were METANAST at Chorlton Arts Festival, Manchester, UK, 17 May 2015, and (as contributor to a collaborative semi-improvised work curated by artist Rachel Goodyear and composer Sam Weaver) A Line Fractured Into A Thousand Aberrations, Samarbeta residency, Islington Mill, Salford, UK, 27 August 2015.
of a new stage of development as both composer and performer. Most
immediately, I imagine my work will pursue explorations in combining sound
synthesis, coding, chance procedures, rhythmic focus and space-form in
acousmatic composition and performance. I will continue to develop live
electronics performance practices, seeking ways to further strengthen bonds
between the disciplines. This journey seems to have brought me full circle,
concluding in some ways back where I started (as a performing electronic
musician), reinvigorated to pursue new approaches, explore new possibilities and
to cultivate outcomes worthy of presentation to the broader research community, in
contribution to the ongoing development of electroacoustic music as a performed
art.
Bibliography

Berezan, David. 2007. 'Flux: Live-Acousmatic Performance and Composition', proceedings of the EMS: Electroacoustic Music Studies Network conference, De Montfort University, Leicester, UK, 12 - 15 June. [online article] Available at <http://www.novars.manchester.ac.uk/indexdocs/Flux-BerezanEMS2007.pdf>, accessed 1 August 2016.

Black, Jack. 1997. 'The Search for Inspirado', Tenacious D: The Greatest Band on Earth, series 1, episode 1, HBO, 28 November.

Blackburn, Manuella. 2009. 'Composing from spectromorphological vocabulary: proposed application, pedagogy and metadata', proceedings of the Electroacoustic Music Studies Network EMS 09 conference, Buenos Aires, Argentina, 22 - 25 June. [online article] Available at <http://www.ems-network.org/ems09/papers/blackburn.pdf>, accessed 2 August 2016.

Blackburn, Manuella. 2011. 'The Visual Sound-Shapes of Spectromorphology: an illustrative guide to composition', Organised Sound, 16(1), Cambridge University Press, 5 - 13.

Ears ElectroAcoustic Resource Site. 2002. [online] Available at <http://ears.pierrecouprie.fr/spip.php?article198>, accessed 24 July 2016.

Emmerson, Simon. 1986. 'The Relation of Language to Materials' in The Language of Electroacoustic Music (ed. Simon Emmerson), The Macmillan Press Ltd., 17 - 39.

Harrison, Jonty. 1998. 'Sound, space, sculpture: some thoughts on the 'what', 'how' and 'why' of sound diffusion', Organised Sound, 3(2), Cambridge University Press, 117 - 127.

Harrison, Jonty. 2000. 'Diffusion: Theories and Practices, with Particular Reference to the BEAST System', eContact 2.4, Canadian Electroacoustic Community / Communauté electroacoustique canadienne. [online article] Available at <http://econtact.ca/2_4/Beast.htm>, accessed 28 August 2016.

Kivy, Peter. 1980. The Corded Shell: Reflections on Musical Expression, Princeton University Press.

Landy, Leigh. 1991. 'Sound Transformations in Electroacoustic Music'. [online article] Available at <http://www.composersdesktop.com/landyeam.html>, accessed 3 April 2016.

Landy, Leigh. 2007. Understanding the Art of Sound Organisation, MIT Press.

Machover, Tod. 1985. 'Thoughts on computer music composition' in Composers and the Computer (ed. Curtis Roads), William Kauffman, Inc., 89 - 111.

Moore, Adrian. 2001. 'Sound diffusion and performance: new methods – new music', proceedings of the Music Without Walls? Music Without Instruments? conference, De Montfort University, Leicester, UK, 21 - 23 June. [online article] Available at <http://www.dmu.ac.uk/documents/technology-documents/research/mtirc/nowalls/mww-moorea.pdf>, accessed 28 July 2016.

Moore, Adrian. 2007. 'Making choices in electroacoustic music: bringing a sense of play back into fixed media works'. [online article] Available at <https://www.shef.ac.uk/polopoly_fs/1.26355!/file/3piecestex.pdf>, accessed 1 August 2016.

Moore, Adrian. 2008. 'Fracturing the Acousmatic: Merging improvisation with disassembled acousmatic music', proceedings of the International Computer Music Conference, Belfast, Northern Ireland, UK, 24 - 29 August. [online article] Available at <https://www.shef.ac.uk/polopoly_fs/1.26358!/file/ajmICMCfinalfracture.pdf>, accessed 1 August 2016.

Priyom.org. 2010. [online] Available at <http://priyom.org/>, accessed 21 March 2016.

Reynolds, Simon. 1998. Generation Ecstasy: into the world of techno and rave culture, Little Brown and Company.

Rowe, Robert. 1993. Interactive Music Systems: Machine Listening and Composing, MIT Press.

Schaeffer, Pierre. 1952. In Search of a Concrete Music (trans. Christine North and John Dack, 2012), Berkeley: University of California Press.

Smalley, Denis. 1986. 'Spectro-morphology and Structuring Processes' in The Language of Electroacoustic Music (ed. Simon Emmerson), The Macmillan Press Ltd., 61 - 96.

Smalley, Denis. 1991. 'Spatial experience in electro-acoustic music' in L'Espace du Son II, special edition of Lien: revue d'esthetique musicale, Ohain: Editions Musique et Recherches, 123 - 126.

Smalley, Denis. 1993. 'Defining transformations', Interface, 22(4), 279 - 300.

Smalley, Denis. 1996. 'The Listening Imagination: Listening in the Electroacoustic Era', Contemporary Music Review, 13(2), Routledge, 77 - 107.

Smalley, Denis. 1997. 'Spectromorphology: explaining sound-shapes', Organised Sound, 2(2), Cambridge University Press, 107 - 126.

Smalley, Denis. 2007. 'Space-form and the acousmatic image', Organised Sound, 12(1), Cambridge University Press, 35 - 58.

Stansbie, Adam. 2013. 'The Acousmatic Musical Performance: An Ontological Investigation', unpublished doctoral thesis, City University London.

Wilson, Scott and Harrison, Jonty. 2010. 'Rethinking the BEAST: Recent developments in multichannel composition at Birmingham ElectroAcoustic Sound Theatre', Organised Sound, 15(3), Cambridge University Press, 239 - 250.

Wishart, Trevor. 1986. 'Sound Symbols and Landscapes' in The Language of Electroacoustic Music (ed. Simon Emmerson), The Macmillan Press Ltd., 41 - 60.

Young, John. 2004. 'Sound morphology and the articulation of structure in electroacoustic music', Organised Sound, 9(1), Cambridge University Press, 7 - 14.
Discography
Autechre. 2001. Confield, Warp Records, WARPCD128.
Autechre. 2013. Exai, Warp Records, WARPCD234.
Berezan, David. 2003. ‘Cyclo’, La face cachée, Empreintes DIGITALes, IMED
0896 (2008).
Blackburn, Manuella.
2011.
‘Switched on’, Formes audibles, Empreintes
DIGITALes, IMED 12117 (2012).
Conet Project, The. 1997. Recordings of Shortwave Number Stations, Irdial
	Discs, 59ird tcp1, 59ird tcp1b.
Fell, Mark and Lázár, Gábor. 2015. The Neurobiology Of Moral Decision Making,
The Death Of Rave, RAVE010.
Lewis, Andrew. 2012. ‘Lexicon’, Au-delà, Empreintes DIGITALes, IMED 13125
(2013).
Moore, Adrian. 1997. ‘Study in Ink’, Traces, Empreintes DIGITALes, IMED 0053
(2000).
Parmegiani, Bernard. 1967. ‘Capture éphémère’, La mémoire des sons,
	Ina-GRM, INA_C 2019 (2002).
Parmegiani, Bernard. 1975. ‘Géologie Sonore’, De natura sonorum, Ina-GRM,
AM 714_01 (1976).
Saul, Daniel. 2012. ‘Jaws’, unpublished composition.
Saul, Daniel. 2012. ‘Blow’, unpublished composition.
Smalley, Denis. 1974. ‘Pentes’, Sources / scènes, Empreintes DIGITALes, IMED
0054 (2000).
Whitman, Keith Fullerton. 2012. ‘Occlusion (Rue De Bitche)’, Occlusions; Real
Time Music For Hybrid Digital-Analogue Modular Synthesizer, Editions
Mego, DeMEGO 026 (2012).
Appendix A: Programme notes and key performances
Frictions/Storms (2013)
8-channel fixed media
Duration: 12:17
Frictions/Storms explores source materials linked by friction as the integral
cause of sound generation. Sources include the push-pull of sawing wood, the
back-and-forth action of bowing a violin, the dragging of clay tiles across one
another, and ceramic tiles struck together (producing resonances).
Recorded materials were heavily processed to create textural passages that recall
shifting weather patterns, heavy winds, (electronic) storms and thunder. The
identifiable sound of the violin provides a degree of grounding in a piece
employing prominent use of third-order and remote surrogate sound
transformations.
Key performances:
• MANTIS Festival, University of Manchester, UK. 3 March 2013 (premiere).
• Toronto Electroacoustic Symposium (CEC/NAISA co-presentation), Wychwood
Barns, Toronto, Canada. 15 August 2013.
• MANTIS Curated Concert, De Montfort University, Leicester, UK. 15 January
2014.
Rise (2013)
Stereo fixed media
Duration: 12:10
Rise explores the application of abstract transformation types as primary sound
materials, the use of reductive transformation processes (such as distortion and bit
reduction), space in the stereo image and expectation in acousmatic composition.
The work features remote sound materials that are given coherence through the
application of what Denis Smalley identifies as structural functions; these deal with
expectation and the possible predicted directionality of a piece of music. During
the developmental stage I identified what were to become two key texture
passages in the work; using Smalley’s descriptors, these suggested classifications
of arrival, statement and prolongation. The creation of contrasting sections and
transitions that direct motion towards and away from these focal points provides
flow and structural development. Further coherence is achieved through the use of
related variations of gestures presented in a series of transformational states
throughout the work. Source bonding in Rise is therefore explored via aurally
perceivable relationships between featured sound-shapes, as opposed to notions
of (identifiable) real-world sources and causes.
Key performances:
• Eighth Biennial International Conference on Music since 1900, Liverpool Hope
University, UK. 11 September 2013 (premiere).
• MANTIS Fall Festival, University of Manchester, UK. 27 October 2013.
• Embracing Rhythm, Welcoming Abstraction Sonic Fusion Conference, University
of Salford, UK. 8 November 2013.
• Duration Concert, UCLan, Preston, UK. 22 April 2015.
Glitches/Trajectories (2014)
8-channel fixed media
Duration: 11:29
This piece, as the title suggests, explores audio faults (digital ‘glitches’) and space
(specifically trajectories of sound) articulated through an 8-channel image. I chose
to work with sequences of audio containing digital faults created through simple
subversion of audio playback and transformation tools. Denis Smalley’s
spectromorphological vocabulary suits discussion of the work; the focus throughout is
on behaviour, and on motion and growth processes. Earlier sections contain a degree
of imitative and reactionary behaviour (exploring activity/inactivity, instability,
emergence/disappearance and empty/full spectral density). Later, spatially
trajectorial sound materials explore interaction and agglomeration/dissipation
growth processes. As the composition came into focus I found the lines between
gesture and texture becoming increasingly blurred. This is emphasised through the
structuring of a final extended section featuring variations of sound materials that
may be perceived as equal parts gesture and texture, exploring perspectival space
and vectorial space in circumspace.
Key performances:
• MANTIS 10 Year Anniversary Concerts, University of Manchester, UK. 2 March
2014 (premiere).
• Sonic Fusion MANTIS Concert, Media City, University of Salford, UK. 3 April
2014.
• METANAST Concert, Underland, Manchester, UK. 9 April 2014.
• New York Electroacoustic Music Festival, Abrons Arts Centre, New York, USA.
5 June 2014.
• Sound, Sight, Space, Play Conference, De Montfort University, Leicester, UK. 18
June 2014.
• ICMC/SMC, Onassis Cultural Centre, Athens, Greece. 18 September 2014.
• Sonic Fusion Festival, Media City, University of Salford, UK. 19 February 2015.
• iFIMPaC, Leeds College of Music, UK. 12 March 2015.
• Sound As Being, Lancaster University, UK. 20 March 2015.
Transmissions/Intercepts (2015)
5-channel fixed media
Duration: 24:32
Transmissions/Intercepts is a large-scale multi-channel work themed on the
mysterious undisclosed soundworld of government shortwave radio broadcasts
known as number stations. These broadcasts (widely understood to be a form of
spy code) may be intercepted by anyone in possession of a shortwave radio, and
generally take the form of a brief ‘tune-in’ tone or melody, followed by several
minutes of Morse code or a voice relaying a sequence of numbers, concluding
with a signifying ‘end’ or ‘out’ message. There is an eerie, lifeless quality to the
broadcasts; the voices themselves are clearly automated, and it is in the merging
of what Denis Smalley terms utterance space (space produced by the human
voice), mechanised space (identifiable as non-human in causality), and mediatic
space (space associated with communications, mass media, and broadcast), that
a basis for sonic exploration is found. The piece therefore focuses on the source
bonded qualities of the human voice in conjunction with abstract sound materials,
in attempts to produce a work rich in electroacoustic musical language.
Throughout the piece the voice is explored as a sound object; numbers relayed,
words spoken (the phonetic alphabet), and numbers read out in different
languages (number stations are a global phenomenon) have no inherent meaning
beyond that of sounding words or words associated with broadcast. I also opt for
tonal content to feature in the work; stable pitch drones may imply a connection to
the concept of the piece (as metaphor for multiple continuant radio broadcast
streams to be intercepted). In addition, tonal content establishes a degree of
musical grounding for the listener, while providing an appropriate texture-setting in
which remote and noise-based gestural materials unfold.
Key Performances:
• MANTIS Festival, University of Manchester, UK. 28 February 2015 (premiere).
• Musical Chit-chat Concert, Contact Theatre, Manchester, UK. 14 April 2015.
Reductions/Expanses (2015)
8-channel fixed media
Duration: 13:39
This piece focuses on the production of expansive perspectival space and notions
of suspended and extended time. By reducing frequency content of surround
sound materials it becomes possible to create the illusion of spatial transcendence
(events occurring beyond the performed space, as defined by the concert hall and
loudspeaker placements within).
Structurally in two halves, Reductions/Expanses features a shorter opening
section that explores clustered tonality and spectral reduction resulting in a murky
soundworld. In contrast, the second half focuses on tonality via the merging of
abstract resonant content (developed from metallic objects including sheet metal,
iron rods and U-shaped iron ground hooks) with more grounded, tonally-based
materials created from source recordings of (attack-decay) acoustic guitar notes
and chords. Guitar recordings have in part been (aurally) selected to mirror
component spectral content found in the metal resonances, allowing the possibility
of timbral metamorphosis to play a compositional role. The illusion of extended
and suspended time is achieved via several processes, including the creation,
layering and spatialisation of multiple continuant spectral resonances (via granular
synthesis) resulting in texture-carried pericentral spatial motion. Behaviourally
active gestural materials also feature, providing timbral and musical contrast while
a textural dominance is maintained throughout the work.
Key performances:
• MANTIS Festival, University of Manchester, UK. 17 October 2015 (premiere).
• Echocroma XIV, Leeds Beckett University, UK. 24 November 2015.
• New Music North West Festival, Martin Harris Centre, University of Manchester,
UK. 25 January 2016.
• iFIMPaC, Leeds College of Music, UK. 11 March 2016.
Iteration/Banger (2016)
8-channel fixed media
Duration: 7:51
Iteration/Banger is an intense electronics workout inspired by rave and post-rave
music, incorporating genre-specific sound materials and developed in the
Max/MSP programming environment. Taking the principle of audio-rate sequencing as a starting point
(to generate timed events with a phasor~ ramp object for sample-accurate
sequencing), the main programming patch manipulates a primary sine wave, up to
three synthesised kick drums, additional gated and processed gestural noise
materials and some simple FM synthesis. The resulting computer instrument is
capable of producing unpredictable rhythmic output with timbral variation. The
output is further shaped through several complex effects processing chains via
Ableton Live. Finally, the stereo signal from Live is sent to a third transformative
stage (a second Max patch) designed to manipulate sound spatialisation for an
8-channel output. Materials were generated using this three-stage process, then
selected and organised by ear.
Structurally the work applies iteration on both micro and macro levels (as is often
the case in rave music); repetition defines the work: sections decay into dense
sound drones that then dissipate, followed by bursts of varied and complex
rhythm. While certain sound materials have been developed to create a sense of
vectorial spatial movement and perspectival space-forms, the work primarily
features chance spatial outcomes achieved through aleatoric processes, and the
majority of sound materials, by design, exist in proximate circumspace.
Key performances:
• MANTIS Festival, University of Manchester, UK. 6 March 2016 (premiere).
• Off the Beaten Track curated by Matthew Bourne, proceedings of the iFIMPaC
conference, Belgrave Music Hall, Leeds, UK. 11 March 2016 (stereo live
electronics version, premiere performance).
Appendix B: Iteration/Banger technical information
The following elaborates on the functionality of the two Max patches (included in
Appendix_C/Software) and the Ableton Live setup designed and implemented in
the production of Iteration/Banger, producing a three-stage signal path (see 6.2
Development in Max and Live and Figure 22).
B.1 Patch 1: sound generation and sound triggering
Figure 25: Reshaping phasor~ ramp signal via log~.
Sine wave shaping
Taking the principle of audio-rate sequencing as a starting point (to generate timed
events with a phasor~ signal ramp object for sample-accurate sequencing), the
signal generated from a phasor~ object (ramping from 0. to 1. at a rate determined
by an inputted frequency value) acts as a master clock [137]. This signal is used to
create and control a combination of (synthesised) real-time shaped elements and
pre-rendered audio, triggered through chance procedures as dictated by the
patch’s algorithms.
The phasor~ signal is fed into a log~ object that calculates and outputs a signal
composed of the logarithms of its input values, determined by a given logarithmic
base value (see Figure 25 for a visualisation of the reshaping of the phasor ramp
signal via log~). The new signal output from log~ is multiplied by a value of 1000
and sent to the frequency inlet of a cycle~ object (a sinusoidal oscillator), now
outputting the multiplied signal within the audible frequency range. The resulting
output from cycle~ is a signal comprising stable iterative variations of high-to-low
sine wave sweeps. The rate of iteration is determined by the frequency of the
phasor~ (in the case of Iteration/Banger, the frequency value most maintained is
10).
The phasor~ is also used to create a sample-accurate bang (a trigger message
also outputting at the rate determined by the frequency of the master phasor~).
This bang is used to generate random numerical values that are then scaled and
sent to the log~ base value inlet. By feeding the log~ base value inlet a stream of
random values (generated by a random object and scaled to be within a given
minimum/maximum range as determined by arguments given to a scale object),
the signal output values from log~ are recalculated, reshaping the audio output of
cycle~, resulting in timbral variation (a constant reshaping of high to low frequency
sweeps, allowing different frequency ranges to be more or less audible on each
iteration). The resulting audio output is a rapid, stable stream of iterations with
chance timbral variations. This real-time generated sine wave output formed the
basis for the creation of Iteration/Banger.
[137] See the Max online reference for more information on audio-rate sequencing.
[online] Available at <https://cycling74.com/wiki/index.php?title=MSP_Sequencing_Tutorial_1:_Audio-Rate_Sequencing>, accessed 4 May 2016.
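The phasor~ → log~ → cycle~ chain described above can be sketched offline in Python. This is a hypothetical NumPy approximation, not the patch itself; the sample rate, the clipping range and the 0.00000001 constant for negative bases follow the description in this appendix:

```python
import numpy as np

SR = 44100  # sample rate (an assumption; the patch runs at the session rate)

def phasor(freq, n, sr=SR):
    # phasor~: a signal ramping from 0. to 1. at the given frequency
    return (np.arange(n) * freq / sr) % 1.0

def log_reshape(ramp, base):
    # log~: per-sample logarithm of the input in the given base;
    # a negative base makes log~ emit a constant 0.00000001
    if base <= 0:
        return np.full_like(ramp, 1e-8)
    with np.errstate(divide='ignore'):
        out = np.log(ramp) / np.log(base)
    out[~np.isfinite(out)] = 0.0
    return np.clip(out, 0.0, 20.0)   # keeps the x1000 scaling audible

def cycle(freq_sig, sr=SR):
    # cycle~ driven at signal rate: integrate instantaneous frequency
    phase = np.cumsum(freq_sig) / sr
    return np.sin(2.0 * np.pi * phase)

ramp = phasor(10, SR)                    # 10 Hz master clock, one second
freq = log_reshape(ramp, 0.5) * 1000.0   # reshape, scale into audio range
audio = cycle(freq)                      # iterated high-to-low sine sweeps
```

With a base between 0. and 1., each ramp cycle yields a falling frequency curve, so the audible result is the stream of high-to-low sweeps described above.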
Figure 26: Logarithmic output of the phasor~ signal (four examples).
Chance silences
Base values between 0. and 1. fed to the log~ create high to low frequency
sweeps. Feeding the log~ base a negative value results in an output of
0.00000001; when this signal is multiplied and sent to the cycle~ object it results in
silence. By scaling the minimum/maximum range of random values fed to the log~
base to fall between positive and negative values (for example, minimum value:
-3, maximum value: 0.9999), the regularity of the log~ output signal (as determined
by the frequency of the phasor~) becomes fragmented; here chance procedure
determines that the output from cycle~ will either be a high-to-low sine sweep, or a
silence, resulting in indeterminate rhythmic output. Figure 26 displays four
examples of reshaped phasor ramp signals via log~ with different base values.
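This sound/silence mechanism can be reduced to a minimal sketch; the scale mapping mirrors Max's linear scale object, while the 0–127 input range and the 16-step length are illustrative assumptions:

```python
import random

def scale(x, in_lo, in_hi, out_lo, out_hi):
    # Max scale object: linear mapping from input range to output range
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

random.seed(1)  # fixed seed so the sketch is repeatable
# mixed-sign base range from the example above: -3 to 0.9999
bases = [scale(random.randint(0, 127), 0, 127, -3.0, 0.9999)
         for _ in range(16)]
# a positive base produces a sweep; a negative base silences cycle~
pattern = ['sweep' if b > 0 else 'rest' for b in bases]
```

Because roughly three quarters of the mapped range is negative, most steps fall silent, producing the fragmented, indeterminate rhythm described above.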
Synchronicity, triggering of rendered audio and real-time FM gestures
A stream of random values is created in the PHASOR_LOG_BASE
MANIPULATOR sub-patch, triggered by the sample-accurate bang. These are
sent to multiple scale objects that feed output to the log~, in turn changing the
timbral shape of the cycle~ sine wave output. Toggle objects (on/off switches) and
gswitch2 objects (allowing the switching of one input between two outputs) are
used to switch data flows, sending the random values outputted to selected scale
object inputs. Each scale object is given different minimum/maximum range
arguments. Output values from the current active scale object (as chosen via
toggle switch selection) are sent to the log~ base inlet. By sending random values
to different scale objects, the sound/silence output ratio is changed. The scale
objects therefore function as pre-set states that vary both the amount of chance
rhythmic activity and also the timbral shaping of the sine wave iterations.
The random values are also sent to the KICKPATCH sub-patcher, where
synthesised kick drum sounds are triggered (on receiving a positive value) using
buffer~ and groove~ objects for storage and playback of pre-rendered audio.
Playback of audio therefore occurs in conjunction with the cycle~ object’s audio
output, synchronising the shaped sine wave iterations with the kick drum output.
Two settings are available to change the output of the kick drum sub-patcher. The
first state restricts the output to trigger a single kick drum sample. The second
state uses counter and gate objects to access and trigger up to three kick drum
samples. Here, the output of pre-rendered, triggered audio is sequential (a fixed
order of events in which positive values to log~ create bangs that are output from
the gate object outlets in sequence, accessing the three groove~ objects
containing kick drum samples). The use of an odd-numbered 19-step sequence in
combination with the non-triggering of samples on receipt of negative values
to log~ creates indeterminable variations in the output, producing aleatoric
rhythmic outcomes.
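One plausible reading of this counter/gate routing can be sketched in Python; the kick names are hypothetical, and the exact counter wiring in the patch may differ:

```python
import random

KICKS = ['kick_A', 'kick_B', 'kick_C']   # hypothetical names for the three groove~ samples

def kick_sequence(log_base_values):
    # positive values advance a counter that selects the next of three
    # kicks in fixed order; negative values trigger nothing, so the
    # 3-kick cycle drifts against the 19-step value sequence
    out, counter = [], 0
    for v in log_base_values:
        if v > 0:
            out.append(KICKS[counter % len(KICKS)])
            counter += 1
        else:
            out.append(None)
    return out

random.seed(0)
steps = [random.uniform(-3.0, 0.9999) for _ in range(19)]  # 19-step sequence
sequence = kick_sequence(steps)
```

Because 19 is not divisible by 3, and because random negative values skip triggers, the kick ordering never locks to the sequence length, which is one way to read the "undeterminable variations" described above.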
In addition, generated random values are scaled and sent into split objects. The
split objects are given arguments that send positive and negative input values to
separate outlets, allowing the creation of discrete positive or negative bang
messages. These bangs are then used to trigger discrete events. For example,
when FM rhythm (generated by the FM_addition sub-patcher) is engaged and the
toggle object connected to inlet 4 of PHASOR_LOG_BASE MANIPULATOR is
also engaged, additional FM gestural output is produced independently of the sine
wave output from cycle~; here, FM gestures continue to be triggered when the
sine wave output is silent, creating further rhythmic diversity and syncopation.
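The split routing itself reduces to a one-line sketch; the range arguments 0 and 127 are illustrative:

```python
def split(value, lo, hi):
    # Max split object: values inside [lo, hi] exit the left outlet,
    # all other values exit the right outlet
    return (value, None) if lo <= value <= hi else (None, value)

# discrete positive/negative bang paths, as in the patch
left, right = split(-2, 0, 127)   # a negative value takes the right path
```

Each outlet can then fire its own bang, giving independent positive-value and negative-value trigger paths.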
The sub-patcher FM_addition generates short gestural bursts of randomised FM
synthesis using the Max simpleFM~ sub-patcher. Random objects triggered by
bangs received from the PHASOR_LOG_BASE MANIPULATOR sub-patcher
generate values that alter the carrier frequency and harmonicity ratio. The
modulation index and amplitude are shaped by pre-set envelopes, determined by
two function objects (breakpoint function editors). The toggle switch attached to
inlet 2 of FM_addition provides two settings for the duration of gestures (while
retaining the pre-set envelope shapes), resulting in either very short or longer
gestures (both durations are under 1000 ms).
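The gestures can be approximated with the standard two-oscillator FM formula that simpleFM~ implements; the parameter ranges and duration below are illustrative assumptions, and the function-object envelopes are omitted:

```python
import math
import random

def simple_fm(t, carrier, harm_ratio, index, amp):
    # modulator frequency = carrier * harmonicity ratio; the modulation
    # index scales the modulator's contribution to the carrier phase
    mod = math.sin(2.0 * math.pi * carrier * harm_ratio * t)
    return amp * math.sin(2.0 * math.pi * carrier * t + index * mod)

random.seed(3)
carrier = random.uniform(100.0, 2000.0)   # randomised per bang
ratio = random.uniform(0.5, 4.0)          # harmonicity ratio
sr, dur = 44100, 0.25                     # gestures stay under 1000 ms
gesture = [simple_fm(n / sr, carrier, ratio, 3.0, 1.0)
           for n in range(int(sr * dur))]
```

Randomising the carrier and ratio on each bang is what gives the bursts their unstable, gesture-like character.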
An alternate triggering procedure
The sub-patcher MORE_RHYTHM_RANDOMS produces chance output of
triggered audio and silences via an alternate method. Here the sub-patcher is
constantly fed the sample-accurate bang signal, and a selection of 25 groove~
objects are continually and randomly selected and triggered in no fixed sequence
via a gate object. Some of the buffer~ objects contain no audio and therefore when
triggered result in silence. Random values between 0 and 29 are sent to the gate
object (dictating the outlet through which the bang signal is sent), although the
gate object itself has only 25 outlets. Therefore when the gate object receives
values between 26 and 29 this also results in silence, creating further rhythmic
variation.
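This routing logic can be sketched as follows; the set of empty buffer~ slots is hypothetical, since in the patch which players are silent depends on which buffers are left unloaded:

```python
import random

N_OUTLETS = 25        # gate outlets, one per groove~ player
EMPTY = {4, 9, 17}    # hypothetical slots whose buffer~ holds no audio

def route(value):
    # gate object: values 1-25 pass the bang to that outlet; 0 closes
    # the gate (a Max convention) and 26-29 exceed the outlet count,
    # so both produce silence, as do players with empty buffers
    if not 1 <= value <= N_OUTLETS or value in EMPTY:
        return None
    return f"groove_{value}"

random.seed(7)
events = [route(random.randint(0, 29)) for _ in range(32)]
```

Silence thus arises from two independent sources, out-of-range gate values and empty buffers, compounding the rhythmic unpredictability.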
Possible changes to the state of the aleatoric audio output from Patch 1 include
timbral shaping, tempo adjustment (including acceleration/deceleration), active/
sparse output, rhythmic/non-rhythmic output and synchronised/syncopated output.
Please refer to Appendix_C/Tutorials/Patch_1_Tutorial.mov for a comprehensive
video introduction to Patch 1’s musical functionality.
B.2 Stage 2: Ableton Live real-time transformation and MIDI control
The Ableton Live session created for Iteration/Banger relies on third-party plug-ins
from several audio software companies (including iZotope, Waves Audio, and
Cycling 74’s Max for Live); as such, a copy of the original Ableton setup could
not be included here. Instead, an alternate session is provided that
demonstrates a reduced and simplified version of the Live processing chain, using
plug-ins native to Live 9 standard edition (see Appendix_C/Software/
Iteration_Banger_Ableton_Session/Iteration_Banger_Ableton_Session.als). Table
4 lists the key transformative processes featured in the original Live session, and
identifies their application to channels/audio outputted from Max Patch 1 (via
ReWire).
A Novation Launch Control XL MIDI interface is used to manipulate the Ableton
session; Figure 27 displays assignments of physical dials to software sends
(controlling dry/wet signals) and controller buttons to on/off toggle states of
plug-ins. Some toggles are used to control multiple plug-in states; for example, when
band-pass filtering is applied to Max channel outputs 1 and 2, the frequency
reduction results in a loss of amplitude. Therefore when band-pass is engaged a
utility object is also engaged in order to boost the volume to an appropriate level.
Figure 27: MIDI controller assignment overview.
For performance purposes the MIDI controller communicates with Ableton, and the
MacBook Pro’s trackpad is used to adjust states of toggle switches in Patch 1. The
udpsend and udpreceive objects included in the Max patches allow for various
Patch 2 spatialisation states to be controlled by activating toggle switches in Patch
1 (via Ethernet). See Appendix_C/Tutorials/Patch_2_Tutorial folder contents for
more information.
Channels 1/2
Sound type: sine wave shaped by phasor~ and log~.
Source: real-time generated; triggered by positive values to the log~ base.
Processing (Ableton Live), with resulting musical effect:
• Comb filtering (switching between two settings): tonality and transposition.
• Auto-panning/gating: random fragmentation of the stereo signal.
• Band-pass filter controlled by amplitude envelope: wah-wah-like effect.
• Long reverb/sidechain compression from 5/6 kick drums (reverb is placed before
compression in the signal path): spectral density/compactness.
• Long reverb 2 (via send): sustain and decay suggestive of perspectival space.
• Short delay (time varied by LFO): phase effect.
Channels 3/4
Sound type: FM synthesised gestures (two durations).
Source: real-time generated; triggered by positive values to the log~ base, or
independently.
Processing (Ableton Live), with resulting musical effect:
• None: two variation lengths of short unstable gestures.
Channels 5/6
Sound type: kick drums (single kick or three variations).
Source: pre-rendered playback; triggered by positive values to the log~ base.
Processing (Ableton Live), with resulting musical effect:
• Bass enhancement (Waves Maxbass plug-in): low frequency content boosted/
enhanced.
• Distortion (via send): saturation/spectral density.
Channels 7/8
Sound type: two alternate kick drums and snare-like noise burst.
Source: pre-rendered playback; triggered by the sample-accurate bang (triggered
empty buffer~ objects result in silence).
Processing (Ableton Live), with resulting musical effect:
• Frequency shifted up, distorted and low-pass filtered (controlled by LFO):
transforms kick drums into a new gesture (shorter noise-based variations), all with
real-time filter shaping.
• Gating: creates an attack-closed termination.
Table 4: Max Patch 1, 8-channel output to Live overview.
B.3 Patch 2: randomised automated spatialisation
Patch 2 receives audio output from Ableton and redistributes the stereo signal by
randomly outputting to any two of eight possible output channels at any one time.
This randomised, automated spatialisation process is achieved by scrambling and
repacking a list of channel numbers that are sent to a matrix~ object functioning in
non-binary mode. In this mode matrix inputs and outputs have variable linear gain
stages allowing for crossfading between audio signals, thereby avoiding
unwanted audio clicks. Bangs sent from Patch 1 (received by Patch 2 via udpsend
and udpreceive objects) change the rate of occurrence of spatial reassignments
and crossfade times (switching between a slow rate with more gradual crossfades
and a rapid rate with shorter crossfades). A further udpreceive object provides the
option to reassign spatial distribution in synchronisation with changes to the on/off
state of Patch 1’s DAC~, providing an alternate spatialisation method when the
DAC_DYNAMIC_ON_OFF sub-patcher in Patch 1 is engaged; this was
specifically designed for the generation of 8-channel audio featured in Section 5 of
Iteration/Banger (5:00 - 6:31, spectrally sparse passage).
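The scramble-and-crossfade behaviour can be sketched as follows. This is a simplified model: in the patch the matrix~ gain ramps run at signal rate, and the step count here is arbitrary:

```python
import random

N_CH = 8  # output channels

def pick_pair():
    # scramble the channel list and repack the first two entries,
    # mirroring the patch's random stereo-to-8-channel reassignment
    chans = list(range(N_CH))
    random.shuffle(chans)
    return chans[:2]

def crossfade_gains(old, new, steps):
    # non-binary matrix~: linear gain ramps between the outgoing and
    # incoming channel pairs, avoiding audible clicks
    frames = []
    for i in range(steps + 1):
        a = i / steps
        gains = [0.0] * N_CH
        for ch in old:
            gains[ch] += 1.0 - a   # old routing fades out
        for ch in new:
            gains[ch] += a         # new routing fades in
        frames.append(gains)
    return frames

random.seed(5)
frames = crossfade_gains([0, 1], pick_pair(), steps=4)
```

Switching between a slow rate with many steps and a rapid rate with few corresponds to the two crossfade behaviours selected by the incoming bangs.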
Patch 2 also features additional matrix~ objects in conjunction with sfplay~
objects that allow up to two 8-channel audio files, and one stereo file, to be
randomly redistributed spatially. The stereo sfplay~ features pitch-shifting and
time-stretching functionality. See Appendix_C/Tutorials/Patch_2_Tutorial folder
contents for a tutorial video of Patch 2.