As a research-driven composer and sound artist, I have authored more than a hundred media artworks and compositions.
PROGRAM NOTE
Per Magnus Lindborg: Man bör kalla saker och ting vid deras rätta namn.
For flute, oboe, clarinet, bassoon, horn, percussion, piano, 2 violins, viola, cello and double bass. Duration 10 minutes. Written for Ensemble Bit20 for the occasion of their Ho Chi Minh City concert, 9 December 2007.
After working for two years, across eight pieces, with transcriptions and musicalisations of the voice of Mao Zedong, the opportunity to work with a recording of the Swedish Prime Minister Olof Palme was welcome. Ever since he entered politics in the late 1950s, Palme was on the front line, supporting the oppressed against the profiteers. Throughout the US war in Vietnam and Laos, Palme insisted on peace initiatives and troop withdrawal, and was one of very few Western leaders who actively supported the North Vietnamese government. When the Paris Peace Conference broke down in 1972 and President Nixon subsequently ordered massive punitive strikes against Hanoi and Haiphong, Palme did not mince his words. A short but carefully worded statement on Swedish Radio, barely two minutes long, caused an international uproar and the withdrawal of the US ambassador from Stockholm.
”Man bör kalla saker och ting vid deras rätta namn: det som pågår idag, i Vietnam, det är en form av tortyr. Det kan inte finnas militära motiv för bombningar i denna skala. Militära talesmän i Saigon har sagt att det inte förekommer någon uppladdning ifrån Nordvietnamesernas sida. Det kan inte heller rimligen bero på någon halsstarrighet från Vietnamesernas sida vid förhandlingsbordet. Alla kommentatorer är överens om att det främsta motståndet mot Oktoberöverenskommelsen i Paris har givits utav presidenten Thieu. Det man nu gör, det är att plåga människor - plåga en nation, för att förödmjuka den; tvinga den till underkastelse, under maktspråk. Och därför är bombningarna ett illdåd, och av det har vi många i den moderna historien, och de är i allmänhet förbundna med ett namn: Guernica; Oradour; Babin Jar; Katyn; Lidice; Sharpeville; Treblinka. Där har våldet triumferat. Men eftervärldens dom har fallit hård över dem som burit ansvaret. Nu fogas ett nytt namn till raden: Hanoi, julen 1972.” (Olof Palme)
One should call things by their proper names: what is currently going on in Vietnam is a form of torture. There cannot be military reasons for bombings on such a scale. Military spokesmen in Saigon have said that there is no build-up of forces on the North Vietnamese side. Nor can it reasonably be due to any obstinacy on the Vietnamese side at the negotiation table. All commentators agree that the main resistance to the October agreement in Paris has come from President Thieu. What they are now doing is tormenting people - tormenting a nation in order to humiliate it, to force it into submission by the language of power. And therefore the bombings are an atrocity, and of those we have many in modern history; and they are generally connected with a name: Guernica; Oradour; Babin Jar; Katyn; Lidice; Sharpeville; Treblinka. There, violence triumphed. But posterity's judgement has fallen hard on those who bore the responsibility. Now a new name joins the list: Hanoi, Christmas 1972. (translation by PM Lindborg)
TreeTorika

TreeTorika for chamber orchestra [1111-sax-1111-acc-2prc-pf-2111] (22’, 2006) has been commissioned by Ensemble Ernst with support from Komponistrådet, Norway, and is dedicated to Thomas Rimul and Ensemble Ernst. The saxophone part is performed by Lars Lien.

In TreeTorika, I deal with rhetorics through recordings of Mao Zedong speeches, pursuing work from recent pieces, in particular Khreia, ReTreTorika, ConstipOrat and the Mao-variations. In these pieces, transcriptions of the voice are taken as “found” musical material. In ReTreTorika, bits of un-edited recording fuse with the saxophone and the ensemble, making Mao’s voice very much part of the sonic image, but in the more recent pieces, the original material is segmented and re-composed in various ways, thus distancing it from the source. Why work with rhetorics? I am interested in prosody: the way speakers form their speech and phrase their vocal delivery. Rhetorics is not about what is being said; it is about understanding how something is said. Oratory is not about clarity in public address; it is about manipulating an audience. Rhetorics is the study of oratory, and oratory is the subject of rhetorics. There is no such thing as written oratory. There can be preparatory notes for a speech, and an article can be read aloud in plenum – but even good writing may make bad oratory. I am interested in the way speakers use the rhetorical situation – kairos – the particular moment and necessity calling for someone to speak in public. Kairos demands that the orator (lawyer, teacher, politician…) gauge the situation and respond to it by adapting to an adequate mode of delivery. The art of speech-making lies in understanding the dynamical relations in the speaker+topic+listener system and using this knowledge to affect the mindset of the listeners. Great orators craft the situation to their advantage: it is all about convincing the listeners. So, is TreeTorika itself some kind of oratory? No, because it does not attempt to convince the listener to adopt any particular point of view. It is an abstraction: a musical drama involving aspects of rhetorics.

These pieces are based on analyses of recordings of speeches made by Mao Zedong. In ReTreTorika as well as in ConstipOrat, the electronics include recognizable bits of his voice alongside bits which are more or less treated, i.e. transformed, and which therefore point much less strongly to the source. But even when Mao’s voice is clearly heard, one need not know what he says. He spoke Hunanese, a Chinese dialect so strong that most non-Hunanese find it nigh impossible to understand. This was also the case in Mao’s lifetime, but he never gave it up, as he was obviously concerned with showing ”peasant pathos” in public appearances. One could argue that Mao, a singularly powerful individual but with relatively few public appearances, did not see oratory as a central means to his exercise of power. In this he differs from politicians such as Martin Luther King and Olof Palme, people for whom the scene, the microphone and the camera constituted the platform of a public mandate. Mao preferred other channels for his exercise of power. Although this is intensely disputed, I think Mao was a skilled orator. From a musical point of view, his delivery is not flamboyant. It is based on rhythmic stability and melodious prosody. The alternations between speaking and pausing are carefully worked out to maximise efficiency. The large-scale form, judging from the recordings, is given by the interaction with the audience, i.e. the rhythm between Mao speaking and the audience applauding. However, listening carefully to the recordings, I am positive that some equally dedicated and unscrupulous Party sound engineer tampered with the applause; identical bits appear in different places and can be identified by Mao’s persistent groaning and coughing. For the composition project, however, this fact is irrelevant, as I decided early on to accept the recordings as “found material”.

Now, some technical notes which may be of interest to those working with computer-assisted composition. The analysis of the recordings – which are rather noisy and narrow-band – was made using several software tools. For the voice line, I needed an accurate transcript. Initially, I tried fundamental tracking in Diphone and automatic transcription in OpenMusic/Kant, but found the results unsatisfactory. Instead, I programmed tools in MaxMSP to assist a more straightforward method, using my ears. Further programming was used to optimise the rhythmic notation, in particular with regard to phrase segmentation, the choice of pulse speed and measure markings. These are non-trivial tasks in the analysis phase of producing a performable sonification of a recording. The harmony was deduced by note-chunking of the melody and from partial tracking in AudioSculpt. The rhythmic layers in movements 3 and 6 were composed using the “rhythm constraint” library in OpenMusic. Notwithstanding the flexibility of these techniques, the computer could only assist me in forming the raw material for composition. Orchestration and detail work rely on standard composition techniques and, not to be forgotten, old-fashioned intuition.
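The quantisation steps described above can be sketched in a few lines. This is a hedged illustration only, not the actual MaxMSP/OpenMusic tools: it assumes a melody already transcribed by ear as (onset, frequency) pairs, and simply snaps pitches to a quarter-tone grid and onsets to a pulse grid at a chosen tempo.

```python
from math import log2

def hz_to_midi(f):
    """Convert a frequency in Hz to a (fractional) MIDI note number."""
    return 69 + 12 * log2(f / 440.0)

def quantize_pitch(f_hz, step=0.5):
    """Snap a frequency to the nearest quarter-tone (step = 0.5 semitones)."""
    return round(hz_to_midi(f_hz) / step) * step

def quantize_onsets(onsets_sec, bpm=60, subdivision=4):
    """Snap onset times (seconds) to a sixteenth-note grid at the given tempo."""
    grid = 60.0 / bpm / subdivision
    return [round(t / grid) * grid for t in onsets_sec]

# A short spoken phrase transcribed as (onset, f0) pairs -- made-up values
phrase = [(0.02, 196.0), (0.27, 220.0), (0.49, 246.9)]
pitches = [quantize_pitch(f) for _, f in phrase]          # quarter-tone MIDI values
onsets = quantize_onsets([t for t, _ in phrase], bpm=72)  # grid-aligned onsets
```

Choosing `bpm` and `subdivision` corresponds to the "choice of pulse speed" mentioned above: the same transcript yields very different notation depending on the grid it is snapped to.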

TreeTorika consists of six parts. Following analysis of the segments, I reinforced small differences in order to create quite different musical situations. The saxophone leads the first segment, with sparse accompaniment. The situation is inverted in the second segment, when the full ensemble takes over, playing a monolithic chorale. Setting out on a lighter note, the third segment gradually picks up mass and movement, before ending in aggressive saxophone solo phrases. The low-key fourth and fifth segments are grumpy and dark before releasing their tension in an ascending line. The final segment fuses aspects of the music of the preceding segments, leading to a cataclysmic coda where the ensemble-backed saxophone engages in a heated exchange with the bass drums.

Finally, a remark as to the title. TreeTorika has nothing to do with dendrochronology (although a working-title was the old carpenter’s adage “measure twice, cut once”) but is a contraction of the initials of those to whom it is dedicated, Thomas-Rimul-Ensemble-Ernst, and the word rhetorics.


PerMagnus Lindborg

Lindborg studied piano, trombone, mathematics, languages, classical music and jazz improvisation in his native Sweden before concentrating on composition. He obtained degrees from Oslo (State Academy) and Paris (Ircam and Sorbonne), receiving numerous awards and grants, including the Norwegian Young Artist Grant, twice. He also studied composition privately with Klas Torstensson in the Netherlands. A member of the Norwegian Society of Composers since 1996, he currently teaches electroacoustic composition at École Nationale de Musique in Montbéliard, France (www.enmenm.tk).

Lindborg’s main research interests lie in music interactivity and rhetorics as a metaphor for composition. Selected works are “Mao–variations” for trio, "Khreia" for orchestra, "ReTreTorika" for quartet and electronics. Springer Verlag has published an article on “leçons” for saxophone and computer. His pieces are featured on record labels such as ECM, Daphne and Ash International.

PerMagnus Lindborg is currently working on “SynTorika45”, a project with French ensemble Utopik, and an interactive computer work based on “RGBmix2-Pugnus” recorded by the Norwegian Poing trio. As member of the freq–out team (www.freq-out.org), he will participate in sound installations in Chiang Mai, Budapest and Vienna in 2007.
While Scriptor is lyrical and intimate, Rhetor is fiery and speech-like. Both pieces draw from Chinese sources. The trombone line in Scriptor meditates on a few words from a letter written by the wanderer-poet Li Pai (T'ang dynasty, 8th century A.D.) to a friend, reminiscing about happy days together, drinking, singing, feasting – and having to part. In Rhetor, the trombone retraces the voice of Mao Zedong, captured in an elegiac speech from 1949 where he honors military heroes fallen in battle thirty years earlier.
PROGRAM NOTE
The Mao-variations deal with rhetorics and use recordings of Mao Zedong speeches. I pursue work from my three most recent pieces, namely ReTreTorika, Rhetor fragment and ConstipOrat. In the Mao-variations, the music is developed on the level of rhythm, prosody and harmony in a fairly abstract way. Transcriptions of the original recordings are here taken as “found” musical material, which is segmented and re-composed in various ways, thus distancing it from the source. The attitude is different from ReTreTorika, where bits of un-edited recording fuse with the saxophone and the ensemble, making Mao’s voice very much part of the sonic image. Although the Mao-variations are scored for violin, cello and kantele, three acoustic instruments, the compositional process is more akin to the one employed in ConstipOrat, a piece for loudspeakers. The two pieces use the same Mao citations as their basis. While ConstipOrat develops them in the electroacoustic domain, the Mao-variations do so in the symbolic–compositional domain.
The Mao-variations consist of three parts. Each deals with one citation taken from a different speech. The excerpts were chosen for their particular rhetorical qualities with regard to situation, variation and style. Some of the acoustic qualities of the recordings have influenced the writing, in particular the violin echoing the cello in the second part.
The piece starts with a citation played in unison by the strings. Over three variations, the rhythmic and melodic complexity is gradually reduced to reveal the harmonic structure. The Mao excerpt was taken from the end of a 15-minute speech to a People’s Party conference in 1949, focusing on Party ideology. The speaker uses the even or dry rhetorical style, iskhnos.
The cello leads the second part. The material is developed in a double-ended process, transforming call-like notes into meandering lines of intricate prosody. The citation appears as one stage in the middle of the process. It is taken from a 1949 speech to a huge crowd, in which Mao announces the names of party workers appointed to higher office. In this speech, he could be said to employ the deinos style of grandiose and rhythmic delivery.
The third part is similar to the first, but the process is reversed. From a static harmony, more and more detail and rhythmic complexity are revealed, leading into the citation at the very end. The excerpt comes from the closing speech given to the Chinese Communist Party’s 1st national committee meeting in 1951, where Mao employs the middle rhetorical style, glafyros.
COMMISSION
The Mao-variations for violin, cello and kantele were commissioned by the Shingle Church Music Festival.
FIRST PERFORMANCE
The Mao-variations were first performed by Maria Puusari, Markus Hohti and Eija Kankaanranta at the Shingle Church Music Festival, Finland, in July 2006.
PROGRAM NOTE
Even though ReTreTorika is based on analyses and recordings of speeches made by Mao Zedong, the piece is not about him; it is about rhetorics. I am interested in the way that speakers use the rhetorical situation, which, in traditional terminology, is called kairos. The word refers to the particular moment and necessity calling for someone to express herself or himself in speech. Kairos demands that the speaker (orator, rhetorician, politician...) gauge the situation and respond to it by adapting to a particular mode of delivery. The art of public speech-making lies in understanding the dynamical relations in the speaker–topic–listener system and using this knowledge to affect the mindset of the listeners. Great orators craft the situation to their advantage: it is all about convincing the listeners.
So is ReTreTorika itself some kind of oratory? No, because it does not attempt to convince the listener to adopt any particular point of view. It is an abstraction: a musical drama about some aspects of rhetorics. I will nevertheless give some suggestions as to how one could – but not “should” – listen to the piece. One could hear in the saxophone an orator and in the other three instruments a listening crowd; after all, they do sometimes “applaud”. One could take the recorded voice as being Mao the rhetor and sympathise with the saxophonist’s struggle in imitating him; for the musician offers an interpretation of what he or she has listened to. One could imagine the computer’s voice as a metaphor for the attempt to reconcile the demagogue with the blind follower; for it depends entirely on the saxophone and the recordings. Or you could – and perhaps should – take ReTreTorika in some completely different way.
COMMISSION
ReTreTorika was commissioned by the Affinis Quartet with support from Norsk Komponistråd (the Norwegian Composers' Fund).
DEDICATION
ReTreTorika is dedicated to the Affinis Quartet. The composer is indebted to Dr Chan Hingyan and Dr Joyce Beetuan Koh for their assistance with the Mao Zedong recordings.
FIRST PERFORMANCE
ReTreTorika was first performed by the Affinis Quartet (Lars Lien - saxophone, Teodor Berg - percussion, Thomas Kjekstad - guitar, Jon Helge Sætre - piano) at NyMusikk Toneheim, Hamar, Norway, and at the Ilios Festival, Harstad, Norway, on 2 and 3 February 2006.
PROGRAM NOTE
SynTorika45 is a musical fantasy, based on analysis and free recomposition of earlier compositional work. It is the last piece in a series of six – TreTriTroi, ReTreTorika, Rhetor-Scriptor, Mao-variations, TreeTorika and SynTorika45 – and functions as a coda within this larger form. The subject of all the pieces is primarily rhetorics (on an abstract level) and, secondarily, one particular orator, Mao Zedong (on a material level). Recordings of speeches made between 1949 and 1956 are taken as “found material”. I have transcribed the recordings, then segmented, analysed and re-composed the symbols in various ways. The electronics likewise employ different techniques in transforming the sonic material. One of the main formal principles in the compositional work is the perceptual distance between the original recordings and the actual composition. As much as I hope that what I say here is not essential for the concert experience, it might be of some interest to the analytically inclined.
So what do I mean by “rhetorics” in the context of music composition? The term used to refer both to a topic of intellectual study and to a set of practical tools, used by politicians and lawyers, but also in the training of musicians. After the classicist epoch, it seems to have had less and less impact on compositional thinking. Also, within the sphere of politics, the abuse of mass communication – I am referring here to the fascist movements in the first half of the 20th century – has made “rhetorics” a negatively loaded word. However, the techniques of Gorgias, Quintilian, Perelman and others are not as such to be blamed. I think that composers today have the necessary distance to reinvestigate some aspects of rhetorical technique. For my part, I am interested in the way that speakers use the rhetorical situation, or kairos. The word refers to the moment of necessity calling for someone to express herself or himself in speech. Kairos demands that the speaker gauge the situation and respond to it by adapting a particular delivery. The way that prosody, speed of delivery and articulation change during a discourse is potentially musical. At least, an analysis of these parameters can allow the composer to extend his expressive palette. The art of public speech-making lies in understanding the dynamical relations in the speaker–topic–listener system and using this knowledge to affect the mindset of the listeners. Orators craft the situation to their advantage: as we know, rhetorics is about convincing the listeners. There are parallels with the role of the composer in the process of music creation, but I do not think the two situations are congruent. So is SynTorika45 itself some kind of oratory? No, it isn’t. It does not attempt to convince the listener to adopt any particular point of view. It is an abstraction: a musical fantasy, relating to some aspects of rhetorics.
TRANSPOSITION Score is transposed.
DURATION
6-8 minutes. The duration is flexible and may vary between performances.
ACCIDENTALS
Accidentals, including quarter-tones, are valid for the whole measure, in the same octave and for the same instrument, including trilled notes in parenthesis.

For further info, see rgbmix1-SAXCON
"rgbmix1-SAXCON" consists of 12 segments, organised in four groups of three segments each. For convenience, the groups are labelled "red", "green", "yellow" and "blue" rather than anything that would suggest a sequential order. Each score page corresponds to a colour and features one of the instruments in the quartet, thus:
yellow <--> baritone saxophone
blue <--> alto saxophone
green <--> tenor saxophone
red <--> soprano saxophone
On a score page, the three segments correspond to the three systems, and are referenced as "top", "mid" and "down", respectively.
All segments should be played attacca, i.e. without pause in between.
The duration of each segment is approximately 40 seconds.
The total duration of a performance is flexible (between 10 and 14 minutes).
The SI13 NTU/ADM Symposium on Sound and Interactivity in Singapore provided a meeting point for local researchers, artists, scholars and students working creatively with sound and interactivity, as well as the foundation for an issue exploring sound and interactivity in the Southeast Asian country.
The School of Art, Design and Media of Singapore’s Nanyang Technological University hosted the Symposium on Sound and Interactivity from 14–16 November 2013. A total of 15 artworks and 14 papers were selected by a review committee for presentation by 24 active participants during the three-day symposium. While all but four of the participants are residents of the island, they represent seventeen different countries, thus reflecting the cosmopolitan nature of Singapore in general and of its sound artists and researchers in particular. (1)
Thanks to funding from Nanyang’s CLASS conference scheme, Roger T. Dean (MARCS Institute, University of Western Sydney, Australia) and Diemo Schwarz (IRCAM, France) were invited as keynote speakers; they also performed in the concert that opened the symposium and contributed to the exhibition.
It is a pleasure to collaborate with eContact! in presenting a broad collection of articles emanating from this event, and to use these as a basis for an overview of sound art and related activities in Singapore. Eleven texts from the SI13 Proceedings have been edited for this issue. Joining them are two texts originally written for the catalogue of the “Sound: Latitudes and Attitudes” exhibition held at Singapore’s Institute of Contemporary Arts (7 February – 16 March 2014). Finally, in the guise of a “community report” on sound art activities in Singapore, I have contributed a “constructed multilogue” created from interviews with three sound art colleagues.
Welcome to this Special Issue of Array: Proceedings of Si15, the 2nd International Symposium on Sound and Interactivity.
The articles in the present issue originated in the Si15 Soundislands Festival, which was held in Singapore 18–23 August 2015. The festival events included five invited artist performances, two scientific keynotes and two days of proceedings, a commissioned sound installation, an afternoon of public talks, an internet panel, two pedagogic workshops, a concert with young performers, and more than fifty artworks and scientific papers in numerous forms and formats selected from an open call (http://soundislands.com/si15).
We are thrilled to present 20 articles, by 31 authors, emanating from Si15. The articles have been extended and thoroughly revised for this special issue of Array. They cover a range of topics related to aesthetics, perception, technology, and sound art. We hope that you will enjoy the fruits of the authors' labour and therein discover many a stimulating thought.
Jago stirs: something is strange. Ice-cold wind streams from the aircon and relentless chatter from the radio: "…it would not be without reason to deem it a ghost or a phantom formed by the brain…"1 Reality blur: yes, he must have drifted off. Yes, the taxi, but no, why have we stopped? What time is it? He breathes in heavily through the nose. Fog lifting: yes. The guest lecture at the Uni, voices of those students still lashing the insides of his skull. Jago searches for a foothold for memory. Faint whiff of tiare, plumeria: airport posters with not-so-secret voluptuous bodies. Why is he alone? Or, not exactly alone.
This chapter examines computer assisted analysis and composition (CAAC) techniques in relation to the composition of my piece TreeTorika for chamber orchestra. I describe methods for analysing the musical features of a recording of a speech by Mao Zedong, in order to extract compositional material such as global form, melody, harmony and rhythm, and for developing rhythmic material. The first part focuses on large-scale segmentation, melody transcription, quantification and quantization. Automatic transcription of the voice was discarded in favour of an aural method using tools in Amadeus and Max/MSP. The data were processed in OpenMusic to optimise the accuracy and readability of the notation. The harmonic context was derived from the transcribed melody and from AudioSculpt partial tracking and chord sequence analyses. The second part of this chapter describes one aspect of computer assisted composition, that is the use of the rhythm constraint library in OpenMusic to develop polyrhythmic textures. The flexibility of these techniques allowed the computer to assist me in all but the final phases of the work. In addition, attention is given to the artistic and political implications of using recordings of such a disputed public figure as Mao.
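The constraint-based rhythm generation mentioned in the abstract can be illustrated with a toy example. The actual rhythm constraint library is a Lisp-based OpenMusic extension; the Python sketch below is only an assumed, simplified analogue, enumerating by backtracking search all duration sequences that fill a bar under one sample constraint.

```python
def rhythms(bar_len, choices=(1, 2, 3, 4), no_repeat=True):
    """Enumerate duration sequences (in sixteenth-note units) that fill a bar
    exactly, under one toy constraint: consecutive durations must differ."""
    def search(prefix, remaining):
        if remaining == 0:
            yield tuple(prefix)
            return
        for d in choices:
            # Prune branches that overflow the bar or violate the constraint
            if d <= remaining and not (no_repeat and prefix and prefix[-1] == d):
                prefix.append(d)
                yield from search(prefix, remaining - d)
                prefix.pop()
    yield from search([], bar_len)

# All fillings of a 4/4 bar (16 sixteenths) satisfying the constraint
patterns = list(rhythms(16))
```

Stacking several such solutions, each generated under different constraints, gives the kind of polyrhythmic texture the chapter describes, with the composer choosing among the enumerated candidates.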
While at conservatory, I played several smaller piano pieces by Olivier Messiaen, parts of the Quartet for the End of Time, even a handful of the easier Vingt Regards. Our theory class analysed the Études, in particular the iconic Mode de valeurs, with its mysterious amalgam of total material serialism and intuitive composition. I heard the second performance of Éclairs sur l'au-delà, Messiaen's last great work, in Oslo in 1991, and it was truly splendid. However, I was not taken in by his sound – saturated and thoroughly sotte, to quote Pierre Boulez – and certainly not his rhythm – instructive, as in Mon langage, but so much less compelling and human than in Bartók, Berio or Holdsworth. When analysing scores, I was stunned by the straightforward, almost mechanical ordering of local harmony and rhythm, but even more surprised by the absence of understandable rules as to why this material had been composed in a particular way. The structure was simpler than in Bach or Berg, but more temperamental than in Carter. Exactly how this simplicity gave rise to splendour evaded me. What was I missing?
This paper aims at describing an approach to the music performance situation as a laboratory for investigating interactivity. I would like to present “Leçons pour un apprenti sourd-muet”1, where the basic idea is that of two improvisers, a saxophonist and a computer, engaged in a series of musical questions and responses. The situation is inspired by the Japanese shakuhachi tradition, where imitating the master performer is a prime element in the apprentice’s learning process. Through listening and imitation, the computer’s responses get closer to those of its master with each turn. In this sense, the computer’s playing emanates from the saxophonist’s phrases, and the interactivity in “Leçons” happens on the level of the composition.
There have been few empirical investigations of how individual differences influence the perception of the sonic environment. The present study included the Big Five traits and noise sensitivity as personality factors in two listening experiments (n = 43, n = 45). Recordings of urban and restaurant soundscapes that had been selected based on their type were rated for Pleasantness and Eventfulness using the Swedish Soundscape Quality Protocol. Multivariate multiple regression analysis showed that ratings depended on the type and loudness of both kinds of sonic environments and that the personality factors made a small yet significant contribution. Univariate models explained 48% (cross-validated adjusted R²) of the variation in Pleasantness ratings of urban soundscapes, and 35% of Eventfulness. For restaurant soundscapes the percentages explained were 22% and 21%, respectively. Emotional stability and noise sensitivity were notable predictors whose contribution to explaining the variation in quality ratings was between one-tenth and nearly half of the soundscape indicators, as measured by squared semipartial correlation. Further analysis revealed that 36% of noise sensitivity could be predicted by broad personality dimensions, replicating previous research. Our study lends empirical support to the hypothesis that personality traits have a significant though comparatively small influence on the perceived quality of sonic environments.
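The squared semipartial correlation used above has a simple operational definition: a predictor's unique share of explained variance, i.e. the drop in R² when that predictor is removed from the model. This is a hedged sketch with simulated data, not the paper's actual analysis or variables; the variable names in the comments are illustrative assumptions.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def squared_semipartial(X, y, j):
    """sr^2 of predictor j: full-model R^2 minus R^2 of the model without j."""
    return r_squared(X, y) - r_squared(np.delete(X, j, axis=1), y)

# Simulated data: three standardised predictors
# (think: loudness, noise sensitivity, emotional stability)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)

sr2 = squared_semipartial(X, y, 0)  # unique contribution of the first predictor
```

Because the models are nested, removing a predictor can never raise the in-sample R², so each sr² is non-negative and the values can be compared across predictors, as in the abstract's comparison between personality factors and soundscape indicators.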
Restaurants are complex environments engaging all our senses. More or less designable sound sources, such as background music, voices, and kitchen noises, influence the overall perception of the soundscape. Previous research suggested typologies of sounds in some environmental contexts, such as urban parks and offices, but there is no detailed account that is relevant to restaurants. We collected on-site data in 40 restaurants (n = 393), including perceptual ratings, free-form annotations of characteristic sounds and whether they were liked or not, and free-form descriptive words for the environment as a whole. The annotations were subjected to cladistic analysis, yielding a multi-level taxonomy of perceived sound sources in restaurants (SSR) with good construct validity and external robustness. Further analysis revealed that voice-related characteristic sounds including a 'people' specifier were more liked than those without it (d = 0.14 SD), possibly due to an emotional crossmodal association mechanism. Liking of characteristic sounds differed between the first and last annotations that respondents made (d = 0.21 SD), which might be due to an initially positive bias being countered by exposure to a task inducing a mode of critical listening. Comparing the SSR taxonomy with previous classifications, we believe it will prove useful for field research, simulation design, and sound perception theory.
To work flexibly with the sound design for The Locust Wrath, a multimedia dance performance on the topic of climate change, we developed a software for interactive sonification of climate data. An open- ended approach to parameter mapping allowed tweaking and improvisation during rehearsals, resulting in a large range of musical expression. The sonifications represented weather systems pushing through South-East Asia in complex patterns. The climate was rendered as a piece of electroacoustic music, whose compositional form - gesture, timbre, intensity, harmony, spatiality - was determined by the data. The article discusses aspects of aesthetic sonification, reports the process of developing the present work, and contextualises the design decisions within theories of crossmodal perception and listening modes.
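At its core, parameter-mapping sonification rescales each data stream into a perceptual range. A minimal, hypothetical sketch of such a mapping follows; the ranges, parameter names, and data fields are illustrative assumptions, and the actual Locust Wrath software was far more elaborate:

```python
def map_range(v, in_lo, in_hi, out_lo, out_hi, clamp=True):
    """Linearly rescale v from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (v - in_lo) / (in_hi - in_lo)
    if clamp:
        t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def sonify_sample(temperature_c, rainfall_mm):
    """Map one (hypothetical) climate data point to synthesis parameters."""
    return {
        # warmer -> higher pitch (MIDI note numbers)
        "pitch": map_range(temperature_c, 20.0, 35.0, 48.0, 84.0),
        # heavier rain -> louder (normalised amplitude)
        "amplitude": map_range(rainfall_mm, 0.0, 50.0, 0.1, 1.0),
    }
```

An open-ended design, as described above, would expose the in/out ranges (and the choice of target parameter) for live tweaking during rehearsal rather than fixing them in code.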
Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
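For reference, CIE Lab locates colours by lightness (L*) and two opponent axes (a*: green to red, b*: blue to yellow), relative to a reference white. A textbook sRGB-to-Lab conversion under the D65 white point can be sketched as follows; this is the standard formula, not the study's actual tablet implementation:

```python
def srgb_to_lab(r, g, b):
    """Convert sRGB components in [0, 1] to CIE L*a*b* (D65 reference white)."""
    def linearize(c):  # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    # linear RGB -> CIE XYZ (sRGB matrix, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 white point
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

In this space, the "dark blue" responses reported for sad music correspond to low L* together with negative b*.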
Sound is a multi-faceted phenomenon and a critical modality in all kinds of servicescapes. At restaurants, our senses are intensively stimulated. Restaurants are social places that depend on acoustic design for their success. Considering the large economic interests, surprisingly little empirical research on the psychoacoustics of restaurants is available. Contributing to theory building, this article proposes a typology of designed and non-designed sonic elements in restaurants. Results from a survey of 112 restaurants in Singapore are presented, with a focus on one element of the typology, namely interior design materials. The collected data included on-site sound level, audio recordings from which psychoacoustic descriptors such as Loudness and Sharpness were calculated, perceptual ratings using the Swedish Soundscape Quality Protocol, and annotations of physical features such as Occupancy. We introduced a measure, Priciness, to compare menu cost levels between the surveyed restaurants. Correlation analysis revealed several patterns: for example, Priciness was negatively correlated with Loudness. Analysis of annotations of interior design materials supported a classification of the restaurants into categories of Design Style and Food Style. These were investigated with MANOVA, revealing significant differences in psychoacoustic, physical, and perceptual features between categories among the surveyed restaurants: for example, restaurants serving Chinese food had the highest prevalence of stone materials, and Western-menu places were the least loud. Some implications for managers, acoustic designers, and researchers are discussed.
This text is a “constructed multilogue” oriented around a set of questions about sound art in Singapore. I have lived here since 2007 and felt that a “community report” should aim to probe recent history deeper than what I could possibly do on my own, in order to give a rich perspective of what is happening here today. I was very happy when Pete Kellock, Zul Mahmod and Mark Wong agreed to be interviewed. Each has a long-time involvement in the Singapore sound scene, in a different capacity. Pete is an electroacoustic music composer who has worked in research and entrepreneurship, and is a founder of muvee technologies. Zul is a multimedia artist and performer who has developed a rich personal expression, mixing sonic electronics, sculpture and robotics in playful ways. Mark is a writer and sound artist who has followed Singapore’s experimental scenes closely since the 1990s.
I sent the three of them a letter containing a range of observations I had made (which may or may not be entirely accurate) and questions (admittedly thorny and intended to provoke), including the following:
The geographical location and Singapore’s historic reason-to-be as a trading post has instilled a sense of ephemerality — people come and go, ideas and traditions too — as well as a need to develop contacts with the exterior. The arts scene in general seems to be largely a reflection of whatever the current trading priorities demand. In what way does the current local sound art reflect the larger forces within Singaporean society? Since art is mostly orally traded, how are its traditions nurtured and developed?
Around 2010, the Government seems to have indicated a new task for cultural workers, including sound artists and musicians: to define — create or discover, stitch up or steal — a “Singapore identity”. The Singapore Arts Festival shut down for two years while the think tanks were brewing. Will this funnel taxpayer money and (more importantly) people's attention towards folkloristic or museal music, rather than towards radical and/or intellectual sound art? At the same time, there is considerable commercial pressure to subsume music and sound listening into an experiential, multimodal, game-like and socially mediated lifestyle product. Are commercialization and identity-seeking two sides of the same coin — one side inflation-prone, and the other a possible counterfeit? Is there room for a “pure listening experience”, for example of electroacoustic music? Or is the future of sound art ineluctably intertwined with sculptural and visual elements?
Different kinds of creative people involved in sound art are entrepreneurs, programmers, academics, educators, curators and journalists. Which institutions nurture talent and bring audiences to meet new experiences? Where are the hothouses for developing ideas, craft, artistry, innovation and business?
The interviews, loosely structured around these themes, were made in January and February 2014. Our conversations often took unexpected turns (mostly for the better). I diligently transcribed the recordings, and each interviewee made corrections and additions, before we gently nudged spoken language a little closer to prose. I then brought out a pair of big scissors and a large pot of coffee, and made a cut-out collage, weaving the texts into the multilogue that follows. The idea has been to create an illusion of four people conversing with each other under the same roof. Deceit or not, at the very least, we all live and work on the same small island, somewhere in the deep southeast.
Singapore Voices is an interactive installation, integrating sound and image in a
series of touch-sensitive displays. Each display shows the portrait of an elderly person,
standing with the hand turned outwards, as if saying: “I built this nation”. Two displays
can be seen in Figure 1 below. When the visitor touches the hand or shoulder, they hear
a recording of the speaker’s voice. Chances are that the visitor will not be able to
understand the language spoken, but she or he will indeed grasp much of all that is, in a
manner of speaking, “outside” of the words - elements of prosody such as phrasing and
speech rhythm, but also voice colour that may hint at the emotional state of the person.
Then there is coughing, laughing, a hand clap and so forth. Such paralinguistic elements of
vocal communication are extremely important and furthermore, their meaning is quite
universal.
The present article presents the language situation in Singapore, the design and
underlying aesthetics of the installation’s sonic interactivity, and finally, recapitulates
some of the media discussions that the first public showing, in March 2009, prompted.
Part of an art and speech research project, the installation aims at bringing attention to
the multitude of languages that Singaporeans use on a daily basis, but also the fragility
of this linguistic soundscape. It is well-known that language is key to understanding an
intangible cultural heritage linked to an immigrant minority: not only that of its
geographical origins, but also its communal experience of migration, of diaspora, of
integration. Much of this heritage is in great danger of being lost in Singapore. The
installation presents eight voices: speakers of Hokkien, Teochew, Hainanese, Hakka,
Telegu, Tamil, Malayalam and Baba Malay. They are telling their own stories about
childhood, life during the war, cooking methods and recipes, and so forth. The custodians
of these languages are now in their 70s and 80s, and Singapore Voices places them in
focus as individuals. Through the interactive experience of the installation, visitors are
able to rediscover the intergenerational distance through listening to and physically
feeling their voices. In a condensed setting, they can experience and appreciate a part of
Singapore’s rich cultural heritage.
The interaction design is built from a principle where different combinations of
touching trigger selected excerpts from interviews. As the voices speak, the whole
display vibrates with the sound, and in this way, touching becomes a metaphor for the
necessary effort, on our part, to re-establish contact between generations: necessary, if
we want to understand the richness of the culture we are living in. Singapore Voices
lets the visitor sense the individuality, and musicality, of the voices.
Music interactivity is a sub-field of human-computer interaction studies. Interactive situations have different degrees of structural openness and musical “ludicity” or playfulness. Discussing music seems inherently impossible, since it is essentially a non-verbal activity. Music can produce an understanding (or at least prepare for an understanding) of creativity that is of an order neither verbal nor written. A human listener might perceive beauty of this kind in a particular music. But can machine-generated music be considered creative, and if so, wherein lies the creativity? What are the conceptual limits of notions such as instrument, computer and machine? A work of interactive music might be more pertinently described by the processes involved than by one or several instantiations. While humans spontaneously deal with multiple process descriptions (verbal, visual, kinetic…) and are very good at synthesising them, the computer is limited to handling processes describable in a formal language such as computer code. But if the code can be considered a score, does that not make a musician out of the computer? Since the dawn of computer music, composers have created musical systems employing artificial intelligence in different forms as tools for creative stimulus. A large part of music interactivity research concerns interface design, which involves ergonomics and concepts from traditional instrument making. I will show examples of how I work with interactivity in my compositions, from straightforward applications as composition tools to more complex artistic work.
Pacific Belltower tolls for you… to remind us of the fragility of the Earth's crust, and of the reality faced by people around the Pacific Ocean, exposed to the terrifying power of unpredictable earthquakes and volcanoes. The tower is installed at the centre of a public space, such as a lobby, using parametric 'beam' speakers and wall reflections to diffuse a surround soundscape of virtual bianzhong bells for one minute every half hour. Each peal is unique, generated in real time from Internet data about the most recently detected earthquake activity. The bell sounds are spatialised according to the geographical positions of the events, and their pitch and harmonicity reflect the epicentre depth and magnitude.
This paper studies the use of Gumowski-Mira maps for the sonic arts. Gumowski-Mira maps are a set of chaotic systems that produce many organic orbits resembling cells, flowers and other life forms, which has prompted mathematicians, and eventually artists, to study them. These maps carry a potential for use in the sonic arts, but until now such use has been non-existent. The paper describes two ways of using Gumowski-Mira maps: for synthesis and for spatialization. The synthesis approach, which runs in real time, takes the dynamical system output as the real and imaginary input to an inverse Fourier transform, thus directly sonifying the algorithm. The spatialization approach uses the shapes of Gumowski-Mira maps as spatial trajectories across the acoustic space, using the first 128 iterations of each map as audio particles. The shapes can change based on the maps' initial parameters. The maps are explored in live performance using Leap Motion and Cycling '74's MIRA for iPad as control interfaces for audio processing in SuperCollider. Examples are given in two works, Cells #1 and #2.
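For orientation, the recurrence behind a Gumowski-Mira map can be sketched in a few lines. The formulation and constants below are one common variant from the literature, chosen for illustration; the paper's own implementation ran in SuperCollider and may use a different variant and parameters:

```python
def gumowski_mira(mu, a=0.008, b=0.05, x0=0.1, y0=0.1, n=128):
    """Return the first n (x, y) points of a Gumowski-Mira orbit.

    Nonlinearity: f(x) = mu*x + 2*(1 - mu)*x^2 / (1 + x^2).
    """
    def f(x):
        return mu * x + 2.0 * (1.0 - mu) * x * x / (1.0 + x * x)

    pts = []
    x, y = x0, y0
    for _ in range(n):
        x_next = y + a * y * (1.0 - b * y * y) + f(x)
        y_next = -x + f(x_next)
        x, y = x_next, y_next
        pts.append((x, y))
    return pts

# 128 iterations, matching the per-map particle count mentioned above
orbit = gumowski_mira(mu=-0.496)
```

In the scheme described above, each (x, y) pair would feed the real and imaginary parts of an inverse Fourier transform frame for synthesis, or serve as a particle position for spatialization.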
In Project Time, A Theatre of Music, we examined ‘time’ as the domain where the old and the new coexist in Singapore, articulated through the meeting of Indian and Chinese drumming with interactive computer sound transformation. The work examines three perspectives: chronological time, through-time and outside time. A three-pronged metaphor relating skin, urbanism and immortality is proposed as a dissection tool to discuss the individual's response to physical change, a society's attitude towards change, and depersonalised memory's way of dealing with experience.
Mobile devices have been used in soundscape installations and performances for over a decade, often to emphasize social interaction. Multichannel sonification has been found to successfully represent data describing kinematic phenomena. However, there are few if any examples where these two approaches are combined. The Locust Wrath project has evolved in stages: first, as surround sonifications of climate data for a multimedia dance performance; then, as a frontal-display sound installation and as material in a live performance of ‘musical’ interactive sonification; and recently, as an audience-participation work. We developed a system for spatialized sonification of data using a server-client model with iOS devices. In two multimedia performances, the audience members’ iPhones were employed ad hoc to constitute a large auditory display. This paper describes the artistic background to the project, outlines its stages, and focuses on the design and implementation of the Locust Wrath client app.
Audio quality is known to cross-modally influence reaction speed, sense of presence, and visual quality. We designed an experiment to test the effect of audio quality on source localization. Stimuli with different MP3 compression rates, as a proxy for audio quality, were generated from drum samples. Participants (n = 18) estimated the position of a snare drum target while compression rate, masker, and target position were systematically manipulated in a full-factorial repeated-measures experiment design. Analysis of variance revealed that location accuracy was better in wide target positions than in narrow, with a medium effect size; and that the effect of target position was moderated by compression rate in different directions for wide and narrow targets. The results suggest that there might be two perceptual effects at play: one, whereby increased audio quality causes a widening of the soundstage, possibly via a SMARC-like mechanism, and two, whereby it enables higher localization accuracy. In the narrow target positions in this experiment, the two effects acted in opposite directions and largely cancelled each other out. In the wide target presentations, their effects were compounded and led to significant correlations between compression rate and localization error.
There is no exact model for the relationship between the autonomic nervous system (ANS) and evoked or perceived emotion. Music has long been a privileged field for exploration, while the contribution of soundscape research is more recent. It is known that health is influenced by the sonic environment, and the study presented here aimed to investigate the nature and strength of relationships between soundscape features and physiological responses linked to relaxation or stress. In a controlled experiment, seventeen healthy volunteers moved freely inside a physical installation while listening to soundscape recordings of nature, urban parks, eateries, and shops, reproduced using 3D ambisonic techniques. Physiological responses were continuously captured, then detrended, downsampled, and analysed with multivariate linear regression onto orthogonal acoustic and perceptual stimuli features that had been previously determined. Measures of Peripheral Temperature regressed onto SoundMass, an acoustic feature, and onto Calm-to-Chaotic, a perceptual feature, in each case with a moderately sized effect. A smaller effect was found for Heart Rate onto VariabilityFocus, an acoustic feature, and for Skin Conductance onto the interaction between the acoustic features. These relationships could be coherently accounted for by neurophysiological theory of how ANS activation leads to emotional relaxation or stress. We discuss limitations of the present study and considerations for future soundscape emotion research, as well as more immediate practical implications.
“On the String” is an installation-performance scored for sound sculptures, real-time synthesis, musicians, light display and an immersive sound diffusion system. The composition, inspired by String Theory, called for an immersive sound design that could be flexibly adapted to different diffusion situations. The software design aimed to realise a concept of sonic objects moving in four-dimensional space. The paper describes the loudspeaker setup and software implementation of the work. Other aspects of the work, such as composition, sculptural elements, real-time synthesis, staging and light design, have been described in [4]. A DVD is available [3].
The article outlines a psychoacoustically founded method to describe the acoustic performance of earphones in two dimensions, Spectral Shape and Stereo Image Coherence. In a test set of 14 typical earphones, these dimensions explained 66.2% of total variability in 11 acoustic features based on Bark band energy distribution. We designed an interactive Earphone Simulator software that allows smooth interpolation between measured earphones, and employed it in a controlled experiment (N=30). Results showed that the preferred ‘virtual earphone’ sound was different between two test conditions, silence and commuter noise, both in terms of gain level and spectral shape. We discuss possible development of the simulator design for use in perceptual research as well as in commercial applications.
Skalldans is an audiovisual improvisation piece for a solo laptop performer. Sound and video syntheses are piloted with a MIDI interface, a camera, and a Wiimote; also, audiovisual streams influence each other. The present text discusses some of the hardware and software points of interest, for example, how audio and video syntheses are piloted, how the streams interact, and the camera tracking method with a linear regression stabiliser. It also touches upon the sources of inspiration for the piece.
This article reports results from a study of perceived emotion
portrayal in cartoons by different groups of subjects. A set of
audiovisual stimuli was selected through a procedure in two steps.
First, 6 ‘judges’ evaluated a large number of random snippets from
all Mickey Mouse cartoons released between 1928 and 1935.
Analysis singled out the five films ranking highest in portraying
respectively anger, sadness, fear, joy and love/tenderness.
Subsequently, 4 judges made a continuous evaluation of emotion
portrayal in these films, and six maximally unambiguous
sequences were identified in each. The stimuli were presented to
two groups (N=33), one in which the subjects were expected to be
visually oriented, and one where they would tend to be more
aurally oriented, in three different ways: bimodally (original) and
unimodally, i.e. as an isolated sound or video track. We
investigated how group and modality conditions influenced the
subjects’ perception of the relative intensity of the five emotions,
as well as the sense of realism portrayed in the cartoon clips, and
how amusing they were found to be. Finally, we developed an
estimate for visual-aural orientation as a linear combination of
select self-reported variables, and tested it as a predictor for the
perception of medium dominance.
Abstract — This paper looks into how music composers’
rights to their work are dependent on the nature of a work
as well as its origin and underlying philosophy. Some
differences and overlaps between French Droit d’auteur,
Anglo-Saxon Copyright and the Copyleft movement will be
explored. I will briefly touch on examples from my own
compositional practice.
This dissertation is about sound in context. Since sensory processing is inherently multimodal, research in sound is necessarily multidisciplinary. The present work has been guided by principles of systematicity, ecological validity, complementarity of methods, and integration of science and art. The main tools to investigate the mediating relationship of people and environment through sound have been empiricism and psychophysics.
Four papers focus on perception. In paper A, urban soundscapes were reproduced in a 3D installation. Analysis of results from an experiment revealed correlations between acoustic features and physiological indicators of stress and relaxation. Paper B evaluated soundscapes of different type. Perceived quality was predicted not only by psychoacoustic descriptors but also personality traits. Sound reproduction quality was manipulated in paper D, causing two effects on source localisation which were explained by spatial and semantic crossmodal correspondences. Crossmodal correspondence was central in paper C, a study of colour association with music. A response interface employing CIE Lab colour space, a novelty in music emotion research, was developed. A mixed method approach supported an emotion mediation hypothesis, evidenced in regression models and participant interviews.
Three papers focus on design. Field surveys and acoustic measurements were carried out in restaurants. Paper E charted relations between acoustic, physical, and perceptual features, focussing on designable elements and materials. This investigation was pursued in Paper F where a taxonomy of sound sources was developed. Analysis of questionnaire data revealed perceptual and crossmodal effects. Lastly, paper G discussed how crossmodal correspondences facilitated creation of meaning in music by infusing ecologically founded sonification parameters with visual and spatial metaphors.
The seven papers constitute an investigation into how sound affects us, and what sound means to us.
This dissertation is about sound in context.

Since sensory processing is inherently multimodal, research in sound is necessarily multidisciplinary.

The present work has been guided by principles of systematicity, ecological validity, complementarity of methods, and integration of science and art.

The main tools to investigate the mediating relationship of people and environment through sound have been empiricism and psychophysics.

Four papers focus on perception, and three on design. They constitute an investigation into how sound affects us, and what sound means to us.
Preface

This text concerns the confluence of musical creation and the cognitive sciences. The main aim of the work was to carry out reconnaissance in the field. The present text is therefore necessarily incomplete, and will serve only as a starting point for more substantial research.
I have chosen musical interactivity as my theme, defined here as the dialogue between musician and machine. I will attempt to approach this phenomenon along multiple, overlapping paths. The theme will remain at the centre, and around it I will sketch its relation to several connected facts and phenomena, in particular: natural and formal languages, the question of the interface to creation, artificial intelligence, and the notions of memory and meaning. Taken together, these approaches constitute a study of aspects of interactive systems.
The vast subject of musical interactivity is embedded in the history of computer music, a history already at least half a century old. It will therefore be necessary to circumscribe the heart of the subject and trace concentric circles, or a spiral, around it, gaining knowledge that allows us to understand the phenomenon better. The procedure is a little like observing a star with the naked eye: look at it straight on and it disappears, since the retina is more sensitive to light at its periphery. The text is thus inevitably a collage of several studies of limited scope. Even so, the important aspects proper to the subject must be respected, the superfluous avoided, and as many connections made as possible. The research is guided by three questions. First, what is the material, in other words the components and processes, that constitutes the system proper, as used in the musical performance situation? Second, what is the relation between cognitive research and the technological tools at hand? Third, what implications have these technologies had, and what implications will they increasingly have, for musical creativity?
For several years, the concepts underlying this text have influenced my work as a composer and performer. I have experimented with them through works employing electroacoustic set-ups of varying configuration: “Beda+” (1995), “Tusalava” (1999), “Leçons pour un apprenti sourd-muet” (1998-9), “gin/gub” (2000), “Manifest”[1] (2000), “Project Time”[2] (2001), “sxfxs” (2001), “Extra Quality” (2001-2), ”D!sturbances 350–500”[3]… These pieces of music were born of a curiosity about the theoretical foundations of cognition and the workings of the human brain. In particular, I have devoted myself to analysing the playing situation in which an exchange of musical information and initiative takes place between musician and machine, each acting with an equivalent degree of participation within a complex system. I find that this playful situation can also serve as a research tool; it is a little like a laboratory, or a test bench, for trying out hypotheses, whether these are claims limited to music or broader ones, extending into unfamiliar territory.
Being a composer, I have tried to make the study neither too narrow nor strictly descriptive. I felt the need to analyse contemporary works with scientific components: the three projects studied are in fact still under development. The point of this study was to capture their raison d'être rather than to show their respective forms in a finalised state, which in any case is not their destiny. If musicology contented itself with demonstrating structures in repertoire works long since familiar, or shut itself up in a technocratic academicism, developing models that explain only what is already obvious to musicians, it would suffer from anaemia. In proposing a hypothesis, it must include predictive aspects. Better still if the models developed in support of the hypothesis were easily accessible and could serve the development of innovative new tools. That is desirable not only to stimulate creative production, but also to help us better understand the workings of creativity itself.
Musical activity in the general sense, for those who produce it as much as for those who enjoy it, is an essentially non-verbal exercise whose aim is the emergence of an understanding of human creativity of an order other than verbal or written. In studying creativity, and above all in formalising it, do we not risk denaturing it? Or perhaps creativity does not risk collapsing under scrutiny? What will remain of musical creation the day a machine has composed a work capable of moving listeners who know nothing of how it was made? Nevertheless, following William Faulkner's injunction to "kill your darlings", let us hope to transcend creativity as we know it and travel towards unheard-of musical lands.
PerMagnus Lindborg
Paris, September 2003
The article presents the work of Frédéric Voisin, and in particular the "Neuromuse" project, which is presented from several points of view: the techniques used, and sound examples. A reflection then attempts to locate the significance of this approach for contemporary musical creation, as well as directions for future work. Finally, an edited transcript of an interview with Voisin.
“Locust Wrath #2, sound installation” is a sonification of the predicted climate in the Mediterranean basin between 2015 and 2055. Daily values of precipitation (rainfall), atmospheric pressure, temperature, wind speed, and humidity are sampled in 25*12 = 300 geographical gridpoints in an area of approximately 6.6 million km2. The dataset is produced by scientists at the Tropical Marine Science Institute, National University of Singapore. The 300 points are sonified by an equal number of sonic units (using a modified plucked string model) that are spatially distributed. The 41 years of data are time-compressed into a 58-minute piece, a process known as audification. A system for interactive sonification was developed in Max (Cycling ’74) to create surround soundscapes for “Locust Wrath, a multimedia and dance performance” (Liong, Koh, Lindborg 2013). It provides real-time control over mapping and scaling of the data, thus allowing the output to be tuned to the desired musical character of the piece as well as the acoustics of the site.
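The time-compression described above can be sketched in a few lines. The figures (41 years of daily values, a 58-minute piece) come from the text; the variable names and the linear mapping are illustrative assumptions, not the actual Max patch:

```python
# Sketch of the time-compression mapping in Locust Wrath #2: 41 years of
# daily climate data rendered as a 58-minute piece. Names and the linear
# mapping are illustrative; the real system is a Max (Cycling '74) patch.

YEARS = 41
DAYS_PER_YEAR = 365
DAYS = YEARS * DAYS_PER_YEAR      # ~14,965 daily samples per gridpoint
PIECE_SECONDS = 58 * 60           # duration of the piece in seconds

SECONDS_PER_DAY = PIECE_SECONDS / DAYS   # each data day lasts ~0.23 s

def piece_time(day_index: int) -> float:
    """Playback time (in seconds) at which a given data day sounds."""
    return day_index * SECONDS_PER_DAY
```

At this rate, one simulated year passes in roughly 85 seconds of listening time.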
TimeTravel is a project to investigate creative real-time music and imagery over the internet. The aim is to let audiences and musicians in different physical locations establish a playful audiovisual communication based on the inherent musicality of voices and landscapes. In “tune in”, arctic Tromsø and tropical Singapore are connected in an interactive installation running over three days. The musicians at both locations engage in a series of open rehearsals, creating layers of sonic material, all coming together at the ‘exit concert’ performance on Thursday 2 February at 13.00 GMT (9 pm in Singapore, 2 pm in Tromsø).

It is part of a 3-year research project in which we aim to design and study online expressivity; that is, participatory telematic audio and video frameworks with both physical and virtual components, involving technologies for real-time analysis and synthesis (Max/MSP/Jitter), streaming (UltraVideo, JackTrip), and diffusion (Spat~), in order to create multisensorial interactive media art experiences.

TimeTravel is a collaboration between Tromsø Music Conservatory, Arctic Sinfonietta and Nanyang Technological University. It is supported in Norway by the Cultural Council, Sparebank Gift Fund (Gavefund), University of Tromsø, Verdione, and in Singapore by NTU’s Innovation Centre and School of Art, Design, Media, and a National Research Foundation grant.
We investigated the interaction between psychological and acoustic features in the perception of soundscapes. Participant features were estimated with the Ten-Item Personality Inventory (Gosling et al. 2003) and the Profile of Mood States for Adults (Terry et al. 1999, 2005), and acoustic features with computational tools such as MIRtoolbox (Lartillot 2011). We made ambisonic recordings of Singaporean everyday sonic environments and selected 12 excerpts of 90 seconds duration each, in 4 categories: city parks, rural parks, eateries and shops/markets. 43 participants rated soundscapes according to the Swedish Soundscape Quality Protocol (Axelsson et al. 2011), which uses 8 dimensions related to quality perception. Participants also grouped ‘blobs’ representing the stimuli according to a spatial metaphor and associated a colour to each.
A principal component analysis determined a set of acoustic features that span a 2-dimensional plane related to latent higher-level features that are relevant to soundscape perception. We tentatively named these dimensions Mass and Variability Focus; the first depends on loudness and spectral shape, the second on amplitude variability across temporal domains. A series of repeated-measures analyses of variance showed that there are patterns of significant correlations between perception ratings and the derived acoustic features in interaction with personality measures. Several of the interactions were linked to the personality trait Openness, and to Aural-Visual Orientation.
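A minimal sketch of the dimension-reduction step described above, using stand-in data; the real input would be per-excerpt acoustic features (loudness, spectral shape, amplitude variability) computed with tools such as MIRtoolbox:

```python
import numpy as np

# Stand-in feature matrix: 12 soundscape excerpts x 6 acoustic features.
# Random data for illustration only, not values from the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 6))

# Standardise, then PCA via SVD; the first two principal components play
# the role of the 2-D plane (Mass / Variability) described above.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt[:2].T                   # excerpt coordinates in the plane
explained = (S**2 / (S**2).sum())[:2]    # variance share of each component
```

The `scores` array places each excerpt in the reduced plane, where correlations with perception ratings can then be examined.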
"for her polyphonic writings, a monument to suffering and courage in our time"
Friedrich Murnau’s Nosferatu has a screenplay by Henrik Galeen, based on Bram Stoker’s Dracula. Hans Erdmann composed the original music, which received mixed reviews when it was first presented in Berlin in the 1920s. Eighty years later, at least ten other music accompaniments have been composed. Why should anyone want to make yet another score to this film?
With the advent of sound recording and mechanically perfect playback, it became possible to hear a sound again and again. In the 1940s, Pierre Schaeffer and co-workers were spinning sounds on closed-loop turntables, listening with great attention, beyond the point of nausea, until the repeated unit of sound took on a novel perceptual quality: as if existing outside the flow of time, the sound materialised, stabilised, and became understood as an object (Schaeffer 1966).

This phenomenological discovery, followed by extensive typologies of sonic morphology, timbre, texture, and signification, laid the theoretical basis of experimental music composition, in particular acousmatic music. This, in its turn, enabled the digital audio revolution, sampling, the pop production pipeline, and the design of earcons.

While research in multimodal perception has a long history, especially the perception of audio-visual composites, the field has more recently gained considerable traction. Two examples are cross-modal effects of sound on food taste (Spence 2011), and sound design of auditory icons in screen-based interfaces (Gaver 1986). This is partly driven by advanced marketing methods predicting the 'just right' music detail attracting attention and positive valence to sales products, and partly by increased availability of psychophysiological and neurological research equipment.

The authors conduct research in sonocentric cross-modal perception and design. We are charting associations between visual colour and auditory timbre (Lindborg & Friberg 2015), between visual spikyness and sonic roughness (Liew, Lindborg et al. 2018), and the identification of sound sources in complex soundscapes (Lindborg 2016). Adhering to a research-led creative practice, we apply findings in creative artwork through data sonification (Liew & Lindborg in review).
Sound recording technologies have been around for more than a hundred and forty years. Composers have imagined and indeed created very long works – lasting days, years, or even of infinite duration – and conversely, very short works – second-long miniatures that metaphorically encapsulate huge corpora of music. Comparing two works, Leif Inge's 9 Beet Stretch (2002) and Johannes Kreidler's Compression Sound Art (2009), this paper reflects upon idea-based sonic art of extreme time durations: opposite points on a dimension of time scales from very long to very short, extending Stockhausen's 'unified time structure'. For long works, we posit that the perceptually defining characteristics are slowness, repetition, and continuity. For short works, it is recognizability and specificity that are most important. With this in mind, we argue that what connects the works by Inge and Kreidler is the overarching concept of iconicity, as enabled by technologies of appropriation.
19 Feb 2019 - talk to present own work & directions for new collaborations between SNU composition and instrumental departments (incl. Korean music).

Recording available at https://www.youtube.com/user/sonosofisms
Research in the perception of the sonic environment, or 'soundscape' as defined by R Murray Schafer, has three historic roots: music composition, psychoacoustics, and activism. They often combine, as in the work of Bernie Krause. Sound can inform us about the general health of an environment. This applies to aquatic ecosystems, where fish and mammal vocalisations, in particular, are negatively influenced by human-introduced noise. This talk will give an overview and a few examples from my own work doing underwater recordings in French Polynesia on a fellowship with TBA21.
Slides for my paper session presentation of Pacific Belltower sonification installation, exhibited at ICMC-EMW in Shanghai, October 2017
Slides for art research presentation at City University of Hong Kong, 12 April 2017
Slides for presentation at Si17 Symposium on Sound and Movement (part of Soundislands Festival)
Slides for presentation
In this presentation, Dr. Lindborg will discuss his approach to perceiving and designing the sonic environment and show examples from recent work. Since sensory processing is inherently multimodal, an attempt at knowing sound necessarily involves multiple disciplines. Research relies on systematicity, ecological validity, complementarity of methods, and the interdisciplinary integration of science and art. The main tools to investigate the mediating relationship of people and environment through sound are empiricism and psychophysics. Data sonification aims to make structures in complex data apparent to the auditory sense. Navigating between ‘ars musica’ and ‘ars informatica’, sonification is both a set of techniques and an aesthetic. PerMagnus will present experimental software and custom-made hardware for the auditory display of large geospatial and temporal datasets. The ‘soundscape’ concept emerged from an anti-modernist movement concerned with the aesthetics and preservation of natural environments. Over the past decade or so, urban soundscape research has rapidly expanded to embrace sonic design, city planning, and public health policy. In his empirical work, PerMagnus focusses on servicescapes, which are complex multimodal environments. Sound art has its roots in sculptural installation and interactive music. In parallel with ‘light’ in the visual arts, ‘sound’ is both material and medium. Our own voice and ears are fundamental tools for sonic exploration, which is then extended with the help of instruments, computers, and loudspeakers, to enable communication with fellow musicians and other acoustic beings in the social environment. Much as the urban soundscape naturally inhabits three dimensions, PerMagnus’ sonifications and electroacoustic compositions are often deployed in multichannel, physical installations.
Children and teachers are organisms involved in information exchange within an environment called a classroom. The biological task of the auditory system is to alert us to important changes. Audition is always active and specialises in sudden, extreme, or quickly approaching sounds that might necessitate action.
High intensity sound (even within legal levels) might induce physiological stress on hearing and voice organs when competing with ambient noise, psychophysiological stress on heart rate and metabolism, or psychological reactions such as annoyance.
A recent German study showed that the acoustical conditions in elementary classrooms often do not fit the specific needs of children, who are more sensitive to acoustic problems due to reverberation and noise than adults.
If the soundscape matters to school children, how can we create optimal acoustics in classrooms for communication, learning, and development? This is a great multidisciplinary challenge, calling for us to bridge environmental psychology, acoustic design, and education.
In this presentation, I will explain concepts such as SPL, reverberation radius, and Lombard effect, and share preliminary results from a pilot study in Singapore kindergartens.
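As a worked example of the reverberation radius (critical distance) mentioned above: it is the distance from a source at which direct and reverberant sound energy are equal, commonly approximated for an omnidirectional source as r_c ≈ 0.057·√(V/T60), with room volume V in m³ and reverberation time T60 in seconds. The classroom figures below are illustrative, not measurements from the pilot study:

```python
import math

def critical_distance(volume_m3: float, t60_s: float) -> float:
    """Reverberation radius for an omnidirectional source.

    Standard approximation: r_c = 0.057 * sqrt(V / T60),
    with V in cubic metres and T60 in seconds.
    """
    return 0.057 * math.sqrt(volume_m3 / t60_s)

# An illustrative 180 m^3 classroom with T60 = 0.8 s:
r = critical_distance(180.0, 0.8)   # ≈ 0.86 m
```

Beyond roughly a metre from the teacher, pupils in such a room hear mostly reverberant sound, which is one reason why lowering T60 helps speech intelligibility more than raising voice level.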
(slides for presentation)
mostly images, not much text
What makes Sonification an Artwork? Slides for a skype presentation
slides
Spectrum: Modern Icon. Sunday 6 April 2008 @ Esplanade Recital Studio. Works by Ligeti, PerMagnus Lindborg, and Chen Yi. Video recording (2008).
New conference formats are emerging in response to COVID-19 and climate change. Virtual conferences are sustainable and inclusive regardless of participant mobility (financial means, caring commitments, disability), but lack face-to-face contact. Hybrid conferences (physical meetings with additional virtual presentations) tend to discriminate against non-fliers and encourage unsustainable flying. Multi-hub conferences mix real and virtual interactions during talks and social breaks and are distributed across nominally equal hubs. We propose a global multi-hub solution in which all hubs interact daily in real time with all other hubs in parallel sessions by internet videoconferencing. Conference sessions are confined to three equally-spaced 4-h UTC timeslots. Local programs comprise morning and afternoon/evening sessions (recordings from night sessions can be watched later). Three reference hubs are located exactly 8 h apart; additional hubs are within 2 h and their programs are alig...
Introduction: It has proven a hard challenge to stimulate climate action with climate data. While scientists communicate through words, numbers, and diagrams, artists use movement, images, and sound. Sonification, the translation of data into sound, and visualization offer techniques for representing climate data, with often innovative and exciting results. The concept of sonification was initially defined in terms of engineering, and while this view remains dominant, researchers increasingly make use of knowledge from electroacoustic music (EAM) to make sonifications more convincing.
Methods: The Aesthetic Perspective Space (APS) is a two-dimensional model that bridges utilitarian-oriented sonification and music. We started with a review of 395 sonification projects, from which a corpus of 32 that target climate change was chosen; a subset of 18 also integrate visualization of the data. To clarify relationships with climate data sources, we determined topics and subtopics in a hierarchi...
This article analyses recent developments of sonic art in Hong Kong. Based on a series of in-depth interviews with 23 local sonic art practitioners over the past six years, we discuss the contextual understanding of what constitutes ‘sonic art’ among local practitioners, alongside neighbouring terms such as ‘electroacoustic music’, ‘experimental music’ and ‘computer music’. We also give a description of the new generation of sonic art practitioners who emerged over the past ten years, contributing to a renewed sense of professionalism. These developments can be understood in relation to four aspects: a strong cluster of interrelated higher education institutions; a shift in public policy supporting ‘art and tech’ projects and cultural organisations; specific individuals, practitioners deeply invested in what we here define as sonic arts, acting as passeurs, connecting underground and academic milieux; and the international integration of Hong Kong-based sonic artists and promoters.
We report results from an investigation into the relationships between acoustic performance, price, and perceived quality of earphones. In Singapore today, the most common situation where people listen to music is while commuting; however, such environments generally have high ambient noise levels. A survey (N=94) of listener habits on buses and trains was conducted. Results showed that people use a wide range of earphones, both in terms of price and measurable acoustic performance. Five typical earphone models were identified and employed in a perceptual experiment (N=15). Volunteers rated various aspects of earphone quality while listening to music under two conditions: studio silence and a reproduced commuter environment. Results showed that participants displayed a strong preference towards in-ear earphones, which can be attributed to their better acoustic isolation compared with on-ear earphones. People tend to describe the music listening experiences in terms of sonic clarity a...
Loki’s Pain is an immersive 3D audio installation artwork, a sonification of seismic activity. Visitors take the place of Loki, who was punished by the gods and caused earthquakes. We designed an auditory display in the shape of a hemi-dodecahedron and built a prototype with a low-budget, DIY approach. Seismic data were retrieved from the Internet. Location, magnitude, and epicentre depth of hundreds of recent earthquakes were sonified with physical modelling synthesis into a 10-minute piece. The visitor experience was evaluated in a listening experiment (N = 7), comparing the installation with a version for headphones. Differences on eight semantic scales were small. A content analysis of focus group discussions nuanced the investigated topics, and qualitative interpretation strengthened the quantitative findings. Verbal expressions of immersivity were stronger in the installation, which stimulated longer and more detailed responses. Aspects such as audio quality, the structure's...
Audio quality is known to cross-modally influence reaction speed, sense of presence, and visual quality. We designed an experiment to test the effect of audio quality on source localization. Stimuli with different MP3 compression rates, as a proxy for audio quality, were generated from drum samples. Participants (n = 18) estimated the position of a snare drum target while compression rate, masker, and target position were systematically manipulated in a full-factorial repeated-measures experiment design. Analysis of variance revealed that location accuracy was better in wide target positions than in narrow, with a medium effect size; and that the effect of target position was moderated by compression rate in different directions for wide and narrow targets. The results suggest that there might be two perceptual effects at play: one, whereby increased audio quality causes a widening of the soundstage, possibly via a SMARC-like mechanism, and two, whereby it enables higher localization accuracy. In the narrow target positions in this experiment, the two effects acted in opposite directions and largely cancelled each other out. In the wide target presentations, their effects were compounded and led to significant correlations between compression rate and localization error.
The SI13 NTU/ADM Symposium on Sound and Interactivity in Singapore provided a meeting point for local researchers, artists, scholars and students working creatively with sound and interactivity, as well as the foundation for an issue exploring sound and interactivity in the Southeast Asian country.
Figure 1. Snapshots from the SI13 exhibition, which could be visited throughout the symposium from 14–16 November 2013.
The School of Art, Design and Media of Singapore’s Nanyang Technological University hosted the Symposium on Sound and Interactivity from 14–16 November 2013. A total of 15 artworks and 14 papers were selected by a review committee for presentation by 24 active participants during the three-day symposium. While all but four of the participants are residents of the island, they represent seventeen different countries, thus reflecting the cosmopolitan nature of Singapore in general and of sound artists and researchers in particular.
There is no exact model for the relationship between the autonomic nervous system (ANS) and evoked or perceived emotion. Music has long been a privileged field for exploration, while the contribution of soundscape research is more recent. It is known that health is influenced by the sonic environment, and the study presented here aimed to investigate the nature and strength of relationships between soundscape features and physiological responses linked to relaxation or stress. In a controlled experiment, seventeen healthy volunteers moved freely inside a physical installation listening to soundscape recordings of nature, urban parks, eateries, and shops, reproduced using 3D ambisonic techniques. Physiological responses were continuously captured, then detrended, downsampled, and analysed with multivariate linear regression onto orthogonal acoustic and perceptual stimuli features that had been previously determined. Measures of Peripheral Temperature regressed onto SoundMass, an a...
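The analysis pipeline described above (detrend, downsample, regress onto orthogonal stimulus features) can be sketched as follows. The signals are synthetic, and the sampling rate and feature count are illustrative assumptions, not data from the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 32                                     # assumed raw sampling rate, Hz
raw = np.cumsum(rng.normal(size=fs * 90))   # 90 s synthetic drifting signal

# Detrend: remove the linear drift before analysis.
t = np.arange(raw.size)
slope, intercept = np.polyfit(t, raw, 1)
detrended = raw - (slope * t + intercept)

# Downsample to 1 Hz by block averaging (one value per second).
y = detrended.reshape(-1, fs).mean(axis=1)

# Multivariate linear regression onto two orthogonal stimulus features
# (stand-ins for the acoustic/perceptual features), plus an intercept.
X = np.column_stack([np.ones(y.size), rng.normal(size=(y.size, 2))])
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients in `beta` would then be examined for significant associations between the physiological signal and the stimulus features.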
Sound is a multi-faceted phenomenon and a critical modality in all kinds of servicescapes. At restaurants, our senses are intensively stimulated. They are social places that depend on acoustic design for their success. Considering the large economic interests, surprisingly little empirical research on the psychoacoustics of restaurants is available. Contributing to theory building, this article proposes a typology of designed and non-designed sonic elements in restaurants. Results from a survey of 112 restaurants in Singapore are presented, with a focus on one element of the typology, namely interior design materials. The collected data included on-site sound level, audio recordings from which psychoacoustic descriptors such as Loudness and Sharpness were calculated, perceptual ratings using the Swedish Soundscape Quality protocol, and annotations of physical features such as Occupancy. We have introduced a measure, Priciness, to compare menu cost levels between the surveyed restaurants. Correlation analysis revealed several patterns: for example, that Priciness was negatively correlated with Loudness. Analysis of annotations of interior design materials supported a classification of the restaurants in categories of Design Style and Food Style. These were investigated with MANOVA, revealing significant differences in psychoacoustic, physical, and perceptual features between categories among the surveyed restaurants: for example, that restaurants serving Chinese food had the highest prevalence of stone materials, and that Western-menu places were the least loud. Some implications for managers, acoustic designers, and researchers are discussed.
14TH INTERNATIONAL SYMPOSIUM ON ELECTRONIC ART. In Project Time, A Theatre of Music, we examined 'time' as the domain where the old and the new coexist in Singapore, articulated through the meeting ...
Citation: Lindborg, P. (2007). Composers' rights in a digital world. Intellectual Property Rights and Copyrights in Music Field (2007: Hanoi), pp. 1–4.
Citation: Lindborg, P. (2008). Reflections on aspects of music interactivity in performance situations. eContact, 1–10.
This paper studies the use of Gumowski-Mira maps in the sonic arts. Gumowski-Mira maps are a family of chaotic systems that produce many organic orbits resembling cells, flowers, and other life forms, which has prompted mathematicians, and eventually artists, to study them. These maps carry potential for use in the sonic arts, but until now such use has been non-existent. The paper describes two ways of using Gumowski-Mira maps: for synthesis and for spatialization. The synthesis approach, which runs in real time, takes the dynamical system output as the real and imaginary input to an inverse Fourier transform, thus directly sonifying the algorithm. The spatialization approach projects the shapes of Gumowski-Mira maps across the acoustic space, using the first 128 iterations of each map as audio particles. The shapes can change based on the maps' initial parameters. The maps are explored in live performance using Leap Motion and Cycling '74's MIRA for iPad as control interfaces for audio processing in SuperCollider. Examples are given in two works, Cells #1 and #2.
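A minimal sketch of the approach described above: iterating the standard Gumowski-Mira recurrence for 128 steps and feeding the orbit as the real and imaginary parts of a spectrum to an inverse FFT. The parameter values (alpha, sigma, mu) and starting point are illustrative assumptions, as is the grain construction; the paper's actual implementation runs in SuperCollider.

```python
import numpy as np

def gumowski_mira(x0, y0, mu, alpha=0.008, sigma=0.05, n=128):
    """Iterate the Gumowski-Mira map and return the orbit as two arrays.

    f(x)  = mu*x + 2*(1 - mu)*x^2 / (1 + x^2)
    x_new = y + alpha*(1 - sigma*y^2)*y + f(x)
    y_new = -x + f(x_new)
    """
    def f(x):
        return mu * x + 2.0 * (1.0 - mu) * x * x / (1.0 + x * x)

    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x_new = y + alpha * (1.0 - sigma * y * y) * y + f(x)
        y_new = -x + f(x_new)
        xs[i], ys[i] = x_new, y_new
        x, y = x_new, y_new
    return xs, ys

# First 128 iterations, as in the spatialization approach described above
xs, ys = gumowski_mira(0.1, 0.1, mu=-0.2)

# Sonification sketch: treat the orbit as the real/imaginary parts of a
# spectrum and take an inverse FFT to obtain one short audio grain.
spectrum = xs + 1j * ys
grain = np.fft.ifft(spectrum).real
print(len(grain))  # 128
```

For spatialization, the same (xs, ys) pairs could serve directly as particle positions in the acoustic space.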
"The article presents the work of Frédéric Voisin, in particular the 'Neuromuse' project, which is presented from several points of view (the techniques used, sound examples). A reflection then attempts to establish the significance of this approach for contemporary musical creation, along with directions for future work. Finally, a transcript of an interview with Voisin is given."
The musical aspects of rhetorical speech go beyond a Ciceronian pronunciatio; indeed, classicist composers skilfully employed rhetorical form, tropes, and figures to write 'convincing' music. In the 20th century, the use of such means became associated with totalitarian regimes and was consciously abandoned by progressive musicians. Likewise, contemporary politicians, at least in democratic fora, resort to a mode of delivery that Aristotle would have called iskhnos, "dry". What is the role of musicality in today's speech-making? What can we learn about the nature of political power through a music-focused analysis of vocal delivery? These questions are relevant to my compositional ...
New conference formats are emerging in response to COVID-19 and climate change. Virtual conferences are sustainable and inclusive regardless of participant mobility (financial means, caring commitments, disability), but lack face-to-face contact. Hybrid conferences (physical meetings with additional virtual presentations) tend to discriminate against non-fliers and encourage unsustainable flying. Multi-hub conferences mix real and virtual interactions during talks and social breaks and are distributed across nominally equal hubs. We propose a global multi-hub solution in which all hubs interact daily in real time with all other hubs in parallel sessions by internet videoconferencing. Conference sessions are confined to three equally spaced 4-h UTC timeslots. Local programs comprise morning and afternoon/evening sessions (recordings from night sessions can be watched later). Three reference hubs are located exactly 8 h apart; additional hubs are within 2 h and their programs are aligned ...
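The timeslot arithmetic behind the scheme can be sketched briefly: three 4-hour sessions spaced 8 hours apart in UTC produce the same set of local start hours at any hub that sits a multiple of 8 hours from UTC, only rotated. The specific slot start times below are illustrative assumptions; the proposal fixes only that the slots are equally spaced.

```python
# Assumed slot starts (UTC hours); the scheme requires only equal 8-h spacing.
SLOT_STARTS_UTC = [0, 8, 16]
SLOT_HOURS = 4

def local_sessions(utc_offset):
    """Return the local start hours of the three daily sessions for a hub."""
    return [(start + utc_offset) % 24 for start in SLOT_STARTS_UTC]

# Three reference hubs exactly 8 h apart see the same local pattern, rotated.
for name, offset in [("Hub A (UTC+0)", 0), ("Hub B (UTC+8)", 8), ("Hub C (UTC-8)", -8)]:
    print(name, local_sessions(offset), f"each {SLOT_HOURS} h long")
```

This rotation is what lets each hub keep morning and afternoon/evening sessions live while recording the overnight slot.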
The idea for When We Collide sprang from Douglas Hofstadter's metaphor of creativity as the meeting between records and record players, which appears in his 1979 book "Gödel, Escher, Bach: An Eternal Golden Braid". In our case, the records are soundfiles, whilst the record player is a generative system. The player analyses, selects, mixes, transforms, and spatialises the material created by the composers (monophonic and quadraphonic soundfiles). The system negotiates between algorithms that tend towards monotony (in terms of loudness, spatialisation, and frequency spectrum) and algorithms that tend towards variability (in terms of soundfiles, transformations, and scenes). In a nutshell, the installation is a space where sonic ideas collide and co-exist.
"This text concerns the confluence of musical creation and the cognitive sciences. The main aim of the work was to carry out reconnaissance in the field. The present text is therefore ..."
The smellscape is the olfactory environment as perceived and understood, consisting of odours and scents from multiple smell sources. To what extent can audiovisual information evoke the smells of a real, complex, and multimodal environment? To investigate smellscape imagination, we compared results from two studies. In the first, onsite participants (N = 15) made a sensory walk through seven locations of an open-air market. In the second, online participants (N = 53) made a virtual walk through the same locations reproduced with audio and video recordings. Responses in the form of free-form verbal annotations, ratings with semantic scales, and a 'smell wheel' were analysed for environmental quality, smell source type and strength, and hedonic tone. The degree of association between real and imagined smellscapes was measured through canonical correlation analysis. Hedonic tone, as expressed through frequency counts of keywords in free-form annotations, was significantly associated, ...
Environmental sounds are a key component of the human experience of a place as they carry meanings and contextual information, together with providing situational awareness. They have the potential to either support or disrupt specific activities as well as to trigger, to inhibit, or simply to change human behaviors in context. The experience of acoustic environments can result in either positive or negative perceptual outcomes, which are in turn related to well-being and Quality of Life. In spite of its relevance to the holistic experience of a place, the auditory domain is often not given enough prominence in environmental psychology studies. Environmental sounds are typically considered in their negative perspective of “noise” and treated as a by-product of society. However, the research (and practice) focus is gradually shifting toward using environmental sounds as mediators to promote and enrich communities’ everyday life. Designers explore how natural sounds can be mixed into ...
Loki’s Pain is an immersive 3D audio installation artwork, a sonification of seismic activity. Visitors take the place of Loki, who was punished by the gods and caused earthquakes. We designed an auditory display in the shape of a hemidodecahedron and built a prototype with a low-budget, DIY approach. Seismic data were retrieved from the Internet. Location, magnitude, and epicentre depth of hundreds of recent earthquakes were sonified with physical modelling synthesis into a 10-minute piece. The visitor experience was evaluated in a listening experiment (N = 7), comparing the installation with a version for headphones. Differences on eight semantic scales were small. A content analysis of focus group discussions nuanced the investigated topics, and qualitative interpretation strengthened the quantitative findings. Verbal expressions of immersivity were stronger in the installation, which stimulated longer and more detailed responses. Aspects such as audio quality, the structure's ...
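To make the sonification idea concrete, here is a hypothetical mapping from one seismic event (location, magnitude, epicentre depth) to synthesis control parameters. Every mapping choice below is an illustrative assumption, not the installation's actual design, which used physical modelling synthesis.

```python
import math

def quake_to_synth(lat, lon, magnitude, depth_km):
    """Map one seismic event to hypothetical synthesis parameters.

    All mapping choices are illustrative assumptions.
    """
    # Magnitude is already logarithmic, so use it directly as a
    # loudness control squashed into [0, 1].
    amplitude = max(0.0, min(1.0, magnitude / 9.0))
    # Deeper epicentres -> lower, darker excitation (hypothetical choice).
    frequency_hz = 220.0 / (1.0 + depth_km / 100.0)
    # Longitude -> azimuth, latitude -> elevation, for panning the event
    # over a hemispherical loudspeaker array.
    azimuth = math.radians(lon)
    elevation = math.radians(max(0.0, lat)) / 2.0
    return {"amp": amplitude, "freq": frequency_hz, "azi": azimuth, "ele": elevation}

# Example event: a moderate quake near Iceland at shallow depth
params = quake_to_synth(lat=64.1, lon=-21.9, magnitude=5.2, depth_km=10.0)
print(params)
```

Feeding hundreds of such parameter sets to a synthesis engine over time is the general shape of a parameter-mapping sonification of a seismic catalogue.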


The presentation is a progress report on the design and installation of SoundLab [1], a physical art/research space with a hemispherical loudspeaker array dedicated to high-spatial-resolution audio at the School of Creative Media, City University of Hong Kong, initiated in November 2020. We also introduce a study of the local context of sonic art in Hong Kong and of possible future directions for the genre in the region, to which SoundLab aims to contribute through its research and art as well as its teaching and outreach activities.