
On Visual / Sonic Form

Jaime Munárriz Ortiz
Universidad Complutense de Madrid
c/ Greco, 2, Madrid 28040 - SPAIN
34 91 3943653
munarriz@art.ucm.es

ABSTRACT
Dual-media composition and live performance need special tools and new strategies. We should be able to control and manipulate form, structure, and micro and macro elements in both media simultaneously. As there is no existing commercial platform that can fulfill these requirements, we have to design and build a suitable tool from scratch. I will analyze different alternatives in software and hardware, and describe several prototypes I have developed and tested on stage over the last five years. We will see how these tools work, and focus on the conceptual difficulties, as the technical aspects can be solved with ease.

Keywords
Form, composition, visual, sonic, PureData.

1. INTRODUCTION
Form constitutes the structure of a visual / sonic piece. Form can be divided into parts. Parts can be subdivided into smaller entities, reaching the micro-cosmos of notes and even further, into grains and small particles. A piece evolves in time due to its structure. Form guides change and helps understanding of the whole. Working with form in visual and audio media needs special tools, and new ways of thinking when composing.

2. ON VISUAL AND SONIC FORM
Antecedents
As an experimental musician and visual artist, I found myself working on different scenes, with a dual role. As a musician, I was recording and playing live, often with previously elaborated projections. The complex physical and mental actions involved in playing music, even in a modern computer-based environment, prevent the musician from controlling the visual projections. This activity is usually performed by another person, or simply triggered and left running by itself. As a visual artist, coming from a fine arts background, my work has evolved into photography, video and digital image, and I found myself performing live at art institutions and festivals. Live visuals, controlled and interacted with by the artist, are consolidating as a new genre. This artistic work has been enriched by my professional work in theater and stage projection, where I have been able to develop and test new tools and maintain contact with the industry and its expensive products.

The need to unify
This dual work slowly made clear the need to combine both compositional and performing activities. It was evident that I had to develop a system that could allow me to create sonic and visual material simultaneously, with enough freedom to perform live, triggering events, modifying existing material, remixing and interacting with both media. Analyzing the existing platforms that could suit these tasks, I soon realized that none of them offered enough power to compose freely and simultaneously in both media, nor did they have the capabilities necessary to perform live with fast synchronization and enough human interaction to make it a true live act, and not just playback.

Dual composing / dual thinking
In the first tests I soon found out that it is really difficult to keep yourself focused on both media at the same time. The mind quickly concentrates on one composition area, forgetting about its relationship with the other part. That made clear the need to build a system with true interconnectivity, a system that facilitates, and almost obliges, working and thinking about sound and visual elements at the same time.
Systems tested
In order to build this system I have been testing several platforms, audio tools, programming environments and different physical controller alternatives. Each system is typically good at some tasks, but not so good at others. The dual system needs a platform that works well with both media, and also allows a good integration of both parts of the material. This balance has proved difficult to attain. Some of the difficulties found are: poor overall performance, separate ways of composing and manipulating the data, configurations too complex for live acting, and the high price of non-open-source software.

The different systems were each used on a global project, named "Suites". These Suites have been composed, performed and recorded. Each system gave birth to new ideas that suggested changes at the technical level, and even a change of platform for the next project. This process has allowed a complete and real test of the designed systems, with stage and public confrontation, which makes the experience more valuable. All the systems have developed into valid ways of dealing with the problem, but the problem's core, dual composition and performing, is still far from a perfect solution.

Suite Zero. PureData on sound, Processing on visuals.
The Pd patches allowed an alternative way of thinking about form, structure, evolution and micro/macro elements. Sound seemed a bit low in quality compared with the common virtual instruments and processors. This could of course be reinforced with some VST plugins triggered by Pd. In the end I kept the minimalistic and rough sound of PureData. Processing showed some serious problems with video performance. Drawing forms behaved properly, and I could attain good synchronization when triggering visual elements from the Pd composer patch, but when playing video clips the frame rate fell to really awful levels. QuickTime for Java was eating all the CPU processing power, and even with small and heavily optimized video clips I couldn't get a nice performing platform. Both programs were running on the same machine, but this was not the problem, as the video slowness appeared even without Pd running. The synchronization was done over OSC, really fast and totally customizable.

Suite #01 "Ultramar". Audio sequencer, Isadora on visuals.
The bad video performance made me switch to a commercial program, Isadora, developed for dance and stage projection. On a short time schedule, I could easily develop a whole set of visual pieces, with several layers of graphic elements and video. I had some minor problems, but I could finish the suite in time for a live performance at an art festival. One laptop was used for sound, another for visuals, wired by MIDI interfaces and cables. The sound was composed and played live in a common audio sequencer. For live playing the sequencer was sending MIDI notes to Isadora. This allowed for changing scenes, swapping videos and other synchro effects.
A knob controller was also used, which could alter Isadora parameters and at the same time modify sound effects. A keyboard could trigger sounds and corresponding video clips. This system was technically too complicated for live playing. It worked, but I had to think about the signal flow often, and after some time the overall system seemed cryptic. The dual controller configuration seemed really promising, but I found I had to think about and plan the dual actions it generated really carefully beforehand in order to attain meaningful playability.

Stage projection: "Brokers". Jitter on visuals.
A commercial theater project got me into Max/Jitter work. I had to develop a system for live control of the visuals generated on a three-screen LED stage design. Even without previous experience with Jitter, I had no real difficulty building this program, designed from scratch for theater performing. I didn't delve into the matrix system, the real strength of Jitter, but the video flow was nice and I could take advantage of my PureData knowledge to build a global structure control system based on colls.

As a teacher at the Fine Arts faculty in Madrid, I try to push open source alternatives for my students and workshops. This made me think of porting my Jitter experience to PureData, in order to transmit this model to my students. GEM is no Jitter; it is a completely different graphic system. I had to learn it from scratch. I looked at how to play, mix and layer video, and then I played with the native elements inside GEM, like circles and rectangles.

Suite #02 "Mapslids". Ableton Live on sound, PureData/GEM on visuals.
Working on my next suite, which I planned to perform at a mini gig in South America, I developed a complex system that is close to my ideal dual composing / performing tool. One problem I had to deal with in previous systems was the need to close and open different pieces in two separate programs. This made live performing quite stressful, with overly long silent interludes, and prone to silly errors. So I designed a system that could open and close the needed patches in the background, controlled from the main application. For this system I relied on Ableton Live as the main live application, the obvious choice for live audio performing. Opening and closing patches from MIDI events was difficult to achieve, but once finished it worked with awesome reliability and speed. The visual patches now open and close smoothly in the background, as if running in a single program.

When playing and composing music and visuals, Live clips trigger MIDI events, generating sound from virtual synthesizers or samples, and simultaneously triggering visual events generated by the PureData patches. In this way every single event used for performing has a direct relationship with a visual event. Synchronization and the dual concept are perfect. Sound textures, long and evolving, don't have such a direct, punctual relationship with video events. For this kind of material I opted for visual textures, abstract and evolving videos that showed a formal correspondence with these continuous sounds. Percussive and short elements have a direct relationship with fast visual events, like an appearing dot or a shaking line. When composing with this system one needs to prepare and adapt each patch in order to generate unique and distinct elements for each piece. Afterwards, composing becomes a unique experience, where sonic events generate their own visual counterpoint.
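The bridge that makes this possible can stay very small on the Pd side. As a minimal sketch (the note numbers and send names are hypothetical, not the ones used in the suite), a patch can listen to the notes arriving from Live over the MidiYoke port and turn each dedicated note into a named internal send for the visual patches:

#N canvas 0 0 480 320 10;
#X obj 30 30 notein 1;
#X obj 30 70 stripnote;
#X obj 30 110 sel 36 38 40;
#X obj 30 150 s piece1-dot;
#X obj 130 150 s piece1-line;
#X obj 230 150 s piece1-scene;
#X text 30 200 notes arriving on MIDI channel 1 from the MidiYoke port;
#X text 30 220 note 36 flashes a dot while 38 shakes a line and 40 changes the scene;
#X connect 0 0 1 0;
#X connect 0 1 1 1;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 2 1 4 0;
#X connect 2 2 5 0;

Each piece gets its own small routing patch of this kind, so a note placed in a Live clip plays a synthesizer and, at the same moment, addresses the visual event dedicated to that piece.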
You have to tweak the GEM patches, of course, as you compose, in the same way as you tweak synth patches while you play and look for the right sounds. The total machine is built and ready. Ableton Live is commercial software, and of course I would prefer a free alternative, especially for educational purposes. For quick prototyping it became invaluable, being able to send MIDI notes to its own tracks and also to a MidiYoke channel received by PureData. A patch inside Pd was converting these notes into internal sends, so each patch could react to its own dedicated notes. However, Live was born as a Max/MSP patch, so it shouldn't be difficult to build a sound-controlling application in PureData suited to this kind of work, allowing live triggering of MIDI and audio clips, and sending this information also to GEM.

3. FORM COMPOSITION
Current composing tools facilitate the creation of linear or evolving pieces. Structure is built by adding parts brick by brick, and it is easy to copy and repeat phrases and groups of elements, then add some new elements. This way of composing, based on multi-track recording, facilitates some kinds of structural thinking, but can be an obstacle to overall form design. Of course you can think in terms of global structure and then build it piece by piece, but each tool pushes in its own direction, and these sequencers simply make some ways of working easier while keeping us from elaborating sonic material in different ways.

When confronted with a blank sheet of paper, it's easy to focus on the global structure. You can think of intros, main theme, bridge, etc. You can draw schemes that show the relationship between parts. You can begin with a simple pattern, like ABA, and then dive inside its evolving structure. Or you can think of masses of elements that mutate and transform over time. This macro level of composition pushes toward part differentiation, and it is easy to conceive of contrasting parts that use different tempi, signatures or scales. This kind of macro structure adds variety and interest to the music, offering a higher degree of information to the listener.

Sequencer music tends to be linear. Tempo or signature changes can be made, but we are usually too lazy to think of them. At the micro level, the same happens: it's too easy to just copy a part, and we forget that adding variations is quite necessary to keep the listener's interest. Mechanical quantizing of events poses the same problem: it is easy to groove-quantize or humanize musical events, but current tools push us toward rigid grid quantizing. Mechanical music is fine, but it may lead us to forget other musical resources that could take us into refreshing territories. Just try a 7/8 pattern after a 4/4 part, switch from 128 bpm to 96 bpm, and then go up to 178 bpm with a 13/8, and see what happens. Complex structures can be interesting and stimulating. The mind appreciates change and complex levels of information.

Modern musicians often feel the need to build their own music tools. I felt I needed a way to explore form at its highest level, and also its relationship with lower levels, so I embarked on designing and building a composing tool that could handle form, parts and global structure. PureData was chosen because of its openness, direct relation with sound and MIDI, built-in tempo control, real-time processing, and enough data containers to define, alter and use the structural elements of a composition.
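As a hint of what such a form patch can look like, here is a minimal sketch (the tempo, the eight-bar cycle and the send names are invented for this example): a metro counts bars, and selected counts broadcast section changes that both the sound patches and the visual patches can receive.

#N canvas 0 0 520 360 10;
#X msg 30 30 1;
#X msg 70 30 0;
#X obj 30 70 metro 2000;
#X obj 30 110 f;
#X obj 100 110 + 1;
#X obj 170 110 mod 8;
#X obj 30 160 sel 0 4;
#X obj 30 200 s section-A;
#X obj 130 200 s section-B;
#X text 30 250 one bang every 2000 ms equals one bar at 120 bpm in 4/4;
#X text 30 270 bar 0 of each 8-bar cycle starts section A and bar 4 starts section B;
#X connect 0 0 2 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 3 0 6 0;
#X connect 4 0 5 0;
#X connect 5 0 3 1;
#X connect 6 0 7 0;
#X connect 6 1 8 0;

In a fuller version the bar length and the section boundaries would be read from those data containers rather than hard-coded, so the same clock could drive different global forms.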
4. SIMULTANEOUS SONIC / VISUAL COMPOSITION
Dual sonic / visual composition poses a great challenge, even for an artist with extensive experience in both fields. Sound and motion visuals share a common element: time. Evolution in time, changes, repetition and variations constitute the essence of both media. We might think this would lead to a common workflow, a parallel composing experience, but the truth is that we are used to quite different sets of mental and physical tools for working in each field. When trying to compose on both levels simultaneously, it is difficult to keep one's mind on both tasks, and we usually end up focusing on one activity and forgetting about the other. We need tools that oblige us to deal with both media at the same time, establishing strong connections between both types of material. The good part of inventing our own tools is that they are unique and correspond exactly to our own interests. By developing a Form Exploring System we can define our own connections between both worlds, and so establish our own personal audiovisual language.

When confronted with a task as complex as this kind of dual thinking, it may help to start from a simple departure point. Reductionism may help, minimizing the number of factors involved in the creation process. Choosing a reduced set of visual and sonic elements can be a good strategy, helping to focus on the essence of the dual composing problem. My choice was abstract textures (visual-scapes / sonic-scapes) and punctual elements (short sounds / discrete visual elements). Textures may share a common mood and evolution in time. Punctual elements are the best way to establish a direct relationship between sound and image, because they can appear suddenly, and evolve and extinguish in a short and perceptible way. Our perceptual system can relate both phenomena easily. Rhythm, or percussive material, appears simultaneously on both levels as repetitions and patterns in time.

These two layers appear as background and foreground. Textures constitute the background, punctual elements the foreground. However, by choosing really minimal punctual elements, primary visual and sonic material, we build a foreground that is not too meaningful. This may lead to an exchange of roles, moving from back to front, drawing our attention, or escaping from it. Working on purpose with material that is not too meaningful, rather than with a melody, a lead vocal or a human figure, creates an ambiguity and a sliding of planes and levels of attention that adds an elusive interest to the composed pieces.

On these two layers we can superimpose effects, such as reverb or blur, but it is necessary to find an equivalence between both systems. I don't like to talk about synesthesia, which has become fashionable lately, because it is a neurological condition and I think it constitutes quite a different subject. Our interest lies in equivalences between sonic and visual effects. For instance, an echo in sound may be paired with a visual repetition of an element, with some fading as decay. It's not easy to find an equivalence for every effect we can use, so it is necessary to start with a few well-chosen resources, a reduced and personal palette.

5. TRIGGERING EVENTS
The connection between the sonic part of the system and the visual generator can be made using several protocols. MIDI, an old technology, proves quite useful for this purpose, as all audio hardware and software and most visual tools use MIDI as a core technology.
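Before going into the transport details, the echo equivalence mentioned in the previous section can be made concrete with a very small patch. The sketch below (send names invented for this example) takes one incoming trigger, delivered by MIDI, OSC or an internal send, and fans it out into three visual repeats whose opacity decays like an echo tail:

#N canvas 0 0 480 320 10;
#X obj 30 30 r hit;
#X obj 30 70 t b b b;
#X obj 120 110 del 250;
#X obj 210 110 del 500;
#X msg 30 150 1;
#X msg 120 150 0.5;
#X msg 210 150 0.25;
#X obj 30 200 s flash-alpha;
#X text 30 250 every repeat redraws the flash with a lower alpha value;
#X text 30 270 the audio echo runs in parallel from the same hit trigger;
#X connect 0 0 1 0;
#X connect 1 0 4 0;
#X connect 1 1 2 0;
#X connect 1 2 3 0;
#X connect 2 0 5 0;
#X connect 3 0 6 0;
#X connect 4 0 7 0;
#X connect 5 0 7 0;
#X connect 6 0 7 0;

A visual patch listening on a flash-alpha receive would redraw the punctual element with that opacity, while the sonic side derives its echo from the same trigger through an ordinary delay line.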
We don't really need a great data rate between the two systems, and usually we can just send an "event triggered" signal; MIDI notes are enough for these actions. Virtual MIDI ports are fine for working inside the same machine, but external MIDI interfaces may be used to separate the visual and audio machines, and even to extend the system into several connected units, adding screens and projectors or even more human performers. OSC, as an open protocol, is perfect for more demanding messages. You can define your own kinds of signals and build your tools around their flow. However, not all the available tools can handle OSC, so its use may depend on its availability. When using a single development environment, such as PureData, Processing or Max/MSP/Jitter, you can establish your own internal signals, like the useful send/receive objects in Pd, which can connect any event generator to any media generator. MIDI, OSC and internal sends can be converted into one another with ease, so we can adapt any system to work with any other. PureData shows up as the perfect tool for this kind of conversion, as it can handle MIDI, OSC and internal sends with speed and transparency.

GEM visual possibilities
Live visuals need to be fast and effective. Commercial VJ software, such as Resolume or Arkaos, focuses on clip triggering, layer superimposition and fancy effects. This software can of course be used for dual sonic / visual composition, using MIDI for triggering clips and control change messages for modifying effects. These tools are fast and reliable, but we may find the same problem as we did with audio sequencers: they are intended for one kind of work, and may not be good for a different approach. The need to build our own tool arises again. A personal visual composing tool must not focus on extreme effects, but on the relation we need to establish with the sonic material. Complex effects and superimposition may be done offline, giving priority to real-time reaction and tight integration with audio events. The main tasks that need to be solved are video triggering, video mixing and adding, graphics with alpha channel, and geometric form generation. These basic visual elements may constitute a basic palette, suitable for live performing. Of course the aesthetics of the graphic material used will come from the creative universe of the composer, as his sounds or melodies do in the musical field. Envelopes and fades may be good tools for establishing a correspondence between sonic and visual elements, as they relate to the timing evolution of dissimilar events. Random moves, elements chasing one another, clicks and glitches may add more coherence and integration between both parts of the composition.
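Of these tasks, video triggering is the simplest to sketch. The following minimal GEM chain (the clip name and the send name are placeholders) opens and auto-plays a movie on a textured rectangle whenever something bangs the open-clip receive:

#N canvas 0 0 560 400 10;
#X obj 30 30 gemhead;
#X obj 30 70 pix_film;
#X obj 30 110 pix_texture;
#X obj 30 150 rectangle 4 3;
#X obj 260 30 r open-clip;
#X msg 260 70 open clip01.mov \, auto 1;
#X msg 420 70 create \, 1;
#X obj 420 110 gemwin;
#X text 30 210 gemhead -> pix_film -> pix_texture -> rectangle draws the clip;
#X text 30 230 a bang to open-clip loads the file and starts playback;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 4 0 5 0;
#X connect 5 0 1 0;
#X connect 6 0 7 0;

Video mixing, alpha graphics and the native geometric forms could then grow from the same chain, with objects such as pix_mix and alpha, all of them listening to the sends that already drive the sound.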
6. CONCLUSION
My work in both music and visual creation has led to the need to create a system for composing and performing live on both levels simultaneously and coherently. I have built and tested systems based on several platforms and tools, some free and some commercial. PureData has shown itself to be one of the best systems for this purpose. It is open source, it can deal nicely with audio, video, graphics, MIDI and OSC, and it has some good objects for controlling form, its components and its evolution. Artists these days feel the need to build their own tools, and PureData is one of the best platforms for this purpose.

7. ACKNOWLEDGMENTS
Thanks to Miller Puckette for freeing and sharing this technology, and to the PureData community.