
Skalldans, audiovisual performance.

PerMagnus Lindborg
Nanyang Technological University, Singapore / KTH Royal Institute of Technology, Stockholm
permagnus@ntu.edu.sg / www.permagnus.net

Abstract
Skalldans is an audiovisual improvisation piece for solo laptop performer, developed in Max [2]. Sound and video syntheses are piloted with a MIDI interface, a camera, and a Wiimote; the audiovisual streams also influence each other. The proceedings paper discusses some of the hardware and software points of interest, for example how the audio and video syntheses are piloted, how the streams interact, and the camera tracking method with its linear regression stabiliser. It also touches upon the sources of inspiration for the piece.

Background
‣ aiming for greater involvement of the performer in laptop performance [1];
‣ an accident --> a cranial MRI scan --> the 3D “skull” object;
‣ the patcher ‘framework’ has been developed in stages since 2008.

Motion tracking
‣ iSight camera (built into the laptop) pointed towards the performer’s face;
‣ blob tracking [3] of the performer’s face relative to the laptop, mapped to the 3D skull’s position on screen {x, y};
‣ tracking continuity and stability improved by linear regression: a prediction from the preceding 5 frames to the most likely ‘next frame position’ of the target blob (see the sketch after this list);
‣ Wiimote (hidden in a cap): the performer’s head angle is mapped directly onto the 3D skull’s {roll, yaw, pitch}.
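The stabiliser is described only in outline above. Below is a minimal sketch of the idea in Python/NumPy, assuming blob positions arrive as one {x, y} pair per frame; the function name and implementation are illustrative, not the actual Max patcher.

```python
import numpy as np

def predict_next_position(history):
    """Fit a straight line through the preceding frames' blob
    positions and extrapolate one frame ahead.

    history: list of (x, y) tuples, oldest first (here, 5 frames).
    Returns the predicted (x, y) for the next frame.
    """
    pts = np.asarray(history, dtype=float)
    t = np.arange(len(pts))            # frame indices 0..4
    # Independent linear fits for x(t) and y(t); a degree-1 polyfit
    # is an ordinary least-squares linear regression.
    ax, bx = np.polyfit(t, pts[:, 0], 1)
    ay, by = np.polyfit(t, pts[:, 1], 1)
    t_next = len(pts)                  # one frame ahead
    return (ax * t_next + bx, ay * t_next + by)

# Example: a blob drifting right and slightly down, with jitter.
recent = [(100, 50), (104, 51), (109, 50), (112, 52), (117, 53)]
print(predict_next_position(recent))   # (121.0, 53.3)
```

Fitting x(t) and y(t) independently and extrapolating one frame ahead smooths over momentary tracker jitter or drop-outs. The fitted line and its predicted endpoint also reappear on screen as the ‘hair-pin’ listed among the video sources below.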
Physical interface
‣ Evolution UC33e MIDI interface, 9 sliders & 24 knobs.

Audio synthesis
‣ Drone: additive synthesis, directly controlled by the performer with the UC33e interface;
‣ ‘Coloured Noise’: subtractive synthesis, with an FFT filter shape based on the 3D skull’s appearance (a sketch follows this list);
‣ Rhythm: two coupled step sequencers, based on [4]; variations depend on the 3D skull’s appearance, and the input comes from the Drone, the Noise, or a Drum’n’Bass sample (see the second sketch below).
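The poster does not say how the skull’s appearance becomes a filter shape. One plausible reading, sketched below with hypothetical names, treats a luminance profile taken from the rendered skull frame as a magnitude curve and applies it to white noise in the FFT domain:

```python
import numpy as np

def coloured_noise(skull_profile, n_samples=44100):
    """Subtractive-synthesis sketch: shape white noise with an
    FFT-domain filter whose magnitude curve comes from an image.

    skull_profile: 1-D array in [0, 1], e.g. the column-wise mean
    luminance of the rendered skull frame (a hypothetical mapping).
    """
    noise = np.random.randn(n_samples)
    spectrum = np.fft.rfft(noise)
    # Stretch the profile across the positive-frequency bins and
    # use it as a magnitude envelope (phases are left untouched).
    envelope = np.interp(np.linspace(0, 1, len(spectrum)),
                         np.linspace(0, 1, len(skull_profile)),
                         skull_profile)
    return np.fft.irfft(spectrum * envelope, n=n_samples)

# Example: a fake "skull brightness" curve with two bright regions,
# which yields noise with two corresponding spectral peaks.
x = np.linspace(0, 1, 64)
profile = np.exp(-(x - 0.2) ** 2 / 0.002) + 0.5 * np.exp(-(x - 0.7) ** 2 / 0.005)
audio = coloured_noise(profile)
```

As the skull turns, such a profile (and with it the noise colour) would change from frame to frame, which matches the description of the audio and video streams influencing each other.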
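How the two sequencers are coupled is not specified beyond the reference to [4]. Here is a toy sketch of one way two sequencers can be pulled into step, matching the gradual synchronisation described in section 5 of the outline; all rates and patterns are invented for illustration:

```python
def coupled_sequencers(pat_a, pat_b, ticks=32, pull=0.08):
    """Toy model of two coupled step sequencers: B runs at a
    different rate, and on every tick its rate is nudged toward
    A's, so the two trigger streams gradually lock together.

    pat_a, pat_b: lists of 0/1 triggers; yields (hit_a, hit_b).
    """
    rate_a, rate_b = 1.0, 1.3          # steps advanced per tick
    pos_a, pos_b = 0.0, 0.0
    for _ in range(ticks):
        yield pat_a[int(pos_a) % len(pat_a)], pat_b[int(pos_b) % len(pat_b)]
        pos_a += rate_a
        pos_b += rate_b
        rate_b += pull * (rate_a - rate_b)   # the coupling term

# Example: print the two trigger streams as they converge.
for hit_a, hit_b in coupled_sequencers([1, 0, 0, 1, 0, 1, 0, 0], [1, 0, 1, 0]):
    print(hit_a, hit_b)
```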
Visual synthesis
‣ video sources: the 3D skull; the linear regression line with its prediction ‘hair-pin’; live camera input (the performer’s face);
‣ VFX with Jitter [2]: compositing, downsampling, and Sobel outlining (sketched below), directly controlled by the performer with the UC33e interface;
‣ ‘retro’ analog feedback: a loop with the screen projection.
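In the piece the outlining is done with Jitter [2]; the NumPy stand-in below shows what a Sobel outline computes, using the standard 3x3 kernels (the function and test image are synthetic examples, not the Jitter patch):

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_outline(gray):
    """Edge-magnitude image from a 2-D grayscale array; a stand-in
    for the Sobel outlining applied to the video streams."""
    h, w = gray.shape
    out = np.zeros((h, w))
    # Naive convolution over the interior (no padding), kept simple.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot(np.sum(KX * patch), np.sum(KY * patch))
    return out / out.max() if out.max() > 0 else out

# Example: outline a synthetic frame containing a bright square.
frame = np.zeros((64, 64))
frame[20:44, 20:44] = 1.0
edges = sobel_outline(frame)   # bright along the square's border
```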
Performance outline (total duration ≈ 10 min.)
‣ 1: noise, with close-up details of the 3D object;
‣ 2: the skull is revealed; musical elements are added: the Drone, and a slow Rhythm that chops up the Noise;
‣ 3: the tempo picks up; other video streams are mixed with the 3D skull (live input, tracking);
‣ 4: the Rhythm’s input shifts from the Drone --> a Drum’n’Bass sample; down-sampled visuals with more colour;
‣ 5: the Rhythm becomes gradually faster & simpler (synchronising the 2 step sequencers); the performer’s ‘live head’ fuses visually with the 3D skull;
‣ last: ‘retro’ analog video delay; the performer stands up, head-bang dance --> abrupt end & image freeze.

References
[1] Lindborg, PerMagnus (2008). “Reflections on aspects of music interactivity in performance situations”. eContact, 4.10.
[2] Cycling74. Max. www.cycling74.com (acc. Feb. 2013).
[3] Pelletier, J.-M. (2010). cv.jit, Computer Vision library for Jitter. http://jmpelletier.com/cvjit/ (acc. Feb. 2013).
[4] Tanaka, A. (2003). “Modsquad Relooper”. [Max example]