
Steve Gibson

This volume surveys the key histories, theories and practices of artists, musicians, filmmakers, designers, architects and technologists who have worked and continue to work with visual material in real time.

Covering a wide historical period from Pythagoras’s mathematics of music and colour in ancient Greece, to Castel’s ocular harpsichord in the 18th century, to the visual music of the mid-20th century, to the liquid light shows of the 1960s and finally to the virtual reality and projection mapping of the present moment, Live Visuals is both an overarching history of real-time visuals and audio-visual art and a crucial source for understanding the various theories about audio-visual synchronization. With the inclusion of an overview of various forms of contemporary practice in Live Visuals culture – from VJing to immersive environments, architecture to design – Live Visuals also presents the key ideas of practitioners who work with the visual in a live context.

https://www.routledge.com/Live-Visuals-History-Theory-Practice/Gibson-Arisona-
During the 1980s and 1990s the aesthetics and practice of Live Visuals entered the mainstream, leading to better acknowledgement of the ‘Live Visuals performer’ as an important creative role. The rise of rave culture also created changes in audience experience – being in a large, disinhibited dance-focused crowd now demanded an alternative form of visual spectacle to replace the previous tradition of the stage as the place of visual focus. Simultaneously the growth in stadium-scale gigs also instigated the need to create more ambitious visuals, allowing Live Visual performers to work on larger-scale and visually more innovative productions.
These new contexts also provided live visualists new opportunities to present contemporary socio-political subject matter to their audiences; themes such as Thatcherism, Reaganomics, mass consumption and environmental degradation were commonly addressed within the audio-visual content of this period. Interestingly, while mass media was a site for their criticism, for some artists, it also became a channel through which they found a more mainstream voice – the prime example being through the creation of music videos that were then seen on MTV (as outlined in Chapter 3).
Developments in technology played an important role in driving these changes. The emergent media technologies of VHS video, the Panasonic MX10 Digital Video Mixer and rudimentary computer-based video editing greatly facilitated new forms of creation and production. As is argued in Chapter 8, such experimental use of technology helped shape the visual aesthetic that emerged during this era. As artists reappropriated broadcast content on VHS tapes, irony and juxtaposition came to underpin the aesthetic; these approaches are also reminiscent of the ‘montage’ technique used by Sergei Eisenstein (as discussed in Chapter 2). By combining broadcast content with computer-generated graphics and text, visual artists could elect to be more narratively ambitious. Later hardware developments afforded better real-time computer processing, which, when harnessed, allowed for the generation of ‘audio-responsive’ visuals. For Live Visual performers, the development of better interactive interfaces made it possible to bring all the new advantages of desktop computers and software into their stage productions.
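To make the idea of ‘audio-responsive’ visuals concrete, the following minimal Python sketch (a hypothetical illustration, not drawn from the chapter) maps the RMS loudness of an incoming audio frame to a brightness value that a visual engine could consume; the function names and the gain and smoothing constants are assumptions.

```python
# Hypothetical sketch of an "audio-responsive" visual parameter:
# map the RMS loudness of one frame of audio to a brightness value.
import math

def rms(samples):
    """Root-mean-square level of one frame of audio samples (-1.0..1.0)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

def brightness_from_audio(samples, smoothing=0.8, previous=0.0):
    """Map RMS level to a 0..1 brightness, with simple smoothing to avoid flicker."""
    level = min(1.0, rms(samples) * 4.0)   # crude gain so quiet material still registers
    return smoothing * previous + (1.0 - smoothing) * level

# Example: a quiet frame followed by a loud frame.
quiet = [0.01 * math.sin(i / 10.0) for i in range(512)]
loud  = [0.8  * math.sin(i / 10.0) for i in range(512)]
b = brightness_from_audio(quiet)
b = brightness_from_audio(loud, previous=b)
print(round(b, 3))
```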
Parallel to these developments, the acid house movement began in Chicago in the mid-1980s and moved quickly to the UK, culminating with the “Second Summer of Love” in 1987. Acid house was a simple form of dance music, with a pulse driven by the iconic Roland TB-303 synth. Acid house fed directly into the establishment of rave culture, which was christened by former Throbbing Gristle/Psychic TV singer Genesis P-Orridge in 1989. Raves were usually illegal events held in huge venues, where drugs were consumed and hedonistic dancing took place to electronic music. Though initially driven by acid house, other forms of electronic music such as drum and bass and techno became common at raves in the 1990s. The rave, while not always employing Live Visuals, often had psychedelic qualities achieved through lighting that brought to mind the equally psychedelic events of the 1960s (see Chapter 3 for more on psychedelic events such as The Joshua Light Show’s performances and Chapter 11 for more on the hedonism of rave culture).
This chapter will select and discuss some of the most pioneering live-visual performers of the 1980s and 1990s as examples to illustrate what happened when culture was mediated through emerging technology. The chapter will begin in the UK with scratch video, stay in the UK to discuss rave culture, move to the United States to discuss Emergency Broadcast Network (EBN) and then return to the UK to consider Coldcut.
Opto-Phono-Kinesia (OPK) is an audio-visual performance piece in which all media elements are controlled by the body movements of a single performer. The title is a play on a possible synesthetic state involving connections between vision, sound and body motion. Theoretically, for a person who experiences this state, a specific colour could trigger both a sound and a body action. This synesthetic intersection is simulated in OPK by the simultaneity of body movement and audio-visual result.

Using the Gesture and Media System 3.0 motion-tracking system, the performer can dynamically manipulate an immersive environment using two small infrared trackers. The project employs a multipart interface design based on a formal model of increasing complexity in visual-sound-body mapping, and is therefore best performed by an expert performer with strong spatial memory and advanced musical ability. OPK utilizes the “body as experience, instrument and interface” [1] for control of a large-scale environment.

1. ACM. TEI Arts Track website. 2018. Retrieved October 10, 2017 from https://tei.acm.org/2018/arts-track/
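The kind of visual-sound-body mapping described above can be sketched in code. The Python fragment below is a hypothetical, simplified illustration (the parameter names, ranges and mappings are assumptions, not the actual GAMS 3.0 implementation): one tracked 3D position simultaneously drives sound and visual parameters.

```python
# Hypothetical sketch of the kind of body-to-media mapping OPK describes:
# a tracked 3D position drives sound and visual parameters at once.
from dataclasses import dataclass

@dataclass
class TrackerSample:
    x: float  # left-right position, normalised 0..1
    y: float  # height, normalised 0..1
    z: float  # depth, normalised 0..1

def map_to_media(sample: TrackerSample) -> dict:
    """One illustrative mapping 'layer': the same gesture changes sound and image together."""
    return {
        "pitch_hz":    110.0 + sample.y * 880.0,  # raise the arm, raise the pitch
        "volume":      sample.z,                   # step forward, get louder
        "hue_deg":     sample.x * 360.0,           # move sideways, shift colour
        "brightness":  sample.z,                   # depth also drives brightness,
                                                   # so sound and image stay synchronised
    }

print(map_to_media(TrackerSample(x=0.25, y=0.5, z=0.8)))
```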
This Live Visuals LEA volume collects papers from a wide range of filmmakers, media artists, musicians, choreographers, designers and computer scientists; it is the first serious academic look at the phenomenon of real-time visuals.
This volume collects selected papers from the 2006-07 editions of Digital Art Weeks (Zurich, Switzerland) and Interactive Futures (Victoria, BC, Canada), two parallel festivals of digital media art. The work represented in Transdisciplinary Digital Art is a confirmation of the vitality and breadth of the digital arts. Collecting essays that broadly encompass the digital arts, Transdisciplinary Digital Art gives a clear overview of the ongoing strength of scientific, philosophical, aesthetic and artistic research that makes digital art perhaps the defining medium of the 21st century.
This article argues for the value of using formal models within the (digital media) artwork. Eschewing the antiformalism common to much of postmodernism, it argues for a more active engagement with formal concerns. Without embracing the totalizing theories of late modernist formalism, or discarding the idea of “the concept,” the author argues for a more formal approach to the making of the (digital media) artwork. The goal is to point to models that can be used to intimately connect form and concept rather than treat them as separate or warring entities. The article critically explores three very specific digital media artworks that endeavor to bridge the gap between formalism and conceptualism, each pointing to indicative (but not exhaustive) methods for reuniting form and concept.
This article considers gestural and motion-based interaction within the context of body-based audio-visual performance. More specifically the article describes some general commonalities between different approaches to both large-area and small-area interaction. This is discussed in relation to my own work with motion-tracking technologies (The Gesture and Media System [GAMS]), as well as more recent experiments with the Leap Motion hand/gesture tracker. In addition, the article references other strategies for gestural interaction such as a formal ‘method for mapping embodied gesture, acquired with electromyography and motion sensing’ (Zbyszynski et al. 2019) and a more human-centred ‘expansive and augmented performance environment that facilitates full-body musical interactions’ (Altosaar et al. 2019). The article posits that while precise gestural mapping approaches are not universally applicable, enough commonalities between gestural interaction strategies exist that some insights can be generally applied to gestural performance, regardless of the technology employed or the scale of the interaction space.
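One of the commonalities the article points to can be illustrated with a short sketch: whether the interaction volume is room-scale (as with GAMS) or desktop-scale (as with the Leap Motion), a raw tracker coordinate can be normalised against the bounds of its own volume and then mapped identically to a media parameter. The Python below is a hypothetical illustration; the bounds and the filter-cutoff mapping are assumptions.

```python
# Hypothetical illustration of one commonality between large-area and small-area
# gestural interaction: normalise a coordinate against its own interaction volume,
# then apply the same media mapping regardless of physical scale.
def normalise(value, low, high):
    """Clamp and scale a raw tracker coordinate into 0..1."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def gesture_to_filter_cutoff(x, bounds):
    """Same mapping logic regardless of the tracker's physical scale."""
    return 200.0 + normalise(x, *bounds) * 8000.0   # Hz, purely illustrative

room_scale    = (-4.0, 4.0)     # metres, large-area system (assumed bounds)
desktop_scale = (-0.15, 0.15)   # metres, small-area hand tracker (assumed bounds)

# The same relative gesture yields the same musical result at either scale.
print(gesture_to_filter_cutoff(2.0, room_scale))
print(gesture_to_filter_cutoff(0.075, desktop_scale))
```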
This physical computing project proposes a circle of re-purposing, in which both the interface and content are repurposed, and portions of the content are updated according to the geographical location of its exhibition. The artefact employed is a repurposed bicycle intended to navigate computer-based environments. There is a history of repurposing cycles for this purpose, from Jeffrey Shaw's Media Art project The Legible City to commercial sports cycle simulators such as Tacx; however, very few projects propose repurposing a cycle interface along with the content, as well as a geographically specific repurposing. The main research concern continues a 25-year project by the author into the formal and material uses of 'found, sampled and stolen' (Media N, 2012) objects. While this concept has been explored extensively in relation to Sound and Media Art, in Interaction Design the use of repurposed materials has yet to be extensively theorised. This paper proposes a provocation in the form of a repurposed artefact, not merely for the purpose of denying originality, but as a means of illustrating how repurposing can create a skewed version of the original(s) and therefore create new meaning. In the face of limited resources, repurposing also serves as a potentially advantageous option for Interaction Designers.
In this paper I will describe and present examples of my live audio-visual work for 3D spatial environments. These projects use motion-tracking technology to enable users to interact with sound, light and video using their body movements in 3D space. Specific video examples of one past project (Virtual DJ) and one current project (Virtual VJ) will be shown to illustrate how flexible user interaction is enabled through a complex and precise mapping of 3D space to media control. In these projects audience members can interact with sound, light and video in real-time by simply moving around in space with a tracker in hand. Changes in sound can be synchronized with changes in light and/or real-time visual effects (i.e. music volume = light brightness = video opacity). These changes can be dynamically mapped in real-time to allow the user to consolidate the roles of DJ, VJ and light designer in one interface. This interaction model attempts to reproduce the effect of synesthesia, in which certain people experience light or color in response to music.
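The synchronised mapping described above (music volume = light brightness = video opacity) can be expressed in a few lines. The Python sketch below is a hypothetical illustration of the mapping model rather than the Virtual DJ/Virtual VJ implementation; the dictionary keys are assumed names.

```python
# A minimal sketch of the synchronised mapping described above:
# one normalised control value drives sound, light and video together.
def synced_state(control: float) -> dict:
    """One control value (e.g. tracker height) drives all three media layers."""
    c = max(0.0, min(1.0, control))
    return {
        "music_volume":     c,   # audio gain, 0..1
        "light_brightness": c,   # lighting dimmer level, 0..1
        "video_opacity":    c,   # opacity of the live video layer, 0..1
    }

print(synced_state(0.6))
```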
This paper will look at Dadaist tendencies in game art projects such as Wafaa Bilal’s Virtual Jihadi and the author’s collaborative project Grand Theft Bicycle. The focus will be on how certain game art projects borrow from popular games such as Grand Theft Auto and Quake, and invert/subvert the meaning of the originals by ‘modding’ aspects of the original world (i.e. the characters, the sounds, the basic concept), while leaving much of the source (i.e. the background textures, the AI) intact. This paper argues that the game art mod is a digital descendent of the Dadaist ready-made.

More specifically the paper will look at the particular Dadaist-inspired absurdities in Grand Theft Bicycle (GTB), such as the use of an aerobic-style bicycle to engage in a political battle with world leaders within a vacuous game environment.

GTB is part of a sub-genre I am describing as ‘Dadaist game art.’ In short, Dadaist game art uses the forms of commercial gaming, but inverts normally uncritical game content to include ironic reflections on the culture of gaming. Dadaist game art borrows from game culture, but provides a new take on gaming that is contradictory, provocative and absurd in the Dadaist sense.
This paper demonstrates the results of the authors’ Wacom tablet MIDI user interface. This application enables users’ drawing actions on a graphics tablet to control audio and video parameters in real-time. The programming affords five degrees (x, y, pressure, x tilt, y tilt) of concurrent control for use in any audio or video software capable of receiving and processing MIDI data. Drawing gesture can therefore form the basis of dynamic control simultaneously in the auditory and visual realms. This creates a play of connections between parameters in both mediums, and illustrates a direct correspondence between drawing action and media transformation that is immediately apparent to viewers.

The paper considers the connection between drawing technique and media control both generally and specifically, postulating that dynamic drawing in a live context creates a performance mode not dissimilar to performing on a musical instrument or conducting with a baton. The use of a dynamic and physical real-time media interface re-inserts body actions into live media performance in a compelling manner. Performers can learn to “draw/play” the graphics tablet as a musical and visual “instrument”, creating a new and uniquely idiomatic form of electronic drawing. The paper also discusses how to practically program the application and presents examples of its use as a media manipulation tool.
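As a hedged illustration of the approach, the following Python sketch converts the five tablet parameters (x, y, pressure, x tilt, y tilt) into MIDI continuous-controller messages using the mido library. The CC numbers, scaling and function names are assumptions for demonstration, not the authors' actual mapping.

```python
# Hypothetical sketch: five tablet parameters (x, y, pressure, x tilt, y tilt)
# each become a MIDI continuous controller. CC numbers and scaling are assumed.
import mido

# Assumed CC assignments for the five degrees of control.
CC_MAP = {"x": 20, "y": 21, "pressure": 22, "tilt_x": 23, "tilt_y": 24}

def tablet_to_midi(x, y, pressure, tilt_x, tilt_y, channel=0):
    """Convert normalised (0..1) tablet values into a list of MIDI CC messages."""
    values = {"x": x, "y": y, "pressure": pressure, "tilt_x": tilt_x, "tilt_y": tilt_y}
    return [
        mido.Message("control_change",
                     channel=channel,
                     control=CC_MAP[name],
                     value=int(max(0.0, min(1.0, v)) * 127))
        for name, v in values.items()
    ]

# Example: one pen sample; in a live patch these messages would be sent
# to an open MIDI output port feeding the audio and video software.
for msg in tablet_to_midi(0.5, 0.25, 0.9, 0.1, 0.0):
    print(msg)
```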
We have entitled this volume Transdisciplinary Digital Art to distinguish it from the older term Interdisciplinary Art. Interdisciplinarity implies a certain level of detachment across the mediums: the artist, the engineer, the musician and the dancer may collaborate with each other but in much interdisciplinary work there is a sense that they are separate entities performing their own expert functions without more thorough knowledge of the other’s technical or artistic processes.

Transdisciplinarity implies a level of direct connection and cross-over between mediums: the artist also becomes the engineer, the engineer becomes the artist, and when they collaborate they actually have enough expertise in the other’s field to be able to address concerns across the mediums and even across disciplines. This is not to say that there are not varying levels of expertise within transdisciplinary work, but rather that transdisciplinary art in its best sense makes the effort to understand the medium of the other in more than superficial terms. Here science is no less important than art, art no less than science. The elitism of the isolated discipline is broken down to a degree.
My contribution to this project will be to discuss the interface between humans and machines, specifically some alternative approaches to human interaction with large-scale digital systems. Currently the majority of mainstream research in the area of interface technology is dominated by both concerns of ergonomics and matters of practicality in the mouse-keyboard-monitor configuration. This paper is interested in those interface strategies that purposely diverge from the standard mouse-keyboard arrangement and therefore offer a wider range of sensory interaction with digital systems.
The MINDful Play Environment (MPE), an acronym for Motion-tracking, Interactive Deliberation, is a virtual learning environment created as a performance-installation piece, driven by motion-tracking technology, in which three people interact with one another and media elements like video, animation, music, lights, and spoken word. The artists and programmers involved in this project – Dene Grigar, Steve Gibson, Justin Love, and Jeannette Altman – have produced this educational environment so that it does not look or feel educational but rather game-like in a way that is both playful and mindful. Here, we provide an overview of the project, talk about the Phase I stage just completed, and describe the future steps planned for its development and use.
This paper lays out research into the use of motion tracking technology for real-time, embodied telepresence and collaboration. The central question underlying this essay is, "In what ways can telepresence and collaboration be enhanced by motion tracking technology in performance and installations?" Preliminary findings suggest that motion tracking technology makes it possible for multiple users to manipulate not only data objects like images, video, sound, and light but also hardware and equipment, such as computers, robotic lights, and projectors, with their bodies in a 3D space across a network. Implications for use may be of interest to those working on digital media projects where hardware, software, and peripherals must be controlled in real-time by teams working together at-a-distance or where physical computing research is undertaken.
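A minimal sketch of the kind of networked, embodied control investigated here: one site broadcasts a performer's tracker position so that a remote site can drive its own media and equipment from the same body movements. The Python below uses plain UDP and JSON purely for illustration (real systems would more typically use a protocol such as OSC); the address, port and message format are assumptions.

```python
# Hypothetical sketch of networked, embodied control: one site sends tracker
# positions over UDP so a remote collaborator's system can respond in real time.
import json
import socket

REMOTE = ("127.0.0.1", 9000)   # assumed address of the collaborating site

def send_tracker_update(sock, performer_id, x, y, z):
    """Send one performer's 3D position as a small JSON datagram."""
    packet = json.dumps({"id": performer_id, "pos": [x, y, z]}).encode("utf-8")
    sock.sendto(packet, REMOTE)

def receive_tracker_update(sock):
    """Receive one update and return it as a dict (blocking)."""
    data, _addr = sock.recvfrom(1024)
    return json.loads(data.decode("utf-8"))

# Example: sender side only; a receiver would bind to the port and call
# receive_tracker_update() inside its render/control loop.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_tracker_update(sender, performer_id="A", x=1.2, y=0.4, z=2.0)
```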