Chris Nash

    Modern gestural interaction and motion capture technology is frequently incorporated into Digital Musical Instruments (DMIs) to enable new methods of musical expression. A major topic of interest in this domain concerns how a performer's actions are linked to the production of sound. Some DMI developers choose to design these mapping strategies themselves, while others expose this design space to performers. This work explores the latter of these scenarios, studying the user-defined mapping strategies of a group of experienced mid-air musicians chosen from a rare community of DMI practitioners. Participants are asked to design mappings for a piece of music to determine what factors influence their choices. The findings reveal that novice performers spend little time reviewing mapping choices and more time practising, and design mappings that adhere to musical metaphors. Experienced performers edit mappings continuously and focus on the ergonomics of their mapping designs.
    This paper explores the concepts of virtuosity and flow in computer music, by looking at the technological, interactive and social factors in soundtracking: a text-based, computer keyboard-manipulated notation for real-time computer-aided music composition. The role of virtuosity in both the personal user experience and the wider demoscene hacker-artist subculture is discussed. Comparisons are made to mainstream music interaction paradigms, such as performance capture and sequencing, where support for virtuosity is present in MIDI devices and episodes of live performance recording, but otherwise impeded by mouse-driven interfaces designed around visual metaphors for novice use, rather than the development of practised skill. Discussions and observations are supported by initial findings from a large-scale, 2-year user study of over 1,000 tracker and sequencer users.
    This workshop provides a demonstration of the Manhattan digital environment, a platform designed to support and motivate learning programming through musical creativity. The software combines music editing with a scalable level of coding – from simple formulas or code fragments to more advanced algorithms and generative music. Following a 20-minute introduction to the project and program, delegates will have the opportunity to interact with the software through one of the lessons, each of which takes a topic in computer science (e.g. iteration, conditional statements, or variables and arrays) and explores it in a musical context. The session will show how the environment enables a wide range of complex computing concepts to be taught visually and musically – supporting intrinsic motivation, scalable challenge, and personal creativity, to foster deeper engagement with the subject. The session is ideal for educators in computing and non-computing disciplines looking to develop students’ computational thinking styles and aesthetics.
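    To make the pairing of computing topics and music concrete, here is a minimal sketch of how iteration, conditionals, and array indexing might generate a musical pattern. It is an illustration in Python, not Manhattan's actual notation; the note values and function names are assumptions.

```python
# Hypothetical illustration (not Manhattan's actual notation): how core
# computing concepts like iteration can be framed musically, by
# generating a repeating arpeggio with a loop, an array, and a conditional.

C_MAJOR = [60, 64, 67]  # MIDI note numbers for C, E, G

def arpeggio(bars: int, notes_per_bar: int = 4) -> list[tuple[int, int]]:
    """Return (note, velocity) events; iteration drives the pattern,
    a conditional accents the first beat of every bar."""
    events = []
    for step in range(bars * notes_per_bar):                 # iteration
        note = C_MAJOR[step % len(C_MAJOR)]                  # array indexing
        velocity = 100 if step % notes_per_bar == 0 else 70  # conditional
        events.append((note, velocity))
    return events

print(arpeggio(bars=2))
```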
    This talk discusses the use of an end-user computing environment to engage students with programming practice and computational thinking using the context of musical creativity. The talk will detail (with live demonstrations) interactions and lessons using the Manhattan environment, which combines music editing with a scalable level of programming – from simple formulas or code fragments embedded in musical patterns to more advanced algorithms and generative music. The environment enables a wide range of fundamental and advanced computing concepts to be taught visually and musically. Discussions will focus on the importance of supporting intrinsic motivation, scalable challenge, and personal creativity, in tackling complex learning domains like computer science and music theory, and are supported by data and student feedback from classroom use. Recent and future developments, exploring real-time collaboration and connectivity, will also be discussed in the context of supporting social learning and extrinsic motivation, and broadening use of the tool.
    This paper details a digital platform designed for digital creativity, learning, and engagement with new concepts and aesthetics in both music and coding. An open online ecosystem is outlined, connecting users for the purposes of collating, sharing, supporting, collaborating, and competing with works combining music and code – collectively designed to tackle both intrinsic and extrinsic motivational issues in both the learning of music and programming. Building on observed practices and aesthetics in digital music subcultures, composing and coding through a unified digital notation is fashioned as a ‘serious game’; composers compete against themselves or others, in works that combine creativity and virtuosity in music and code. Mechanisms for scoring pieces with respect to both musical aesthetic (e.g. user reviews) and technique (e.g. code complexity) are considered, proposing a metric that rewards conciseness, in order to encourage abstraction and pattern recognition in both music and code. The platform builds on Manhattan (Nash, 2014), an end-user programming environment for music composition, based on a text-based pattern sequencer, using a grid/cell formula metaphor to integrate programming functionality. A rapid edit-audition cycle improves the liveness of notation interaction, facilitating learning and experimentation. Unlike other music programming tools, it prioritises the visibility and editing of musical data, rather than code; as in spreadsheets, users are able to engage with code expressions as much (or as little) as they wish, offering a lower entry threshold and shallower learning curve for programming – plus a continuum of musical applications from the fixed, structured notation of music (e.g. MIDI sequencing) to more dynamic and experimental elements, such as minimal (process-based) and algorithmic composition techniques. The paper provides musical examples and discusses existing use of the technologies in teaching, with reference to lessons and workshops based on the software, as well as current and future directions for the research.
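    As one illustration of a technique metric that rewards conciseness, the following sketch scores a piece by how many musical events its notation produces per source token. This is a hypothetical instance of the idea, not the platform's actual scoring mechanism.

```python
# A hypothetical conciseness score of the kind the paper proposes:
# reward pieces whose notated source (patterns + formulas) is short
# relative to the music it generates. All names here are assumptions,
# not the platform's actual metric.

def conciseness_score(source_tokens: int, output_events: int) -> float:
    """Higher when more musical events are produced per notation token,
    encouraging abstraction (loops, formulas) over literal entry."""
    if source_tokens == 0:
        return 0.0
    return output_events / source_tokens

# A literal 64-note pattern vs. a 12-token formula generating 64 notes:
print(conciseness_score(64, 64))   # 1.0  (no abstraction)
print(conciseness_score(12, 64))   # ~5.3 (concise, abstracted)
```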
    In this paper we draw an analogy between musical systems and programming environments, concentrating on user experience associated with feedback and its implications for flow. We present a number of different analytical frames all of which, we ...
    This paper outlines a workshop to explore intersections of programming and music in digital notation. With the aid of the Manhattan music programming and sequencing environment (Nash, 2014), methods for representing both high-level processes and low-level data constructs in both domains will be explored and debated. The goal of this research is to establish ways of using music concepts to teach programming (and vice versa), working towards digital pedagogies and platforms supporting intrinsic motivation, virtuosity, and auto-didactic learning. The proposed schedule begins with a presentation of findings from studies of both programming and music students, followed by an introduction to the Manhattan software, a sequencer supporting end-user programming (combining declarative and imperative programming idioms) for real-time manipulation of live music notation. The second half of the workshop invites participants to explore concepts in, and overlaps between, programming and music using the software (provided). Beginning with simple structured exercises and examples, the activities will proceed to freer exploratory design and experimentation, drawing on the participants’ backgrounds in music and programming. The workshop closes with a discussion of conclusions and future directions for the research.
    Traditional Western musical instruments have evolved to be robust and predictable, responding consistently to the same player actions with the same musical response. Consequently, errors occurring in a performance scenario are typically attributed to the performer and thus a hallmark of musical accomplishment is a flawless musical rendition. Digital musical instruments often increase the potential for a second type of error as a result of technological failure within one or more components of the instrument. Gestural instruments using machine learning can be particularly susceptible to these types of error as recognition accuracy often falls short of 100%, making errors a familiar feature of gestural music performances. In this paper we refer to these technology-related errors as system errors, which can be difficult for players and audiences to disambiguate from performer errors. We conduct a pilot study in which participants repeat a note selection task in the presence of simulated system errors. The results suggest that, for the gestural music system under study, controlled increases in system error correspond to an increase in the occurrence and severity of performer error. Furthermore, we find that these system errors reduce a performer's sense of control and result in the instrument being perceived as less accurate and less responsive.
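    The controlled error injection described above can be illustrated with a small sketch: with a set probability, a simulated recogniser returns a wrong note in place of the intended one. This is a hypothetical reconstruction for illustration, not the study's actual apparatus.

```python
# A minimal sketch of controlled system-error injection: with
# probability `error_rate`, the system substitutes the recognised note,
# simulating gesture-recognition failure. Names are assumptions.
import random

def recognise(intended_note: int, error_rate: float,
              pitch_range: range = range(48, 72)) -> int:
    """Return the intended note, or a random wrong note with
    probability `error_rate` (a simulated system error)."""
    if random.random() < error_rate:
        wrong = [n for n in pitch_range if n != intended_note]
        return random.choice(wrong)
    return intended_note

random.seed(1)
print([recognise(60, error_rate=0.2) for _ in range(10)])
```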
    The ability for machines to compose novel, interesting and human-esque music is an important and recurring theme in the history of computer music. Modern staff notation remains the most common tool for music composition, and despite the emergence and abundance of highly flexible computer-based score editors, efforts to incorporate generative music techniques into such systems and the subsequent composer workflow are seldom explored. This paper explores and develops both theoretical and practical models that address these concerns, and presents these in the context of a software prototype. The paper is constructed as a work in progress, detailing the work so far and how future research will be conducted.
    Computer-composed music remains a novel and challenging problem to solve. Despite an abundance of techniques and systems, little research has explored how these might be useful for end-users looking to compose with generative and algorithmic music techniques. User interfaces for generative music systems are often inaccessible to non-programmers and neglect established composition workflow and design paradigms that are familiar to computer-based music composers. We have developed a system called the Interactive Generative Music Environment (IGME) that attempts to bridge the gap between generative music and music sequencing software, through an easy-to-use score editing interface. This paper discusses a series of user studies in which users explore generative music composition with IGME. A questionnaire evaluates the users' perceptions of interacting with generative music, and from this we provide recommendations for future generative music systems and interfaces.
    This paper details the development of Turnector, a control system based upon tangible widgets that are manipulated on the touchscreen of a capacitive touch device. Turnector widgets are modelled on rotary faders and aim to connect the user to parameters in their audio software in a manner analogous to the one-to-one control mapping utilised in analogue studio equipment. The system aims to streamline workflow and facilitate hands-on experimentation through a simple and unobtrusive interface. The physical widgets provide users with the freedom to glance away from the touchscreen surface whilst maintaining precise control of multiple parameters simultaneously. Related work in this area, including interaction design and TUIs in the context of musical control, is first discussed before setting out the design specification and manufacturing process of the Turnector widgets. A number of unique methods for widget detection and tracking are presented before closing the paper with initial fi...
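    One plausible approach to detecting such passive widgets on a capacitive screen is to treat each widget's conductive contact points as a geometric fingerprint. The sketch below matches a triple of touch points against registered widgets by their pairwise distances; this is an assumed method for illustration, not necessarily the technique used in Turnector.

```python
# Hypothetical passive-widget detection: each widget has conductive
# feet forming a triangle whose side lengths act as a fingerprint;
# match observed touch triples against known widgets. All widget names
# and dimensions are assumptions.
import math
from itertools import combinations

WIDGETS = {"widget_A": (20.0, 30.0, 40.0),   # side lengths in mm (hypothetical)
           "widget_B": (25.0, 25.0, 35.0)}

def identify(points: list[tuple[float, float]], tol: float = 2.0) -> str | None:
    """Match three touch points to a registered widget by comparing
    sorted pairwise distances within tolerance `tol` mm."""
    sides = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    for name, signature in WIDGETS.items():
        if all(abs(s - t) <= tol for s, t in zip(sides, sorted(signature))):
            return name
    return None

print(identify([(0, 0), (20, 0), (-7.5, 29.05)]))  # -> widget_A
```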
    This paper details technologies and artistic approaches to crowd-driven music, discussed in the context of a live public installation in which activity in a public space (e.g. a busy railway platform) is used to drive the automated composition and performance of music. The approach presented uses real-time machine vision applied to a live video feed of a scene, from which detected objects and people are fed into Manhattan (Nash, 2014), a digital music notation that integrates sequencing and programming to support the live creation of complex musical works that combine static, algorithmic, and interactive elements. The paper discusses the technical details of the system and the artistic development of specific musical works, introducing novel techniques for mapping chaotic systems to musical expression and exploring issues of agency, aesthetics, accessibility, and adaptability relating to composing interactive music for crowds and public spaces. In particular, performances as part of an inst...
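    The mapping stage of such a system can be illustrated with a simplified sketch, in which detections from a vision system drive tempo, note density, and pitch centre. The parameter names and scalings below are assumptions for illustration, not the installation's actual mappings.

```python
# A simplified sketch of a crowd-to-music mapping stage (hypothetical
# parameter names and scalings): vision detections drive tempo,
# note density, and pitch.

def crowd_to_music(detections: list[tuple[float, float]],
                   base_tempo: float = 90.0) -> dict:
    """Map detected people (normalised x, y positions) to musical
    parameters: crowd size -> tempo/density, mean x -> pitch centre."""
    n = len(detections)
    mean_x = sum(x for x, _ in detections) / n if n else 0.5
    return {
        "tempo_bpm": base_tempo + 4 * n,        # busier scene, faster music
        "notes_per_bar": min(16, 2 + n),        # more people, denser texture
        "pitch_centre": int(48 + 24 * mean_x),  # left-right -> low-high (MIDI)
    }

print(crowd_to_music([(0.2, 0.5), (0.8, 0.4), (0.5, 0.9)]))
```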
    This paper presents a development of the ubiquitous computer keyboard to capture velocity and other continuous musical properties, in order to support more expressive interaction with music software. Building on existing ‘virtual piano’ utilities, the device is designed to provide a richer mechanism for note entry within predominantly non-realtime editing tasks, in applications where keyboard interaction is a central component of the user experience (score editors, sequencers, DAWs, trackers, live coding), and in which users draw on virtuosities in both music and computing. In the keyboard, additional hardware combines existing scan code (key press) data with accelerometer readings to create a secondary USB device, using the same cable but visible to software as a separate USB MIDI device alongside existing USB HID functionality. This paper presents and evaluates an initial prototype, developed using an Arduino board and inexpensive sensors, and discusses design considerations and test ...
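    The core sensing idea can be sketched as follows: pair each keystroke with the accelerometer peak measured around it, and scale that peak onto the MIDI velocity range. The thresholds and linear scaling here are assumptions for illustration, not the prototype's firmware.

```python
# A rough sketch of deriving MIDI velocity from an accelerometer peak
# measured around a keystroke (hypothetical thresholds and scaling).

def midi_velocity(accel_peak_g: float,
                  soft_g: float = 0.05, hard_g: float = 2.0) -> int:
    """Linearly map an acceleration peak (in g) measured around a
    keystroke onto the MIDI velocity range 1-127."""
    norm = (accel_peak_g - soft_g) / (hard_g - soft_g)
    return max(1, min(127, int(1 + norm * 126)))

for peak in (0.05, 0.5, 1.0, 2.0):   # gentle tap ... hard strike
    print(peak, "->", midi_velocity(peak))
```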
    This paper presents GestureChords, a mapping strategy for chord selection in freehand gestural instruments. The strategy maps chord variations to a series of hand postures using the concepts of iconicity and conceptual metaphor, influenced by their use in American Sign Language (ASL), to encode meaning in gestural signs. The mapping uses the conceptual metaphors MUSICAL NOTES ARE POINTS IN SPACE and INTERVALS BETWEEN NOTES ARE SPACES BETWEEN POINTS, which are mapped respectively to the number of extended fingers in a performer’s hand and the abduction or adduction between them. The strategy is incorporated into a digital musical instrument and tested in a preliminary study of transparency for both performers and spectators, which gave promising results for the technique.
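    An illustrative encoding in the spirit of this mapping might derive the number of chord tones from the extended fingers and widen the intervals between tones under abduction. The interval values below are hypothetical, not the paper's exact mapping table.

```python
# An assumed encoding in the spirit of GestureChords: extended-finger
# count selects how many chord tones sound, and abduction (spread
# fingers) stretches the stacked intervals. Interval values are
# hypothetical.

def gesture_chord(root: int, extended_fingers: int, abducted: bool) -> list[int]:
    """Return MIDI notes: each extended finger adds a chord tone;
    abduction widens the interval between successive tones."""
    interval = 4 if abducted else 3        # wider spacing when spread
    return [root + i * interval for i in range(max(1, extended_fingers))]

print(gesture_chord(60, 3, abducted=False))  # close-voiced chord [60, 63, 66]
print(gesture_chord(60, 3, abducted=True))   # spread chord       [60, 64, 68]
```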
    This paper explores the concept of end-user programming languages in music composition, and introduces the Manhattan system, which integrates formulas with a grid-based style of music sequencer. Following the paradigm of spreadsheets, an established model of end-user programming, Manhattan is designed to bridge the gap between traditional music editing methods (such as MIDI sequencing and typesetting) and generative and algorithmic music – seeking both to reduce the learning threshold of programming and support flexible integration of static and dynamic musical elements in a single work. Interaction draws on rudimentary knowledge of mathematics and spreadsheets to augment the sequencer notation with programming concepts such as expressions, built-in functions, variables, pointers and arrays, iteration (for loops), branching (goto), and conditional statements (if-then-else). In contrast to other programming tools, formulas emphasise the visibility of musical data (e.g. notes), rather ...
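    A toy model of the spreadsheet paradigm applied to a sequencer column can show how static notes and formula cells coexist. Manhattan's real notation and function set differ, so the syntax below is purely illustrative.

```python
# A toy model of spreadsheet-style formulas in a sequencer column
# (illustrative only): a column holds literal notes or formula cells
# that reference earlier rows.

def evaluate(column: list[object]) -> list[int]:
    """Resolve each cell to a MIDI note: ints are literal notes,
    strings are formulas evaluated with `row[i]` references."""
    resolved: list[int] = []
    for cell in column:
        if isinstance(cell, int):
            resolved.append(cell)          # static notation
        else:                              # dynamic, formula cell
            resolved.append(int(eval(cell, {}, {"row": resolved})))
    return resolved

# C4, then a fifth above row 0, then an octave above row 1:
print(evaluate([60, "row[0] + 7", "row[1] + 12"]))  # [60, 67, 79]
```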
    Iterative design methods involving children and educators are difficult to conduct, given both the ethical implications and time commitments understandably required. The qualitative design process presented here recruits introductory teacher training students, in order to discover useful design insights relevant to music education technologies ‘by proxy’. Thus, some of the barriers present in child-computer interaction research are avoided. As an example, the method is applied to the creation of a block-based music notation system, named Codetta. Building upon successful educational technologies that intersect both music and programming, Codetta seeks to enable child composition, whilst aiding generalist educators’ confidence in teaching music.
    This paper presents and adapts the Cognitive Dimensions of Notations framework (Green and Petre, 1996) for use in designing and analysing notations (and user interfaces) in both digital and traditional music practice and study. Originally developed to research the psychology of programming languages, the framework has since found wider use in both general HCI and music. The paper provides an overview of the framework, its application, and a detailed account of the core cognitive dimensions, each discussed in the context of three music scenarios: the score, Max/MSP, and sequencer/DAW software. Qualitative and quantitative methodologies for applying the framework are presented in closing, highlighting directions for further development of the framework.
    This paper describes how the Cognitive Dimensions of Notations framework can guide the design of algorithmic composition tools. Prior research has also used the cognitive dimensions for analysing interaction design for algorithmic composition software. This work aims to address the shortcomings of existing algorithmic composition software by utilising the more commonly used score notation interfaces, rather than patch-based or code-based environments. The paper sets out design requirements in each dimension and presents these in the context of a software prototype. These principles are also applicable to general music composition systems.
    Standard Western notation supports the understanding and performance of music, but has limited provisions for revealing overall musical characteristics and structure. This paper presents several visualisers for highlighting and providing insights into musical structures, including rhythm, pitch, and interval transitions, also noting how these elements modulate over time. The visualisations are presented in the context of Shneiderman’s Visual Information-Seeking Mantra, and terminology from the Cognitive Dimensions of Music Notations usability framework. Such techniques are designed to make understanding musical structure quicker, easier, and less error prone, and to take better advantage of the intrinsic pattern recognition abilities of humans.
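    One of the simpler structures such visualisers can expose is a matrix counting transitions between successive melodic intervals, which can then be rendered as a heatmap. The sketch below is illustrative, not the paper's implementation.

```python
# Count transitions between successive melodic intervals (in
# semitones); repeated pairs reveal structural patterning that a
# visualiser could render as a heatmap. An illustrative sketch.
from collections import Counter

def interval_transitions(notes: list[int]) -> Counter:
    """Count (previous interval -> next interval) pairs in a melody."""
    intervals = [b - a for a, b in zip(notes, notes[1:])]
    return Counter(zip(intervals, intervals[1:]))

melody = [60, 62, 64, 62, 60, 62, 64, 62, 60]
print(interval_transitions(melody))   # (2, 2), (2, -2), (-2, -2) dominate
```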
    The need for thorough evaluations is an emerging area of interest and importance in music interaction research. As a large degree of DMI evaluation is concerned with exploring the subjective experience (ergonomics, action-sound mappings, and control intimacy), User Experience (UX) methods are increasingly being utilised to analyse an individual's experience of new musical instruments, from which we can extract meaningful, robust findings and, subsequently, generalisable and useful recommendations. However, many music interaction evaluations remain informal. In this paper, we provide a meta-review of 132 papers from the 2014–2016 proceedings of the NIME, SMC and ICMC conferences to collate the aspects of UX research that are already present in music interaction literature, and to highlight methods from UX's widening field of research that have not yet been explored. Our findings show that usability and aesthetics are the primary focus of evaluations in music interaction research...
    Camera-based motion tracking has become a popular enabling technology for gestural human-computer interaction. However, the approach suffers from several limitations, which have been shown to be particularly problematic when employed within musical contexts. This paper presents Leimu, a wrist mount that couples a Leap Motion optical sensor with an inertial measurement unit to combine the benefits of wearable and camera-based motion tracking. Leimu is designed, developed and then evaluated using discourse and statistical analysis methods. Qualitative results indicate that users consider Leimu to be an effective interface for gestural music interaction and the quantitative results demonstrate that the interface offers improved tracking precision over a Leap Motion positioned on a table top.
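    The fusion of optical and inertial tracking can be illustrated with a standard complementary filter: trust the fast inertial estimate between frames, and correct its drift with the slower optical reading when the hand is in view. The coefficient and structure below are assumptions for illustration, not Leimu's published algorithm.

```python
# A standard complementary-filter sketch of optical+inertial fusion
# (assumed coefficient; not Leimu's published algorithm): integrate the
# inertial estimate, nudging toward the optical reading when available.

def fuse(optical: float | None, inertial_delta: float, prev: float,
         alpha: float = 0.98) -> float:
    """Blend one axis of orientation: apply the inertial change since
    the last frame, correcting toward the optical reading if valid."""
    predicted = prev + inertial_delta
    if optical is None:                  # hand outside camera view
        return predicted
    return alpha * predicted + (1 - alpha) * optical

angle = 0.0
for opt, gyro_delta in [(0.0, 0.5), (None, 0.5), (1.2, 0.5)]:
    angle = fuse(opt, gyro_delta, angle)
    print(round(angle, 3))
```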