Anna Xambó
  • School of Music
    840 McMillan Street
    Atlanta, Georgia 30332-0456 U.S.A.


  • Anna Xambó, Ph.D., is a researcher and musician with a background in computing, digital humanities and digital arts.
Graphic and multimedia design are clearly undergoing rapid development. The market offers a wide variety of applications to meet the needs of each user.
The Manual Imprescindible de Herramientas de diseño digital (Essential Manual of Digital Design Tools) offers the opportunity to acquire graphic and multimedia design skills using the most advanced software.
The book is divided into two main parts. The first, devoted to graphic design, introduces vector design with FreeHand, photo retouching with Photoshop, and the optimisation of static graphics for the Web with Fireworks. The second part, focused on multimedia design, covers digital audio editing with Sonic Foundry Sound Forge, the editing of moving images with Premiere, and the creation of Web animations with Flash.
Given its practical orientation, the text contains numerous exercises, approached both technically and creatively. The manual is recommended not only because it teaches the basics of six market-leading graphic and multimedia design programs, but also because it presents fundamental design concepts, knowledge of which allows readers to tackle creative projects with confidence.
The recent increase in the accessibility and size of personal and crowdsourced digital sound collections has brought about a valuable resource for music creation. Finding and retrieving relevant sounds in performance leads to challenges that can be approached using music information retrieval (MIR). In this paper, we explore the use of MIR to retrieve and repurpose sounds in musical live coding. We present a live coding system built on SuperCollider that enables the use of audio content from online Creative Commons (CC) sound databases such as Freesound or from personal sound databases. The novelty of our approach lies in exploiting high-level MIR methods (e.g., query by pitch or rhythmic cues) using live coding techniques applied to sounds. We demonstrate its potential through reflections on an illustrative case study and feedback from four expert users. The users tried the system with either a personal database or a crowdsourced database and reported its potential for tailoring the tool to their own creative workflows.
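
As a rough illustration of the retrieval idea described above (not the authors' SuperCollider implementation), the following Python sketch filters a pre-analysed sound collection by pitch or tempo proximity; the database entries, field names and tolerances are hypothetical placeholders.

    # Hypothetical pre-analysed collection; in the actual system the audio
    # content comes from Freesound or a personal sound database.
    SOUND_DB = [
        {"id": 101, "name": "bell_hit.wav",  "pitch_hz": 440.0, "bpm": None},
        {"id": 102, "name": "kick_loop.wav", "pitch_hz": None,  "bpm": 120.0},
        {"id": 103, "name": "flute_a4.wav",  "pitch_hz": 442.5, "bpm": None},
    ]

    def query_by_pitch(db, target_hz, tolerance_hz=10.0):
        """Return sounds whose estimated pitch lies within tolerance of the target."""
        return [s for s in db
                if s["pitch_hz"] is not None
                and abs(s["pitch_hz"] - target_hz) <= tolerance_hz]

    def query_by_bpm(db, target_bpm, tolerance=5.0):
        """Return sounds whose estimated tempo lies within tolerance of the target."""
        return [s for s in db
                if s["bpm"] is not None and abs(s["bpm"] - target_bpm) <= tolerance]

    print(query_by_pitch(SOUND_DB, 440.0))  # query by a pitch cue
    print(query_by_bpm(SOUND_DB, 120.0))    # query by a rhythmic cue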
In recent years, there has been an increase in awareness of the underrepresentation of women in the sound and music computing fields. The New Interfaces for Musical Expression (NIME) conference is not an exception, with a number of open questions remaining around the issue. In the present paper, we study the presence and evolution over time of women authors in NIME from the beginning of the conference in 2001 until 2017. We discuss the consequences of this gender imbalance and potential solutions by summarizing the actions taken by a number of worldwide initiatives that have put effort into making women's work visible in our field, with a particular emphasis on Women in Music Tech (WiMT), a student-led organization that aims to encourage more women to join music technology, as a case study. We conclude with a hope for an improvement in the representation of women in NIME by presenting WiNIME, a public online database that details the women authors who have published at NIME.
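
The per-year breakdown discussed above could be computed along the following lines; this is a sketch only, assuming a CSV export of authorship records with hypothetical "year" and "gender" columns rather than the actual WiNIME schema.

    import csv
    from collections import defaultdict

    def women_author_share(csv_path):
        """Proportion of women among authorship records, per year."""
        totals, women = defaultdict(int), defaultdict(int)
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                year = int(row["year"])
                totals[year] += 1
                if row["gender"].strip().lower() == "female":
                    women[year] += 1
        return {year: women[year] / totals[year] for year in sorted(totals)}

    # Example: women_author_share("nime_authors_2001_2017.csv")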
In this paper we examine methodological innovation in the social sciences through a focus on researching the body in digital environments. There are two strands to our argument as to why this is a useful site to explore methodological innovation in the social sciences. First, researching the body in digital environments places new methodological demands on social science. Second, as an area of interest at the intersection of the social sciences and the arts, it provides a focus for exploring how social science innovation can be informed by engagement with the arts, in this instance how the arts work with the body in digital environments and take up social science ideas in novel ways. We argue that social science engagement with the arts and the relatively unmapped terrain of the body in digital environments has the potential to open up spaces for innovative social science questions and methods: spaces, questions and methods that have potential for more general social science methodological innovation. We draw on the findings of the Methodological Innovation in Digital Arts and Social Sciences (MIDAS) project, a multi-site ethnography of the research ecologies of the social sciences and the arts related to the body in digital environments. We propose a continuum of methodological innovation that attends to how methods are moved across research contexts and disciplines, in this instance the social sciences and the digital arts. We illustrate and discuss the innovative potential of expanding and re-situating methods across the social sciences and the arts, the transfer of methods and concepts across disciplinary borders and the interdisciplinary generation of new methods. We discuss the catalysts and challenges for social science methodological innovation in relation to the digital and the arts, with attention to how the social sciences might engage with the arts towards innovative research.
Collaborative music live coding (CMLC) approaches the music improvisation practice of live coding as a collaborative activity. In network music, co-located and remote interactions are possible, and communication is typically supported by the use of a chat window. However, paying attention to simultaneous multi-user actions, such as chat texts and code, can be demanding to follow. In this paper we explore co-located and remote CMLC using the live coding environment and pedagogical tool EarSketch. In particular, we examine the mechanism of turn-taking and the use of a small set of semantic hashtags in online chatting, using an autoethnographic approach of duo and trio live coding. This approach is inspired by (1) the practice of pair programming, a team-based strategy for efficiently solving computational problems, and (2) the language used in short messaging service (SMS) texting and social media. The results from an online survey with six practitioners in live coding and collaboration complement the autoethnographic findings and point to education as the most suitable domain for this approach to CMLC. We conclude by discussing the challenges and opportunities of turn-taking and the use of semantic hashtags, focusing on educational settings.
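
A minimal sketch of the chat-hashtag mechanism, assuming an illustrative hashtag vocabulary rather than the exact set used in the study:

    import re

    # Illustrative coordination hashtags; the actual vocabulary in the study may differ.
    HASHTAG_ACTIONS = {
        "#yourturn": "hand control over to the next live coder",
        "#done":     "mark the current edit as finished",
        "#help":     "request assistance from a partner",
    }

    def extract_hashtags(message):
        """Return the known coordination hashtags found in a chat message."""
        tags = re.findall(r"#\w+", message.lower())
        return [tag for tag in tags if tag in HASHTAG_ACTIONS]

    print(extract_hashtags("added a drum fill on track 2 #done #yourturn"))
    # -> ['#done', '#yourturn']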
This performance invites the audience to participate in an immersive experience using their mobile devices. The aim is to capture their actions on a digital painting inspired by Jackson Pollock's action painting technique. The audience is connected to a wireless network and a Web Audio application that recognizes a number of gestures through the mobile accelerometer sensor, which trigger different sounds. Gestures will be recognized and mapped to a digital canvas. A set of loudspeakers will complement the audience's actions with ambient sounds. The performance explores audio spatialization using both loudspeakers and mobile phone speakers, which, combined with the digital painting, provides an immersive audiovisual experience. The final digital canvas will be available online as a memory of the performance.
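
The gesture-to-sound mapping could work roughly as sketched below; this is a hypothetical, simplified threshold classifier written in Python for illustration (the piece itself runs as a browser-based Web Audio application), and the gesture names and thresholds are assumptions.

    import math

    def classify_gesture(ax, ay, az, shake_g=2.5, tilt_g=0.6):
        """Map one accelerometer reading (in g) to a coarse gesture label."""
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > shake_g:
            return "shake"   # e.g. trigger a percussive splash
        if abs(ax) > tilt_g:
            return "tilt"    # e.g. sweep a sustained texture
        return "rest"        # no sound triggered

    print(classify_gesture(0.1, 0.0, 1.0))   # -> 'rest'
    print(classify_gesture(2.0, 1.8, 1.2))   # -> 'shake'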
Co-located collaborative live coding is a potential approach to network music and to the music improvisation practice known as live coding. A common strategy to support communication between live coders and the audience is the use of a chat window. However, paying attention to simultaneous multi-user actions, such as chat texts and code, can be too demanding to follow. In this paper, we explore collaborative music live coding (CMLC) using the live coding environment and pedagogical tool EarSketch. In particular, we examine the use of turn-taking and a customized chat window inspired by the practice of pair programming, a team-based strategy for efficiently solving computational problems. Our approach to CMLC also aims at making this practice easier for the audience to understand. We conclude by discussing the benefits of this approach in both performance and educational settings.
In this paper we present a preliminary design and initial assessment of a computational musical tabletop exhibit for children and teenagers at the Museum of Design Atlanta (MODA). We explore how participatory workshops can promote hands-on learning of computational concepts through making music. We also use a hands-on approach to assess informal learning based on maker interviews. Maker interviews serve to subjectively capture impromptu reflections of the visitors' achievements from casual interactions with the exhibit. Findings from our workshops and preliminary assessment indicate that experiencing and taking ownership of tangible programming on a musical tabletop is related to: ownership of failure, ownership through collaboration, ownership of the design, and ownership of code. Overall, this work suggests how to better support ownership of computational concepts in tangible programming, which can inform how to design self-learning experiences at the museum, and future trajectories between the museum and the school or home.
Co-creation between a human agent (HA) and a virtual agent (VA) is an approach to collaboration that has been explored in different creative domains, particularly in computer music. With a few exceptions, there is little research on the use of virtual agents in collaborative music live coding (CMLC), a network music improvisational practice. This paper considers the benefits of CMLC both in education and in performance involving human agents, with or without virtual agents. We reflect on our previous work on, and lessons learned from, two studies of collaboration and live coding using EarSketch, an educational online platform for learning music via code based on audio clips. We speculate on future scenarios; in particular, we envision a virtual agent that can help students to improve their programming and musical skills, and can help musicians to exploit computational creativity applied to music.
The EarSketch computer science learning environment and curriculum (http://earsketch.gatech.edu) seeks to increase and broaden participation in computing using a STEAM (STEM + Arts) approach. EarSketch creates an authentic learning environment in that it is both personally meaningful and industry relevant in terms of its STEM component (computing) and its artistic domain (music remixing). Students learn to code in JavaScript or Python, tackling learning objectives in the Computer Science Principles curricular framework as they simultaneously learn core concepts in music technology. They create music through code by uploading their own audio content or remixing loops in popular genres created by music industry veterans. No prior experience in music or computer science is required. EarSketch is entirely browser-based and free.
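
For context, a student script in EarSketch typically looks like the short Python example below; the sound-clip constant is illustrative, and the script is meant to run inside the EarSketch browser environment rather than a local interpreter.

    from earsketch import *

    init()                                  # start a new EarSketch project
    setTempo(120)                           # tempo in beats per minute
    fitMedia(HOUSE_BREAKBEAT_003, 1, 1, 5)  # place a loop on track 1, measures 1-5
    finish()                                # render the composition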
This article presents a video-based field study of the Reactable, a tabletop tangible user interface (TUI) for music performance, in a hands-on science centre. The goal was to investigate visitors’ social interactions in a public setting. We describe liminality and cross-group interaction, both synchronous, with fluid transitions and overlaps in use between groups, and asynchronous. Our findings indicate the importance of: (i) facilitating smooth transitions and overlaps between groups and (ii) supporting not only synchronous but also asynchronous group interaction. We discuss the lessons learned on how best to enable liminal situations in the design of interactive tabletops and TUIs for social interaction, and particularly collaborative tangible music, in public museum settings.

Presentation of a video-based field study describing cross-group interaction.

Description of exemplar vignettes of video for illustrating liminal situations in social interaction.

Discussion of the nature of different levels of cross-group interaction and their relevance for social interaction and collaboration.

Lessons learned in how best to enable liminal situations in the design of collaborative tangible user interfaces for promoting social interaction and collaboration.
One approach to teaching computer science (CS) in high schools is to use EarSketch, a free online tool for teaching CS concepts while making music. In this demonstration we present the potential of teaching music information retrieval (MIR) concepts using EarSketch. The aim is twofold: to discuss the benefits of introducing MIR concepts in the classroom and to shed light on how MIR concepts can be gently introduced into a CS curriculum. We conclude by identifying the advantages of teaching MIR in the classroom and pointing to future directions for research.
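
One way such a lesson could look, sketched under the assumption that the EarSketch Python API exposes analyze() with a SPECTRAL_CENTROID feature (the clip constants are illustrative): compare the brightness of two clips and keep the brighter one.

    from earsketch import *

    init()
    setTempo(100)

    clip_a = HOUSE_BREAKBEAT_003
    clip_b = TECHNO_SYNTHPLUCK_001

    # Spectral centroid is a rough proxy for perceived brightness.
    if analyze(clip_a, SPECTRAL_CENTROID) > analyze(clip_b, SPECTRAL_CENTROID):
        brighter = clip_a
    else:
        brighter = clip_b

    fitMedia(brighter, 1, 1, 5)  # place the brighter clip on track 1, measures 1-5
    finish()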
This paper focuses on the potential of collaborative live coding in educational settings. In particular, it draws from our experience with EarSketch, a free online platform for algorithmic composition and computational music remixing that allows students to learn Computer Science Principles (CSP) by making music using either Python or JavaScript. We argue that collaborative live coding is a promising approach to learning CSP through computer music in the classroom. We draw on interviews with teachers and observations in schools. We discuss how collaboration can be better supported in educational settings when learning CSP using EarSketch, and the challenges and potential for learning music and code using a computer-supported collaborative environment.
There has been little research on how interactions with tabletop and Tangible User Interfaces (TUIs) by groups of users change over time. In this article, we investigate the challenges and opportunities of a tabletop tangible interface based on constructive building blocks. We describe a long-term lab study of groups of expert musicians improvising with the Reactable, a commercial tabletop TUI for music performance. We examine interaction, focusing on interface, tangible, musical, and social phenomena. Our findings reveal practice-based learning between peers in situated contexts, and new forms of participation, all of which is facilitated by the Reactable’s tangible interface when compared to traditional musical ensembles. We summarise our findings as a set of design considerations and conclude that construction processes on interactive tabletops support learning by doing and peer learning, which can inform constructivist approaches to learning with technology.
The third wave in HCI reveals how embodiment matters in post-WIMP computing systems. Yet it is still unclear what methods provide effective insight into the nature of embodiment in HCI in relation to both design and use. This paper presents work in progress on MIDAS, a cross-disciplinary methodological research project on embodiment and technology exploring synergies across the fields of Digital Arts and Social Sciences. We argue that exploiting these synergies can contribute towards an integrated, innovative and progressive framework for understanding digital body interactions. We introduce the 5 ongoing case studies that inform MIDAS, outline the project's use of multimodal ethnography, and discuss two emerging themes: "conceptualising the body" and "the sensory", which will contribute to a methodological framework for informing future design, analysis and evaluation of HCI systems.
Co-located tabletop tangible user interfaces (TUIs) for music performance are known for promoting multi-player collaboration with a shared interface, yet it is still unclear how to best support the awareness of the workspace in terms of understanding individual actions and the other group members' actions in parallel. In this paper, we investigate the effects of providing auditory feedback using ambisonics spatialisation, aimed at informing users about the location of the tangibles on the tabletop surface, with groups of mixed musical backgrounds. Participants were asked to improvise music on "SoundXY4: The Art of Noise", a tabletop system that includes sound samples inspired by Russolo's taxonomy of noises. We compared spatialisation vs. no-spatialisation conditions, and findings suggest that, when using spatialisation, there was a clearer workspace awareness and a greater engagement in the musical activity as an immersive experience.
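
The core idea of position-based spatialisation can be written down compactly; the sketch below is not the SoundXY4 implementation, just an illustration of mapping a tangible's normalised (x, y) position on the table to an azimuth that an ambisonics panner could use.

    import math

    def tangible_azimuth(x, y):
        """Azimuth in degrees (0 = front, positive = clockwise) of a tangible
        at normalised coordinates (x, y), measured from the table centre."""
        dx, dy = x - 0.5, 0.5 - y   # shift origin to the centre of the surface
        return math.degrees(math.atan2(dx, dy))

    print(round(tangible_azimuth(0.9, 0.5)))  # right edge  -> 90
    print(round(tangible_azimuth(0.5, 0.9)))  # bottom edge -> 180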
There is little evaluation of musical tabletops for music performance, and current approaches tend to give little consideration to social interaction. However, in collaborative settings, social aspects such as coordination, communication, or musical engagement between collaborators are fundamental for a successful performance. After an overview of the use of video in music interaction research as a convenient method for understanding interaction between people and technology, we present three empirical examples of approaches to video analysis applied to musical tabletops: first, an exploratory approach to give informal insight towards understanding collaboration in new situations; second, a participatory design approach aimed at improving an interface design by getting feedback from the user experience; third, a quantitative approach towards understanding collaboration by considering frequencies of interaction events. The aim of this chapter is to provide useful insight into how to evaluate musical tabletops using video as a data source. Furthermore, this overview can shed light on understanding shareable interfaces in a wider HCI context of group creativity and multi-player interaction.
This position paper summarises some themes encountered when analysing video data in the context of music performance with interactive tabletops. It presents methodological approaches and coding schemes used for a set of experiments on musical tabletops and collaboration. Finally, it outlines an initial taxonomy based on the outcomes of the projects introduced, which can be used for video annotation of collaborative music interaction.
Since the development of sound recording technologies, the palette of sound timbres available for music creation has been extended far beyond traditional musical instruments. The organization and categorization of timbre has been a common endeavor. The availability of large databases of sound clips provides an opportunity for obtaining data-driven timbre categorizations via content-based clustering. In this article we describe an experiment aimed at understanding what factors influence the process of learning a given clustering of sound samples. We clustered a large database of short sound clips, and analyzed the success of participants in assigning sounds to the “correct” clusters after listening to a few examples of each. The results of the experiment suggest a number of relevant factors related both to the strategies followed by users and to the quality measures of the clustering solution, which can guide the design of creative applications based on audio clip clustering.
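
A generic version of such a content-based clustering pipeline (not necessarily the one used in the study) could summarise each clip with mean MFCCs and cluster the collection with k-means; this sketch assumes the librosa and scikit-learn libraries.

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def clip_features(path, n_mfcc=13):
        """Mean MFCC vector as a crude timbre descriptor for one clip."""
        y, sr = librosa.load(path, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)

    def cluster_clips(paths, n_clusters=8):
        """Return one cluster label per clip path."""
        features = np.vstack([clip_features(p) for p in paths])
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

    # Example: labels = cluster_clips(["clip1.wav", "clip2.wav", ...], n_clusters=4)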
In this paper we give an analysis of the literature on a set of problems that can arise when undertaking the interaction design of multi-touch applications for collaborative real-time music activities on multi-touch technologies (e.g. smartphones, tablets, and interactive tabletops). Each problem is described, and a candidate design pattern (CDP) is suggested in the form of a short sentence and a diagram, an approach inspired by Christopher Alexander’s A Pattern Language. These solutions relate to the fundamental collaborative principles of democratic relationships, identities and collective interplay. We believe that this approach might disseminate forms of best design practice for collaborative music applications, in order to produce real-time musical systems which are collaborative and expressive.
The amount of digital music has grown at an unprecedented rate over recent years and requires the development of effective methods for search and retrieval. In particular, content-based preference elicitation for music recommendation is a challenging problem that is addressed in this paper. We present a system which automatically generates recommendations and visualizes a user's musical preferences, given her/his accounts on popular online music services. Using these services, the system retrieves a set of tracks preferred by the user, and further computes a semantic description of musical preferences based on raw audio information. For the audio analysis we used the capabilities of the Canoris API. Thereafter, the system generates music recommendations, using a semantic music similarity measure, and a visualization of the user's preferences, mapping semantic descriptors to visual elements.
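
The recommendation step rests on a semantic similarity measure between descriptor vectors; a minimal sketch of that step is given below (the original audio analysis relied on the Canoris API and is not reproduced here, and the descriptor names and values are hypothetical).

    import math

    def cosine(u, v):
        """Cosine similarity between two descriptor vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # User profile: mean of the descriptor vectors of the user's preferred tracks,
    # e.g. dimensions [danceability, acousticness, brightness].
    preferred = [[0.8, 0.2, 0.6], [0.7, 0.3, 0.5]]
    profile = [sum(col) / len(col) for col in zip(*preferred)]

    # Rank candidate tracks by similarity to the profile.
    candidates = {"track_x": [0.75, 0.25, 0.55], "track_y": [0.1, 0.9, 0.2]}
    ranked = sorted(candidates, key=lambda t: cosine(profile, candidates[t]), reverse=True)
    print(ranked)  # -> ['track_x', 'track_y']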
With the advent of tabletop interaction, collaborative activities are better supported than they are on single-user PCs, because there is a physically shareable space and interaction with digital data is more embodied and social. In sound and music computing, collaborative music making has traditionally been done using interconnected networks, but on separate computers. Musical tabletops introduce opportunities for playing in collaboration by physically sharing the same musical interface. However, few tabletop musical interfaces exploit this collaborative potential (e.g. the Reactable). We are interested in looking into how collaboration can be fully supported by means of musical tabletops for music performance, in contrast with more traditional settings. We are also looking at whether collective musical engagement can be enhanced by providing interfaces better suited to collaboration. In HCI and software development, we find an iterative process of design and evaluation, where evaluation allows us to identify key issues that can be addressed in the next design iteration of the system. Using a similar iterative approach, we plan to design and evaluate several tabletop musical interfaces. The aim is to understand what design choices can enhance and enrich collaboration and collective musical engagement on these systems. In this paper, we explain the evaluation methodologies we have undertaken in three preliminary pilot studies, and the lessons we have learned. Initial findings indicate that evaluating tabletop musical interfaces is a complex endeavour which requires an approach as close as possible to a real context, combined with an interdisciplinary perspective provided by interaction analysis techniques.
In this paper, we describe a playable musical interface for tablets and multi-touch tables. The interface is a generalized keyboard, inspired by the Thummer, and consists of an array of virtual buttons. On a generalized keyboard, any given interval always has the same shape (and therefore fingering); furthermore, the fingering is consistent over a broad range of tunings. Compared to a physical generalized keyboard, a virtual version has some advantages—notably, that the spatial location of the buttons can be transformed by shears and rotations, and their colouring can be changed to reflect their musical function in different scales.

We exploit these flexibilities to facilitate the playing not just of conventional Western scales but also a wide variety of microtonal generalized diatonic scales known as moment of symmetry, or well-formed, scales. A user can choose such a scale, and the buttons are automatically arranged so their spatial height corresponds to their pitch, and buttons an octave apart are always vertically above each other. Furthermore, the most numerous scale steps run along rows, while buttons within the scale are light-coloured, and those outside are dark or removed.

These features can aid beginners; for example, the chosen scale might be the diatonic, in which case the piano’s familiar white and black colouring of the seven diatonic and five chromatic notes is used, but only one scale fingering need ever be learned (unlike a piano where every key needs a different fingering). Alternatively, it can assist advanced composers and musicians seeking to explore the universe of unfamiliar microtonal scales.
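
The consistent-fingering property described above follows from each button's pitch being a fixed linear combination of its grid coordinates; the Python sketch below illustrates this for an assumed layout generated by the octave and the fifth (it is not the authors' implementation, and a different tuning is obtained simply by changing the generator sizes).

    OCTAVE_CENTS = 1200.0
    FIFTH_CENTS = 700.0   # retune this one value to retune the whole keyboard

    def button_pitch_cents(col, row, base_cents=0.0):
        """Pitch of the button at (col, row), in cents above the base pitch."""
        return base_cents + col * FIFTH_CENTS + row * OCTAVE_CENTS

    # An interval depends only on the relative position of two buttons, which is
    # why every interval has the same shape (and fingering) across the surface.
    print(button_pitch_cents(1, 0) - button_pitch_cents(0, 0))    # -> 700.0 (a fifth)
    print(button_pitch_cents(3, -1) - button_pitch_cents(2, -1))  # -> 700.0 (same shape)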
When electronic musicians compose collaboratively, they typically use their own single-user musical controllers. It may, therefore, be useful to develop novel controllers that support collaborative workflows and democratic principles. After describing the design principles for developing such controllers, we present TOUCHtr4ck, a prototype multi-touch system designed to facilitate such democratic relationships. Informal testing has revealed that this approach does facilitate democratic and collaborative music making, and can produce creative musical results.
The music we like (i.e. our musical preferences) encodes and communicates key information about ourselves. Depicting such preferences in a condensed and easily understandable way is very appealing, especially considering the current trends in social network communication. In this paper we propose a method to automatically generate, given a provided set of preferred music tracks, an iconic representation of a user's musical preferences - the Musical Avatar. Starting from the raw audio signal we first compute over 60 low-level audio features. Then, by applying pattern recognition methods, we infer a set of semantic descriptors for each track in the collection. Next, we summarize these track-level semantic descriptors, obtaining a user profile. Finally, we map this collection-wise description to the visual domain by creating a humanoid cartoony character that represents the user's musical preferences. We performed a proof-of-concept evaluation of the proposed method on 11 subjects with promising results. The analysis of the users' evaluations shows a clear preference for avatars generated by the proposed semantic descriptors over avatars derived from neutral or randomly generated values. We also found a general agreement on the representativeness of the users' musical preferences via the proposed visualization strategy.
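
The final descriptor-to-visual mapping could be as simple as the hypothetical sketch below; the descriptor names, thresholds and avatar traits are invented for illustration and are not those of the actual Musical Avatar.

    def avatar_attributes(profile):
        """Map a collection-level descriptor profile (values in 0..1) to avatar traits."""
        return {
            "hair_style": "mohawk" if profile["aggressiveness"] > 0.6 else "tidy",
            "outfit": "club wear" if profile["danceability"] > 0.5 else "casual",
            "instrument": "acoustic guitar" if profile["acousticness"] > 0.5 else "synth",
        }

    print(avatar_attributes({"aggressiveness": 0.2, "danceability": 0.8, "acousticness": 0.3}))
    # -> {'hair_style': 'tidy', 'outfit': 'club wear', 'instrument': 'synth'}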
A range of systems exist for collaborative music making on multi-touch surfaces. Some of them have been highly successful, but currently there is no systematic way of designing them, to maximise collaboration for a particular user group. We are particularly interested in systems that will engage novices and experts. We designed a simple application in an initial attempt to clearly analyse some of the issues. Our application allows groups of users to express themselves in collaborative music making using pre-composed materials. User studies were video recorded and analysed using two techniques derived from Grounded Theory and Content Analysis. A questionnaire was also conducted and evaluated. Findings suggest that the application affords engaging interaction. Enhancements for collaborative music making on multi-touch surfaces are discussed. Finally, future work on the prototype is proposed to maximise engagement.
We present an audio waveform editor that can be operated in real time through a tabletop interface. The system combines multi-touch and tangible interaction techniques in order to implement the metaphor of a toolkit that allows direct manipulation of a sound sample. The resulting instrument is well suited for live performance based on evolving loops.