
Music Representation for Analysis using Data Mining
C. Halkiopoulos
Dept. of Mathematics,
University of Patras Artificial Intelligence Research Center (UPAIRC),
University of Patras,
26500 Rio, Patras, Greece,
e-mail address: halkion@upatras.gr
Basilis Boutsinas
Dept. of Business Administration,
UPAIRC,
University of Patras,
May 13, 2009

Abstract
Music analysis, i.e. using computers to analyze fully notated pieces of
musical score, is one of the most important research issues in computer
music. Machine learning has played a crucial role in computer music
almost since its beginning. Recently, research in the field has focused on
music mining. Data Mining is an emerging knowledge discovery process
of extracting previously unknown, actionable information from very large
scientific and commercial databases. Classification, clustering and association
are the most well-known data mining techniques. Data mining
techniques are good candidates for music analysis. However, a proper
music representation scheme is a prerequisite for their application. In
this paper, we propose such a scheme for representing monophonic music
as traditional data sets suitable for common data mining algorithms.
We also present experimental results that demonstrate how the proposed
representation technique is useful and helpful for analyzing and understanding music.

Keywords: Music representation, Data Mining

1 Introduction
In this paper, music analysis is considered as the use of computers to analyze fully
notated pieces of musical score. Alternatively, music analysis could be considered
as the use of computers to analyze performed and recorded music. Of course,
the latter can be transformed into the former (the Automatic Music Transcription problem).

Artificial Intelligence was related to music analysis within the computer
music field from early on. Many AI methodologies have been applied to music analysis,
such as mathematical models, genetic algorithms, neural networks, hybrid systems and
symbolic machine learning systems. Machine learning tasks, like classification, prediction,
forecasting, and the extraction of patterns and regularities from data, were used early on
both in music practice and in music research [32]. Example applications in
music practice are reactive instruments, artificial performers that interact with
human performers, adaptive music editing or composition systems, trainable optical
notation recognition systems, automatic classification and characterization
of styles, authors or music performers, etc. In music research, musicological
investigations are based primarily on the study of corpora of existing music
data. Thus, especially in music research, the data analysis and knowledge discovery
capabilities of machine learning methods have proved very promising, supporting
the human analyst in finding patterns and regularities in collections of real data.
In music, as in most other fields of application, machine learning is adopted
through inductive learning algorithms, where concepts are learnt from positive
and negative examples. Musical scores, as examples, have also been used as input
to artificial neural networks, in order to learn habitual characteristics within
compositions [21]. There are also some attempts based on analytical learning,
such as Explanation-Based Learning [33], where the learning system is provided with
music knowledge in order to guide the learning process.
Moreover, there have been numerous attempts to describe music in more or
less grammatical terms. The idea common to all of these approaches is that
in music a grammatical formalism may be used to give a finite (and therefore
manageable) description of an infinite (and therefore intractable) set of structures.
However, in such cases, there are arguments against using the notion of
meaning/semantics in music, as linguists and logicians do for their fields (e.g.
[36]).
Data Mining is an emerging knowledge discovery process of extracting previously
unknown, actionable information from very large scientific and commercial
databases. It is driven by the explosive growth of such databases. Usually, a
data mining process extracts rules by processing high-dimensional categorical
and/or numerical data. Classification, clustering and association are the most
well-known data mining tasks.
Classification is one of the most popular data mining tasks. Classification
aims at extracting knowledge which can be used to classify data into predefined
classes, described by a set of attributes. The extracted knowledge can be
represented using various schemas. Decision trees, “if-then” rules and neural
networks are the most popular such schemas. A lot of algorithms have been
proposed in the literature for extracting classification rules from large relational
databases, such as symbolic learning algorithms including decision tree algorithms
(e.g. C4.5 [23]) and rule-based algorithms (e.g. CN2 [8]), connectionist
learning algorithms (e.g. back-propagation networks [27]), instance-based algorithms
(e.g. PEBLS [9]) and hybrid algorithms (e.g. [3]).
Association rules can be used to represent frequent patterns in data, in the
form of dependencies among concepts-attributes. In this paper, we consider the
special case known as the market basket problem, where concepts-attributes
represent products and the initial database is a set of customer purchases
(transactions). This particular problem is well studied in data mining.
In this paper, we consider association rules of the form “90% of melodies that
include G also include E” (boolean association rules) (e.g. [1, 4]). Formally, an
association rule is a rule of the form X ⇒ Y, where X and Y are called, respectively,
the antecedent and the consequent of the rule, X, Y ⊂ I = {i_1, i_2, ..., i_j},
X ∩ Y = ∅, and each i_k, 1 ≤ k ≤ j, is an item in the transaction database D. The
informative power (named interestingness) of each association rule is measured
by two indexes: the Support, which measures the proportion of transactions in
D containing both X and Y, and the Confidence, which measures the conditional
probability of the consequent given the antecedent.
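As a concrete illustration of these two indexes, the following minimal sketch computes the support and confidence of a boolean association rule such as G ⇒ E over a handful of melody transactions. It is not part of the system described later; the item names and transactions are hypothetical.

    import java.util.*;

    // Minimal sketch: support and confidence of a boolean association rule X => Y
    // over a set of transactions. The melody transactions below are hypothetical.
    public class RuleInterestingness {

        // Proportion of transactions containing every item in 'items'.
        static double support(List<Set<String>> transactions, Set<String> items) {
            long hits = transactions.stream().filter(t -> t.containsAll(items)).count();
            return (double) hits / transactions.size();
        }

        public static void main(String[] args) {
            List<Set<String>> melodies = List.of(
                    Set.of("C", "E", "G"),
                    Set.of("G", "E", "A"),
                    Set.of("G", "B", "D"),
                    Set.of("C", "E", "F"));

            Set<String> x = Set.of("G");          // antecedent
            Set<String> y = Set.of("E");          // consequent
            Set<String> xy = new HashSet<>(x);
            xy.addAll(y);

            double suppXY = support(melodies, xy);        // proportion containing X and Y
            double conf = suppXY / support(melodies, x);  // conditional probability of Y given X
            System.out.printf("support=%.2f confidence=%.2f%n", suppXY, conf);
        }
    }

For the four hypothetical transactions above, the rule G ⇒ E obtains support 0.50 and confidence 0.67.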
Clustering involves finding a specific number of subgroups (k) within a set of
n observations (data points/objects), each described by d attributes. A clustering
algorithm generates cluster descriptions and assigns each observation either to exactly one
cluster (exclusive assignment) or in part to several clusters (partial assignment).
Throughout this paper, we shall refer to the output of a clustering algorithm (e.g.
the medoids of the clusters) as clustering rules. Clustering methods have been
widely studied in various scientific fields, including Machine Learning, Neural
Networks and Statistics. Clustering algorithms can be classified as either hierarchical
or iterative (partitional, density search, factor analytic or clumping, and
graph theoretic). Complete-link, average-link and single-link algorithms [11] are
some popular hierarchical clustering algorithms. K-means [20], along with its
variants (e.g. [19, 31]), is a popular partitional clustering algorithm.
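To make the notion of a clustering rule concrete, the sketch below assigns a single observation to its nearest medoid, i.e. an exclusive assignment. The binary vectors anticipate the fragment representation introduced in Section 2, while the Hamming distance, the medoids and the observation itself are illustrative assumptions rather than details given in this paper.

    // Minimal sketch of exclusive assignment: an observation (a binary fragment
    // vector) is assigned to its nearest medoid under Hamming distance.
    // The medoids and the observation are hypothetical.
    public class MedoidAssignment {

        static int hamming(int[] a, int[] b) {
            int d = 0;
            for (int i = 0; i < a.length; i++) if (a[i] != b[i]) d++;
            return d;
        }

        static int nearestMedoid(int[] x, int[][] medoids) {
            int best = 0;
            for (int k = 1; k < medoids.length; k++)
                if (hamming(x, medoids[k]) < hamming(x, medoids[best])) best = k;
            return best;
        }

        public static void main(String[] args) {
            int[][] medoids = { {1, 0, 1, 0, 0}, {0, 1, 0, 1, 1} };   // cluster descriptions
            int[] observation = {1, 0, 0, 0, 1};
            System.out.println("assigned to cluster " + nearestMedoid(observation, medoids));
        }
    }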
In the literature, the term music data mining has been related to the application
of machine learning algorithms [25, 35]. Machine learning and data
mining algorithms differ in their input data. Machine learning algorithms use a
small set of carefully selected data and may have the ability to interact with
their environment, e.g. they could request new examples. Input data for data
mining algorithms are large databases (usually noisy and incomplete), and data
mining algorithms cannot manipulate their environment.
In general, the application of artificial intelligence learning techniques to
music analysis is based on either a Gestalt-based approach, where a predefined
set of rules or principles is used, or on a memory-based approach, where a corpus
of grouping structures of previously encountered musical pieces is used.
In listening to a piece of music, the human perceptual system segments the
sequence of notes into groups or phrases that form a grouping structure for the
whole piece. However, several different grouping structures may be compatible
with a melody, i.e. a sequence of notes. Gestalt principles of proximity and
similarity and the principle of melodic parallelism could be used for segmenting
the melodies. If we wish to propose a memory-based approach to music as a
serious alternative to a Gestalt-based approach, we should address the question
of how any structure can be acquired if we do not have any structured pieces in
our corpus to start with.
Data mining in music analysis aims at detecting, discovering, extracting or
inducing melodic or harmonic (sequential) patterns in given sets of composed
or improvised works (see [25] for a definition). Apart from mining music patterns
forming traditional data sets, mining music patterns represented as
data streams has also been proposed lately [13].
In this paper we adopt a memory-based approach to music analysis, exploiting
the benefits of data mining techniques. The effectiveness of data mining
techniques does not rely heavily on the selection of input structure pieces.
Thus, data mining techniques are good candidates for music analysis. However,
a proper music representation scheme is a prerequisite for their application. In
this paper, we propose such a scheme for representing monophonic music as
traditional data sets, suitable for common data mining algorithms. We also
demonstrate both its efficiency and accuracy.
Little work has been done on the use of data mining association, classification
and clustering algorithms in music analysis. In [28] the application of association
rule mining is presented in order to find out the syntactic description of music
style. The authors chose to represent chords in the whole melody or in the chorus
as a music feature that influences music style. Items in the input data set are
formed by chords, bi-grams (adjacent pairs of chords) or n-grams
(sequences of chords). Moreover, in [10] artificial neural networks and linear and
Bayesian classifiers are used for the same problem. Also, the combination of
rules extracted by an ensemble of simple classifiers is used in [34], in order to
extract rules covering expressive music performance. In [22] a hierarchical
clustering algorithm is used for clustering music based on key. In [17] clustering
is applied to 88 Chinese Shanxi melodies from the city of Hequ and 30 German
children's songs from the Essen database, in order to extract abstract motive sequences.
Finally, clustering algorithms have also been used for music performances.
In the rest of the paper we first describe the proposed music representation
scheme (Section 2), then we present experimental results that demonstrate how
the proposed scheme is useful and helpful for music analysis (Section 3), and
finally we conclude (Section 4).

2 The proposed music representation scheme


Melodies should be represented in a multi-dimensional space. Examples of such
dimensions are pitch, duration, dynamics and timbre. Almost all relevant
work in the computer music literature adopts pitch and duration as basic
dimensions. Thus, as in most music representation schemes proposed in the
literature, we choose to represent these two acoustical music features. However,
the proposed approach treats pitch and duration separately.
The proposed music representation scheme can be used for data mining
analysis which aims at learning general patterns for both pitch and duration in
certain music styles. Input data are melodies of musical pieces, i.e. sequences of
notes. Note that we do not aim at a readable or user-friendly scheme, although
the proposed scheme can be translated into one.
The proposed scheme is not a general music representation scheme (e.g. such as
that presented in [37]), abstract enough to be used for implementations
of different tasks (e.g. analysis, composition, etc.) on different computer systems.
General-purpose representation schemes, as well as most music representation
schemes (see [38] for an overview), even those based on recent approaches
such as XML [26], are not suitable for commercial data mining algorithms. On
the other hand, hierarchical music representation schemes (e.g. [15, 37]) are not
suitable for forming traditional data sets as input to data mining algorithms.
It is often hypothesised that a musical surface may be seen as a string of
musical entities (e.g. [18]), such as notes, chords, etc. (see [7, 24] for an overview
of the application of pattern processing algorithms to musical strings). Strings
of musical entities impose different requirements on representation schemes [5, 7]
with respect to data mining.
For instance, the proposed scheme adopts an absolute pitch representation.

Although most computer-aided musical applications adopt an absolute numeric
pitch and duration representation, it has been stated [7] that absolute pitch
encoding may be insufficient for applications in tonal music, as it discards tonal
qualities of pitches and pitch intervals; e.g. a tonal transposition from a major
to a minor key results in a different encoding of the musical passage, and thus
exact matches cannot detect the similarity between the two passages. Thus
transpositions are not accounted for (e.g. the repeating pitch motive in bars 1
& 2 in Fig. 1, taken from [7]). Transposition is paramount in the understanding
of musical patterns, and pattern-matching and pattern-induction algorithms are
developed primarily for sequences of pitch intervals [7]. However, association
rule mining, for instance, could extract the rule C#, D ⇒ C, if the input data
include a sufficient number of bars identical to the first bar in Fig. 1. But such a rule,
for various analysis purposes (e.g. composition or matching), could also represent
the second bar in Fig. 1 (after some preprocessing).

Figure 1: Beginning of the theme of the A major sonata by Mozart
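The contrast between absolute pitch and pitch-interval encodings can be made concrete with a short sketch. The three-note motive below (C#, D, C, i.e. MIDI 61, 62, 60, echoing the rule mentioned above) and its transposition are illustrative values, not the actual Mozart theme of Fig. 1: the two occurrences differ under absolute encoding but coincide under interval encoding.

    import java.util.Arrays;

    // Sketch: absolute pitch encoding vs pitch-interval encoding.
    // A transposed repetition of a motive yields a different absolute encoding,
    // but the same interval encoding. The motive is hypothetical.
    public class PitchEncodings {

        static int[] intervals(int[] absolutePitches) {
            int[] iv = new int[absolutePitches.length - 1];
            for (int i = 1; i < absolutePitches.length; i++)
                iv[i - 1] = absolutePitches[i] - absolutePitches[i - 1];
            return iv;
        }

        public static void main(String[] args) {
            int[] motive = {61, 62, 60};          // C#, D, C as MIDI note numbers
            int[] transposed = {63, 64, 62};      // the same motive, two semitones up

            System.out.println("absolute match: " + Arrays.equals(motive, transposed));                          // false
            System.out.println("interval match: " + Arrays.equals(intervals(motive), intervals(transposed)));    // true
        }
    }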

As another example, the proposed scheme does not adopt duration ratios.
Thus, according to the proposed scheme, the rhythmic patterns shown in Fig. 2
(taken from [7]) do not match. Of course, one could argue that duration ratios
could result in mismatching the left rhythmic pattern in Fig. 2 to the one
shown in Fig. 3, which is not true. Note that the proposed scheme represents
duration in such a way that the latter rhythmic patterns match. The latter is
also confirmed by an experiment in [14] investigating the splitting of durations,
where the result is that the smaller the split ratio is, the larger the measured
similarity is. On the other hand, association rule mining, for instance, benefits
from having the items in transactions sorted, which could not be achieved using
duration ratios.

Figure 2: Rhythmic patterns matching at the level of duration ratios.

Figure 3: Rhythmic patterns matching at the level of duration ratios.

Thus, it is worth noting that the proposed scheme tries to satisfy data
mining input requirements only.

2.1 Pitch representation
Pitch is a subjective sensation in which a listener assigns perceived tones to
notes on a musical scale based mainly on the frequency of vibration, with a
lesser relation to sound pressure level (loudness, volume). The pitch of a tone
typically rises as frequency increases.
The proposed representation scheme is based on the measurement of the
absolute values of the events in series. Moreover, consider that the source of
melodies is a typical MIDI channel. In what follows we will describe pitch
representation through an example application to the musical piece shown in
Fig. 4.

Figure 4: An example musical piece.

When a musical piece is played on a MIDI instrument (a MIDI keyboard or
any MIDI controller), it transmits MIDI channel messages from its MIDI Out
connector. A typical MIDI channel message sequence corresponding to a key
being struck and released on a keyboard is the following: when we press the
middle C key (60) with a specific velocity (which is usually translated into the
volume of the note), the instrument sends one Note-On message; when we release
the middle C key, again with the possibility of the release velocity controlling some
parameters, the instrument sends one Note-Off message. Note-On and
Note-Off are both channel messages. For the Note-On and Note-Off messages, the
MIDI specification defines a number (from 0 to 127) for every possible note pitch
(C, C#, D, etc.), and this number is included in the message.
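As a small illustration of how these note numbers relate to pitch names, the sketch below decodes the MIDI note numbers that occur in the example piece of Fig. 4. The "60 = C4" octave labelling is a common convention assumed here; it is not fixed by the MIDI specification.

    // Sketch: decoding the MIDI note number carried by Note-On / Note-Off messages.
    // Note 60 is middle C; the octave label follows the common "60 = C4" convention.
    public class MidiNoteNames {
        private static final String[] NAMES =
                {"C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"};

        static String name(int midiNote) {
            return NAMES[midiNote % 12] + (midiNote / 12 - 1);
        }

        public static void main(String[] args) {
            for (int n : new int[] {59, 60, 62, 64, 65, 67}) {   // notes of the example piece
                System.out.println(n + " -> " + name(n));
            }
        }
    }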
Another piece of information that we extract from the MIDI representation
is the key signature (KS), which denotes the harmonic scale of the musical
piece. It is kept for all the musical pieces in the input database, so that they can be
transposed into the same key. As mentioned in the previous section, the proposed
scheme supports pitch processing within the same key. The key signature
is inserted into the database via the .msq file (variable KS). Thus, the input
database initially includes a table with the form shown in Table 1 (constructed with the help
of .msq files).
Then, each melody is segmented manually into fragments. A new table is
included in the input database, in which a new line is created for each fragment,
based on the primary lines imported from the .msq file, namely the MIDI note
sequence [60,64,60,62,64,65,67,59,60] (after removing the duplicate entries in which the
velocity value equals 0). Then, each fragment is further segmented into fragments
that have the same melodic direction, i.e. either upwards or downwards, such
as: [60,64], [64,60], [60,62,64,65,67], [67,59], [59,60].
For an efficient application of association rule mining algorithms, these frag-
ments are sorted in ascending order. Moreover, a new column is added holding
information about the melodic direction, i.e. ascending (1) or descending (0).
Thus, the table for association rule mining, called “melody”, is formed as in
Table 2.
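The two steps just described, splitting a fragment into constant-direction sub-fragments and then sorting each sub-fragment while recording its direction, can be sketched as follows for the running example. The handling of adjacent repeated pitches is left open here, since the example melody contains none and the paper does not specify it.

    import java.util.*;

    // Sketch: split a melody fragment into sub-fragments of constant melodic
    // direction, then sort each sub-fragment and prepend a direction flag
    // (1 = ascending, 0 = descending), as required for association rule mining.
    public class DirectionFragments {

        static List<List<Integer>> split(int[] notes) {
            List<List<Integer>> fragments = new ArrayList<>();
            List<Integer> current = new ArrayList<>(List.of(notes[0]));
            Boolean ascending = null;
            for (int i = 1; i < notes.length; i++) {
                boolean up = notes[i] > notes[i - 1];
                if (ascending != null && up != ascending) {   // direction change: close fragment
                    fragments.add(current);
                    current = new ArrayList<>(List.of(notes[i - 1]));
                }
                current.add(notes[i]);
                ascending = up;
            }
            fragments.add(current);
            return fragments;
        }

        public static void main(String[] args) {
            int[] melody = {60, 64, 60, 62, 64, 65, 67, 59, 60};
            for (List<Integer> f : split(melody)) {
                int direction = f.get(f.size() - 1) > f.get(0) ? 1 : 0;
                List<Integer> sorted = new ArrayList<>(f);
                Collections.sort(sorted);
                System.out.println("direction=" + direction + " notes=" + sorted);
            }
        }
    }

Run on [60,64,60,62,64,65,67,59,60], the sketch reproduces the five rows of Table 2.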

Table 1: Initial Table
Line | midi note | track no | midi file id | velocity
··· | ··· | ··· | ··· | ···
7706 0 1 NON 1 60 42 | 60 | 1 | 45 | 42
7707 16 1 NON 1 64 87 | 64 | 1 | 45 | 87
7708 24 1 NON 1 60 68 | 60 | 1 | 45 | 68
7709 32 1 NON 1 62 78 | 62 | 1 | 45 | 78
7710 48 1 NON 1 64 80 | 64 | 1 | 45 | 80
7711 56 1 NON 1 65 83 | 65 | 1 | 45 | 83
7712 63 1 NON 1 67 50 | 67 | 1 | 45 | 50
··· | ··· | ··· | ··· | ···

Table 2: Table “melody”
id | direction | note | note | note | note | note
··· | ··· | ··· | ··· | ··· | ··· | ···
T1 | 1 | 60 | 64
T2 | 0 | 60 | 64
T3 | 1 | 60 | 62 | 64 | 65 | 67
T4 | 0 | 59 | 67
T5 | 1 | 59 | 60
··· | ··· | ··· | ··· | ··· | ··· | ···

Table 3: Table “melody s”

id direction ··· 59 60 61 62 63 64 65 66 67 68 ···


··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ···
T1 0 ··· 0 1 0 0 0 1 0 0 0 0 ···
T2 1 ··· 0 1 0 0 0 1 0 0 0 0 ···
T3 0 ··· 0 1 0 1 0 1 1 0 1 0 ···
T4 1 ··· 1 0 0 0 0 0 0 0 0 0 ···
··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ···

Table 4: Time intervals
64 Whole Note (1)
32 Half Note (2)
16 Quarter Note (4)
8 Quaver (8)
4 Semi Quaver (16)
2 Demi Semi Quaver (32)
1 Hemi Demi Semi Quaver (64)

For classification and clustering, each final fragment is represented as a
128-dimensional binary vector, since each of its notes corresponds to a number
from 0 to 127 (the MIDI note number). The existence of a specific note in a
fragment is indicated by “1” while its absence by “0”. Moreover, a column is
also added holding information about the melodic direction, i.e. ascending (1)
or descending (0). Thus, the corresponding input table, called “melody s”, is
formed as in Table 3.
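A minimal sketch of this vector construction is given below for the ascending fragment [60, 62, 64, 65, 67] of the running example. The array layout (a direction flag followed by 128 note slots) is an illustrative choice, not a prescribed storage format.

    // Sketch of a "melody s" row: a final fragment becomes a 128-dimensional
    // binary vector indexed by MIDI note number, preceded by the direction flag.
    public class BinaryPitchVector {

        static int[] toRow(int direction, int[] notes) {
            int[] row = new int[1 + 128];        // direction + one slot per MIDI note 0..127
            row[0] = direction;
            for (int n : notes) row[1 + n] = 1;  // mark the notes present in the fragment
            return row;
        }

        public static void main(String[] args) {
            int[] row = toRow(1, new int[] {60, 62, 64, 65, 67});
            // print the slice around the marked notes (MIDI 59..68), as in Table 3
            System.out.print("direction=" + row[0] + " notes 59..68:");
            for (int n = 59; n <= 68; n++) System.out.print(" " + row[1 + n]);
            System.out.println();
        }
    }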

2.2 Duration representation


Rhythm is the arrangement of sounds in time. Meter animates time in regular
pulse groupings, called measures or bars. The time signature (variable TS) or
meter signature specifies how many beats are in a measure, and which value of
written note is counted and felt as a single beat. Through increased stress and
attack (and subtle variations in duration), particular tones may be accented.
There are conventions in most musical traditions for a regular and hierarchical
accentuation of beats to reinforce the meter.
The proposed representation scheme is based on the key idea of indicating
at which discrete values of time a rhythm event happens. Moreover, consider that
the source of rhythm patterns is a typical MIDI channel. In what follows we
will describe duration representation through an example.
According to the MIDI representation system, time intervals are defined by
the variable Ticks. First, we convert all the MIDI files so that the value of
Ticks equals t, i.e., the number of ticks for a “whole note”. For the analysis
of rhythm patterns, the maximum rhythmic length is set to i measures of
“whole notes”, namely up to the value t × i. The latter constraint is imposed by
the database management system used for the implementation. For example,
if t = 64 we use the time intervals described in Table 4.
For association rule mining, rhythm patterns of the extracted fragments described
in the previous subsection are represented in the table of the database
called “rhythm”, in which a new line is created for each final melody fragment.
For example, setting t = 64 and i = 2, the table “rhythm” is formed as in
Table 5. Each final fragment is represented as a 128-dimensional binary vector.
For classification and clustering, each final fragment is also represented as
a 128-dimensional binary vector, since each time instant corresponds to a number of
ticks. An event within a fragment is indicated by “1” while its absence by “0”.
Thus, the corresponding input table, called “rhythm s”, is formed as in Table 6.
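The sketch below builds such a rhythm vector for the onset ticks of row T1 in Table 5, with t = 64 and i = 2 as in the example above; holding the row in a plain integer array is again only an illustrative choice.

    // Sketch of a "rhythm s" row: with t = 64 ticks per whole note and i = 2
    // measures, a fragment becomes a 128-dimensional binary vector in which
    // position k is 1 iff a rhythm event occurs at tick k. The onset ticks
    // are those of row T1 in Table 5.
    public class BinaryRhythmVector {

        static int[] toRow(int[] onsetTicks, int t, int i) {
            int[] row = new int[t * i];                // 64 * 2 = 128 tick positions
            for (int tick : onsetTicks) row[tick] = 1;
            return row;
        }

        public static void main(String[] args) {
            int[] onsets = {0, 16, 24, 32, 48, 56, 63, 79, 96};
            int[] row = toRow(onsets, 64, 2);
            System.out.print("ticks with events:");
            for (int k = 0; k < row.length; k++) if (row[k] == 1) System.out.print(" " + k);
            System.out.println();
        }
    }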

Table 5: Table “rhythm”

··· ···
T1 0 16 24 32 48 56 63 79 96
··· ···

Table 6: Table “rhythm s”

id 0 ··· 16 ··· 24 ··· 32 ··· 48 ··· 56 ··· 63 ··· 79 ··· 96 ···


··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ···
T1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ···

3 Experimental results
Collection of input data is an essential task, if the learning experiments are to
produce new knowledge and insight into musical phenomena. Thus, input data
should be unbiased realistic, contrary to preliminary experiments where input
data were produced by the experimenter her/himself. Common data mining
algorithms accept input data as tables.
Of course, a melodic segmentation technique [6] could be used in order to
obtain input records. However, there is no single correct segmentation
of a musical piece. Moreover, since the construction of the training
set is an off-line task, we used manual segmentation for better control over the
input records [29].
As shown in [16], the clustering of melodic segments (used in Paradigmatic
Analysis) relies heavily on the knowledge representation scheme and
distance function used.
Each melody is converted to a .msq file using a MIDI score program (we used
the Sibelius program). The .msq files are ASCII files with a specific structure
that can be parsed. The .msq format was originally created by Siegfried Koepf
and Bernd Haerpfer in 1998, who needed a MIDI-compatible computer system for
algorithmic composition and score synthesis. A command line client parses .msq
files and inserts the information it extracts into a database. This information can
then be processed in various ways. A different command line client processes
the database content and creates reports that can be fed into another client that
generates rules using classification, clustering and association algorithms. All
code is written in Java; the database is MySQL 5. Database connectivity is
accomplished via JDBC.
Each .msq file contains a set of lines, separated by CRLF. Different lines
contain different types of information. The application models the lines of
interest with the MsqLine class. The following subclasses are implemented:

• MsqKsLine, which contains information about the music key signature in
each music composition.

• MsqNonLine, which contains information about a note in the music com-
position (midi note, channel, duration, temporal instance of occurrence,
etc).
• MsqTicksLine, which is a line that occurs once in an .msq file and declares
the number of ticks for the composition.
• MsqTeLine, which contains information about the music style in each
music composition.

Lines which cannot be parsed into one of the above are not considered of interest
to the application and are ignored. MsqLineFactory is a singleton that parses a
line into the appropriate MsqLine instance. The MsqFileParser parses an .msq file
line by line and uses the MsqLineFactory to create meaningful content for the
application. After a line has been parsed into an MsqLine subclass, DbConnector
is used for inserting the relevant data into the database. The MsqDbDumper class
orchestrates all of the above, by iterating over all .msq files in a given directory and
using MsqFileParser to parse the files and insert the information into the
database. Each successfully processed file is archived in a timestamped archive
directory. MsqExportHelper is the class that generates the reports, by processing
the data that has been inserted into the database. It generates the four previously
mentioned tables: melody, melody s, rhythm and rhythm s. The melody and rhythm
tables are used for association rule mining, while the other two are used for classification
and clustering.
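The skeleton below is one possible reading of how these classes fit together. The class names follow the text above, but the method names, the .msq line prefixes used for dispatching, and the in-memory store that stands in for the MySQL/JDBC layer are assumptions introduced to keep the sketch self-contained and runnable.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;

    // Skeleton of the parsing pipeline described above. Line prefixes and storage
    // are assumed; the real system inserts into MySQL through DbConnector/JDBC.
    abstract class MsqLine { final String raw; MsqLine(String raw) { this.raw = raw; } }
    class MsqKsLine    extends MsqLine { MsqKsLine(String raw)    { super(raw); } }  // key signature
    class MsqNonLine   extends MsqLine { MsqNonLine(String raw)   { super(raw); } }  // note event
    class MsqTicksLine extends MsqLine { MsqTicksLine(String raw) { super(raw); } }  // ticks declaration
    class MsqTeLine    extends MsqLine { MsqTeLine(String raw)    { super(raw); } }  // music style

    // Singleton that maps a raw line to the appropriate MsqLine subclass,
    // or null if the line is of no interest to the application.
    enum MsqLineFactory {
        INSTANCE;
        MsqLine parse(String line) {
            if (line.startsWith("KS"))    return new MsqKsLine(line);
            if (line.startsWith("NON"))   return new MsqNonLine(line);
            if (line.startsWith("TICKS")) return new MsqTicksLine(line);
            if (line.startsWith("TE"))    return new MsqTeLine(line);
            return null;
        }
    }

    public class MsqFileParser {
        final List<MsqLine> store = new ArrayList<>();   // stands in for DbConnector

        void parseFile(Path msqFile) throws IOException {
            for (String line : Files.readAllLines(msqFile)) {
                MsqLine parsed = MsqLineFactory.INSTANCE.parse(line);
                if (parsed != null) store.add(parsed);    // DbConnector.insert(...) in the real system
            }
        }

        public static void main(String[] args) throws IOException {
            MsqFileParser parser = new MsqFileParser();
            // MsqDbDumper iterates over all .msq files in a given directory like this.
            try (DirectoryStream<Path> dir = Files.newDirectoryStream(Paths.get(args[0]), "*.msq")) {
                for (Path p : dir) parser.parseFile(p);
            }
            System.out.println("parsed lines of interest: " + parser.store.size());
        }
    }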
The proposed system was tested by extracting data mining rules from a large
number of melodies taken from Bach’s Chorales (www.jsbchorales.net). As a
technical result, the application of the data mining algorithms seems very
encouraging.
Extracted information can be used in several applications. For instance,
given an input set of pitches/durations in a certain music style, an association
rule indicates another set of pitches/durations which is similar to the input
one. As another example, given a certain input pitch/duration in a
certain music style, a classification rule indicates which pitches/durations can
be combined with the input pitch/duration in this music style. Also, given
an input set of pitches/durations in a certain music style, a clustering rule
indicates the general representative (medoid) of this music style that is most
similar to the input set.

4 Conclusion
A symbolic music representation might be used [39] for recording (a record of
some musical object, to be retrieved at a later date), analysis (to retrieve not the
raw musical object, but some analyzed version) and generation/composition. In
this paper, we present a new music representation scheme, demonstrating that
it is suitable for recording and analysis.
We plan to extend the music features that the proposed scheme can represent
to static (e.g. key, tempo) and thematic (e.g. chords) features. To this end, we
are currently working on extending the proposed music representation scheme in
order to represent actual performances of melodies by human performers. Performances
could be represented by tempo and loudness information. Thus, a data
mining analysis would provide general expression rules for the application of
dynamics (crescendo vs. decrescendo) and tempo (accelerando vs. ritardando).
Moreover, we plan to use the proposed scheme in applications of data mining
to other specific computer music problems: automatic generation of melodies,
melodic segmentation, discovery of repeated patterns and music performance
recognition.

References
[1] R. Agrawal, H. Mannila, R. Srikant, A.I. Verkamo, Fast Discovery of Association
Rules, in: [12] (1996) 307–328.
[2] R. Bod, A Memory-Based Model for Music Analysis: Challenging the
Gestalt Principles.
[3] B. Boutsinas, M.N. Vrahatis, Artificial Nonmonotonic Neural Networks, Artificial
Intelligence 132(1) (2001) 1–38.
[4] B. Boutsinas, C. Siotos, A. Gerolymatos, ”Distributed mining of association
rules based on reducing the support threshold”, International Journal on
Artificial Intelligence Tools, World Scientific Publishing Company, 17(6),
2008, pp. 1109-1129.
[5] E. Cambouropoulos, A General Pitch Interval Representation: Theory and
Applications, Journal of New Music Research, 25 (3), 1996, pp. 231-251.
[6] E. Cambouropoulos, Musical Parallelism and Melodic Segmentation, In Proceedings
of the XII Colloquium of Musical Informatics, Gorizia, Italy, 1998.
[7] E. Cambouropoulos, T. Crawford, C.S. Iliopoulos, Pattern Processing in
Melodic Sequences: Challenges, Caveats and Prospects, Computers and the
Humanities, 34:4, 2000.
[8] P. Clark, T. Niblett, The CN2 Induction Algorithm, Machine Learning 3(4)
(1989) 261–283.
[9] S. Cost, S. Salzberg, A Weighted Nearest Neighbor Algorithm for Learning
with Symbolic Features, Machine Learning 10 (1993) 57–78.
[10] R.B. Dannenberg, B. Thom, D. Watson, A Machine Learning Approach
to Musical Style Recognition, Proceedings of International Computer Music
Conference ICMC’97, 1997.
[11] R.C. Dubes, A.K. Jain, Clustering methodologies in exploratory data
analysis, Adv. Comput. 19 (1980) 113–228.
[12] U.M. Fayyad, G. Piatetsky-Shapiro and P. Smyth, Advances in Knowledge
Discovery and Data Mining, AAAI Press/MIT Press, 1996.
[13] M.M. Gaber, A. Zaslavsky and S. Krishnaswamy, Mining data streams: a
review, ACM SIGMOD Record, 34(1), June 2005.
[14] L.J. Hofmann-Engl, Melodic similarity: a theoretical and empirical approach,
PhD thesis, Keele University, UK, 2003.

[15] T. Horton, Some Formal Problems with Schenkerian Representations of
Tonal Structure.
[16] K. Höthker, D. Hörnel, C. Anagnostopoulou, Investigating the Influence of
Representations and Algorithms in Music Classification
[17] K. Höthker, Modelling the Motivic Process of Melodies with Markov
Chains, Computers and the Humanities, 35(1), 2001, pp. 65-79(15).
[18] J.L. Hsu, C.C. Liu and A.L.P. Chen, Discovering nontrivial repeating patterns
in music data, IEEE Transactions on Multimedia, 3 (3), 311–325, 2001.
[19] Z. Huang, Extensions to the k-means algorithm for clustering large data
sets with categorical values, Data mining and Knowl. Disc. 2 (1998) 283–304.
[20] A.K. Jain and R.C. Dubes, Algorithms for Clustering Data, Prentice-Hall,
Englewoods Cliffs, NJ, 1988.
[21] F.J. Kiernan, Score-based style recognition using artificial neural networks.
[22] Y. Liu, Y. Wang, A. Shenoy, W.-H. Tsai, L. Cai, Clustering Music Record-
ings by their Keys, ISMIR 2008.
[23] J.R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann
CA, 1993.
[24] P.Y. Rolland, J.G. Ganascia, Musical Pattern Extraction and Similarity
Assessment. In Readings in Music and Artificial Intelligence. E. Miranda.
(ed.), 1999.
[25] P.Y. Rolland, J.G. Ganascia, Pattern Detection and Discovery: The Case
of Music Data Mining.

[26] P.Y. Rolland, The Music Encoding Initiative (MEI).


[27] D. Rumelhart, G. Hinton, R. Williams, Learning internal representations
by error propagation, in: D. Rumelhart and J. McClelland (Eds.), Parallel
Distributed Processing: Explorations in the Microstructure of Cognition,
MIT Press, (1986) 318–363.
[28] M.K. Shan and F.F. Kuo, Music style mining and classification by melody,
IEICE Transactions on Information and Systems, E86-D (4), 655–659, 2003.
[29] C. Spevak, B. Thom, K. Höthker, Evaluating Melodic Segmentation
[30] UCI Repository Of Machine Learning Databases and Domain Theories,
http://www.ics.uci.edu/~mlearn/MLRepository.html
[31] M.N. Vrahatis, B. Boutsinas, P. Alevizos, G. Pavlides, The New k-windows
Algorithm for Improving the k-means Clustering Algorithm, Journal of Complexity
18 (2002) 375–391.
[32] G. Widmer, On the Potential of Machine Learning for Music Research.
[33] G. Widmer, A Knowledge Intensive Approach to Machine Learning in Music.

[34] G. Widmer, Discovering Simple Rules in Complex Data - A Meta-learning
Algorithm and Some Surprising Musical Discoveries.
[35] G. Widmer, Using AI and Machine Learning to Study Expressive Music
Performance-Project Survey and First Report
[36] G.A. Wiggins, Music, syntax, and the meaning of meaning.
[37] G.A. Wiggins, Hierarchical Music Representation for Composition and
Analysis.
[38] G.A. Wiggins, E. Miranda, A. Smaill, M. Harris, Surveying Musical Representation
Systems - A Framework for Evaluation.
[39] G.A. Wiggins, E. Miranda, A. Smaill, M. Harris, A Framework for the Evaluation
of Music Representation Systems, Computer Music Journal 17(3),
1993, pp. 31–42.

UNIVERZITET UMETNOSTI U BEOGRADU / UNIVERSITY OF ARTS IN BELGRADE
FAKULTET MUZIČKE UMETNOSTI / FACULTY OF MUSIC

Muzička teorija i analiza


Music Theory and Analysis

Sedmi godišnji skup Katedre za muzičku teoriju


Fakultet muzičke umetnosti u Beogradu
15-17 maj 2009

7th Annual Conference


Department of Music Theory
Faculty of Music in Belgrade
15-17 May 2009

Program skupa
Conference Program

PETAK, 15. MAJ 2009 / Friday, May 15, 2009


Univerzitet umetnosti / University of Arts, Kosančićev venac, 29

9.30-10.00
Registracija učesnika i otvaranje
Registration and opening ceremony

10.00-11.00 KEYNOTE SPEAKER


Byron Almén, Univerzitet Teksasa u Ostinu, SAD / University of Texas at Austin, USA
Narativni arhetipovi: teorija i analiza
Narrative Archetypes: Theory and Analysis

I Sesija / Session I Predsedava / Chair Byron Almén

11.15-11.45
Martin Eybl, Univerzitet za muziku i izvođačke umetnosti, Beč, Austrija / Universität für Musik
und Darstellende Kunst, Wien, Austria
Da capo i repriza: ponavljanje u muzičkom narativu
Da Capo and Recapitulation: Repetition in Musical Narrative

11.45-12.15
Jeremy Barham, Univerzitet Sari, Gildford, V. Britanija / University of Surrey, Guildford, Great
Britain
Rikerovske narativne strukture, kognitivni modeli XIX veka i „anksioznost dekadencije” u prvom
stavu Malerove Treće simfonije
Ricoeurian Narrative Structures, 19th-Century Cognitive Models, and the 'Anxiety of Decadence'
in the First Movement of Mahler's Third Symphony

12.15-12.30 pauza / break

12.30-13.00
Tijana Popović, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Šta mi priča Pesma o zemlji Gustava Malera
What Das Lied von der Erde tells me

13.00-13.30
Nataša Crnjanski, Akademija umetnosti, Novi Sad / Academy of Arts, Novi Sad
Metaforična muzika i muzička metafora
Metaphorical Music and Musical Metaphor

13.30-15.00 pauza za ručak / lunch break

II Sesija / Session II Predsedava / Chair Martin Eybl

15.00-15.30
Kalliopi Stiga, Muzikološki fakultet Univerziteta u Atini, Grčka / Faculté de musicologie –
Universite d’Athènes, Greece
Teodorakisovska narativna melodija u susretu s poetskim lirizmom
La mélodie narrative théodorakienne à la rencontre du lyrisme poétique;

15.30-16.00
Jelena Novak, Amsterdamska škola za analizu kulture, Univerzitet Amsterdama / Amsterdam
School for Cultural Analysis, University of Amsterdam
Pripovedanje ponavljanjem: funkcija muzike u postoperskoj narativnosti
Narrating by Repeating: Function of Music in Post-operatic Narrativity

16.00-16.30
Wu Dongpan, Institut za tehnologiju Huang Ši, Kina / Huangshi Institute of Technology, China
Prefinjeni Qing Sheng i graciozni Ce Sheng – analiza kompozicije Duan Qing
Exquisite Qing Sheng and Graceful Ce Sheng – Analysis of Duan Qing in qin qu Ji Shi Si Nong;

16.30-17.00 pauza / break

17.00-17.30
Ivana Perković, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Bitka u balskoj dvorani? Ekspresivni zanrovi u Mocartovoj kontradanci La Battaille KV 535;
Battle in the Ballroom? Expressive Genres in Mozart's Contredance La Battaille K 535

17.30-18.00
Ana Stefanović, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Još jednom o muzičkim „topicima“ i analizi stila
Once more on musical “topics” and style analysis

18.00 Koktel / Cocktail


20.30 – 21.30 Fakultet muzičke umetnosti / Faculty of Music Kralja Milana 50

Koncert / Concert
Srpska kamerna muzika / Serbian chamber music
Klasa kamerne muzike profesora Gorana Marinkovića / Chamber Music class of Professor Goran
Marinković

SUBOTA, 16. MAJ, 2009 / Saturday, May 16, 2009


Univerzitet umetnosti / University of Arts, Kosančićev venac, 29

III Sesija / Session III Predsedava / Chair Nico Schuler

10.00-10.30
Luca Bruno, Odsek za komunikacije, Univerzitet Kalabrije, Italija / Dipartimento di
Comunicazione e D.A.M.S., Università della Calabria, Italy.
Harmonija i postavka teksta u delu Canzone villanesche alla napolitana Adriana Vilerta (1542-
1545)
Harmony and Text Setting in Adrian Willaert’s Canzone villanesche alla napolitana (1542-1545)

10.30-11.00
Théodora Psychoyou, Univerzitet Pariz IV - Sorbona / Université Paris IV - Sorbonne
Kakav zakon za énergie des modes? Percepcija i racionalizacija izražajnih svojstava muzike u
francuskoj muzičkoj misli XVII veka
What statute for the énergie des modes? Perception and rationalization of expressive properties
of music in 17th-century French musical thought

11.00-11.30
Miloš Zatkalik, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Može li se Doktor Džekil pomiriti s Mister Hajdom: tonalnost i pripitomljeni tonski nizovi u
Dvanaestom gudačkom kvartetu Dmitrija Šostakoviča
Can Dr. Jekyll Be Reconciled with Mr. Hyde: Tonality and Domesticated Tone-Rows in Dmitri
Shostakovich’s Twelfth String Quartet

11.30-12.00 pauza / break

IV Sesija / Session IV Predsedava / Chair Miloš Zatkalik

12.00-12.30
Denis Collins, Univerzitet Kvinslenda, Brizbejn, Australija / The University of Queensland,
Brisbane, Australia:
Henri Persl: tri deonice nad ostinatom i tradicija engleskog kontrapunkta
Henry Purcell's Three Parts Upon a Ground and the Traditions of English Counterpoint

12.30-13.00
Zoran Božanić, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Ispoljavanje skrivene polifonije na primeru Bahovih svita za solo violončelo
Manifestations of Hidden Polyphony in Bach’s Suites for Unaccompanied Cello

13.00-13.30
Senka Belić, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Hristovo rođenje i dvohorski canon multiplex: motet Nesciens mater virgo virum Žana Mutona
Birth of Jesus Christ and Double-Chorus canon multiplex: Motet Nesciens mater virgo virum by
Jean Mouton

13.30-14.00
Predrag Repanić, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Primena tehnike pomerajućih kontrapunkta u radu sa osnovnom temom iz ciklusa Umetnost fuge
J. S. Baha
Application of the Movable Counterpoint Technique in Working with the Principal Subject of J. S.
Bach’s The Art of the Fugue

14.00-15.00 pauza za ručak / lunch break

V Sesija / Session V Predsedava / Chair Denis Collins

15.00-15.30
Andreas Holzer, Univerzitet za muziku i izvođačke umetnosti, Beč, Austrija / Universität für
Musik und Darstellende Kunst, Wien, Austria
Forma: pledoaje za jednu zanemarenu kategoriju
Form: Ein Plädoyer für eine vernachlässigte Kategorie

15.30-16.00
Dimitar Ninov, Državni univerzitet Teksasa, San Markos, SAD / Texas State University, San
Marcos, USA
Nezavisna fraza, univerzalna rečenica i grupe fraza: predlog klasifikacije strukturnih ekvivalenata
jednodelne forme
The Independent Phrase, the Universal Sentence and the Phrase Group: Suggested Classification
of Structures Equivalent to a One-Part Form

16.00-16.30
Danijela Zdravić Mihajlović, Fakultet umetnosti, Niš / Faculty of Arts, Niš
Razmatranje muzičke rečenice i perioda u literaturi ruskih, bugarskih, makedonskih i srpskih
autora (Mazelj, Stojanov, Bužarovski, Peričić, Mihajlović, Čavlović, Popović, Zatkalik, Sabo)
Consideration of the Musical Sentence and Period by Russian, Bulgarian, Macedonian and
Serbian Authors (Mazel’, Stoyanov, Bužarovski, Peričić, Mihajlović, Čavlović, Popović,
Zatkalik, Sabo)

16.30-17.00 pauza / break

VI Sesija / Session VI Predsedava / Chair Dimitar Ninov

17.00-17.30
Nico Schuler, Državni univerzitet Teksasa, San Markos, SAD / Texas State University, San
Marcos, USA
Razvoj kompjuterske tehnologije i njen uticaj na razvoj muzičko-analitičkih metoda
The Development of Computing Technology and Its Influence on the Development of Music-
Analytical Methods
17.30-18.00
C. Halkiopoulos, Univerzitet Patrasa, Grčka / University of Patras, Greece
B. Boutsinas, Univerzitet Patrasa, Grčka / University of Patras, Greece
Predstavljanje muzike za analizu korišćenjem tehnike data mining
Music Representation for Analysis Using Data Mining

20.00 večera (fakultativno) / dinner (optional)

NEDELJA, 17. MAJ, 2009 / Sunday, May 17, 2009

Univerzitet umetnosti / University of Arts, Kosančićev venac, 29

VII Sesija / Session VII

Predsedava / Chair Andreas Holzer

10.00-10.30
Jasna Veljanovic-Ranković, Filološko-umetnički fakultet, Kragujevac / Faculty of Philology
and Arts, Kragujevac
Francuske svite J. S. Baha: predlog tipologije baroknog dvodela;
J. S. Bach’s French Suites: A Proposition for the Typology of Binary Form

10.30-11.00
Anica Sabo, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Oblik varijacija – koraci u analitičkoj proceduri;
Variation Form – Steps in Analytical Procedure

11.00-11.30
Eduardo Abrantes, Novi lisabonski univerzitet, Portugal / New University of Lisbon, Portugal
Beznačajni glasovi – fenomenologija vokalne improvizacije i odsustvo značenja
Insignificant Voices – the Phenomenology of Vocal Improvisation and Meaninglessness

11.30-12.00 pauza / break

12.00-12.30
Sonja Marinković, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Tretman Glinkinih varijacija u Baladi Finca;
The Treatment of Glinka’s Variations in the Finn’s Ballad

12.30-13.00
Mirjana Živković, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Svite iz baleta Ohridska legenda – potraga za njihovim identitetima
(povodom 50 godina od smrti Stevana Hristića)
Suites from the Ballet The Legend of Ohrid by Stevan Hristić – A Quest for Their Identities
(on the 50th Anniversary of Stevan Hristić’s Death)
13.00-14.30 pauza za ručak / lunch break

VIII Sesija / Session VIII Predsedava / Chair Mirjana Živković

14.30-15.00
Marko Aleksić, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Završni monolog Salome u istoimenoj muzičkoj drami Riharda Štrausa: sistem harmonskih
simbolizacija
Salome’s Final Monologue in Richard Strauss’s Opera: The System of Harmonic Symbolization

15.00-15.30
Srđan Teparić, Fakultet muzičke umetnosti, Beograd / Faculty of Music, Belgrade
Tonalnost u funkciji stila u klavirskoj sviti Kuprenov grob Morisa Ravela
Tonality as a Function of Style in the Piano Suite Le tombeau de Couperin by Maurice Ravel

15.30-16.00 pauza / break

16.00-16.30
Atila Sabo, Filološko-umetnički fakultet, Kragujevac / Faculty of Philology and Arts,
Kragujevac
Odnos melodije i harmonije u horskim kompozicijama zasnovanim na folklornom uzorku
Relationships between Melody and Harmony in Choral Compositions Based on Folklore

16.30-17.00
Filip Pavličić, Filološko-umetnički fakultet, Kragujevac / Faculty of Philology and Arts,
Kragujevac
Harmonski sklopovi u kompoziciji Rondo sekvenca Mirjane Živković
Harmonic Configurations in Rondo Sequence by Mirjana Živković

17.00-18.00
zaključci / conclusions
