Abstract
This paper presents a model that automatically generates and completes musical compositions. The model is based on generative deep learning techniques, in particular recurrent neural networks. Related works treat music as text in a natural language, requiring the network to learn both the full syntax of sheet music and the dependencies among its symbols; this demands very intensive training and often leads to overfitting. This paper contributes a data preprocessing step that removes the most complex dependencies, allowing the musical content to be abstracted away from the syntax. Moreover, a web application built on the trained models is presented. The tool allows inexperienced users to generate music automatically, either from scratch or from a given fragment of sheet music.
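The preprocessing idea described above, stripping syntactic markers so that only musical content reaches the model, can be sketched as follows. Everything here is an illustrative assumption, not the paper's implementation: the ABC-like token format, the function names, and the first-order transition table that stands in for the paper's recurrent neural network.

```python
import re
import random
from collections import defaultdict

def tokenize_melody(abc_fragment):
    """Split an ABC-like melody string into (pitch, duration) events,
    discarding syntactic markers such as bar lines and repeat signs,
    so downstream models see only musical content."""
    events = []
    for m in re.finditer(r"([A-Ga-g])(\d*)", abc_fragment):
        pitch = m.group(1)
        duration = int(m.group(2)) if m.group(2) else 1
        events.append((pitch, duration))
    return events

def build_transitions(events):
    """First-order transition table over events: a deliberately simple
    stand-in for the recurrent network described in the paper."""
    table = defaultdict(list)
    for prev, nxt in zip(events, events[1:]):
        table[prev].append(nxt)
    return table

def continue_melody(seed, table, length, rng):
    """Extend a seed fragment by sampling successors from the table,
    mirroring the paper's 'complete a given fragment' use case."""
    out = list(seed)
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break  # no learned continuation for this event
        out.append(rng.choice(choices))
    return out

# Bar lines and repeat marks are dropped; only notes survive.
events = tokenize_melody("|:C2D E|F G:|")
table = build_transitions(events)
continuation = continue_melody([("C", 2)], table, 3, random.Random(0))
```

The key point is the separation of concerns: once `tokenize_melody` has removed the syntax, the sequence model (here a toy transition table, in the paper an RNN) only has to learn dependencies among musical events, which is what reduces the training burden the abstract mentions.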
Acknowledgments
This research work is supported by the Universidad Politécnica de Madrid under the education innovation project "Aprendizaje basado en retos para la Biología Computacional y la Ciencia de Datos", code IE1718.1003; and by the Spanish Ministry of Economy, Industry and Competitiveness under the R&D project Datos 4.0: Retos y soluciones (TIN2016-78011-C4-4-R, AEI/FEDER, UE).
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
Cite this paper
García, J.C., Serrano, E. (2019). Automatic Music Generation by Deep Learning. In: De La Prieta, F., Omatu, S., Fernández-Caballero, A. (eds) Distributed Computing and Artificial Intelligence, 15th International Conference (DCAI 2018). Advances in Intelligent Systems and Computing, vol 800. Springer, Cham. https://doi.org/10.1007/978-3-319-94649-8_34
Print ISBN: 978-3-319-94648-1
Online ISBN: 978-3-319-94649-8
eBook Packages: Intelligent Technologies and Robotics