Automated Music Compositions using Neural Networks

by Romain Sabathé, Eduardo Coutinho, Björn Schuller
February 23, 2017, 11:58 am

In recent years, there has been increasing interest in music generation using machine learning techniques typically applied to classification or regression tasks. The field is still in its infancy, and most attempts impose many restrictions on the music composition process in order to favor the creation of “interesting” outputs. Furthermore, and most importantly, none of the past attempts has focused on developing objective measures to evaluate the composed music, which would make it possible to assess the pieces against a predetermined standard and to fine-tune models for better “performance” and specific music composition goals.

In our work, we intend to advance the state of the art in this area by introducing and evaluating new metrics for an objective assessment of the quality of the generated pieces. These metrics can be used to evaluate and compare the outputs of different models. We are also interested in developing new approaches to automated music composition that are truly generative.

Stage 1

Deep Recurrent Music Writer

Memory-enhanced Variational Autoencoder-based Musical Score Composition and an Objective Measure

Romain Sabathé, Imperial College London
Eduardo Coutinho, University of Liverpool and Imperial College London
Björn Schuller, Imperial College London

In this work, we applied a type of model previously used for image generation, the Variational Autoencoder (VAE), to generating new music. This type of neural network attempts to model the underlying and complex joint distribution of a given dataset, sample from it, and generate new examples that fit the same distribution. An important advantage of VAEs is that the input data can be of any kind (e.g., images, sound, video, text). For example, a VAE could be used to learn the distribution of pictures of sunflowers. Then, by sampling from this distribution, we would obtain pictures whose content fundamentally follows all the “rules” that make a sunflower what it is – its color, its shape, etc. In our case, we want to learn the distribution of musical pieces belonging to a given style by allowing the VAE to capture the relevant musical rules that underlie the composition process.
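To make the VAE idea concrete, the sketch below shows the two ingredients that distinguish a VAE from a plain autoencoder: the reparameterization trick (sampling a latent code z from a learned Gaussian while keeping the computation differentiable) and the KL-divergence term that pulls the learned posterior toward a standard-normal prior. This is a minimal NumPy illustration of the general technique, not the model from the paper; shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps, with eps ~ N(0, I).
    # Writing the sample this way keeps gradients flowing through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), one value per sample.
    # This term regularizes the latent space so that sampling z ~ N(0, I)
    # at generation time yields plausible outputs.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

# A batch of 4 latent codes of dimension 8 whose posterior equals the prior:
mu = np.zeros((4, 8))
log_var = np.zeros((4, 8))
z = reparameterize(mu, log_var)
print(kl_divergence(mu, log_var))  # all zeros: no divergence from the prior
```

In a full model, an encoder network would produce `mu` and `log_var` from a musical score, a decoder would reconstruct the score from `z`, and training would minimize reconstruction error plus this KL term.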

Furthermore, and most importantly, we developed a simple objective measure to evaluate the composed music, which allowed us to evaluate the pieces against a predetermined standard as well as to fine-tune our models for better “performance” and specific music composition goals. We demonstrate that our model can generate music pieces that follow the general stylistic characteristics of a given composer or musical genre, and that our measure permits investigating the impact of various parameters and model architectures on the compositional process and output.
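The paper details the actual measure; purely as an illustration of the general idea, one simple family of such objective measures compares note-occurrence statistics of a generated piece against a reference corpus in the target style. The hypothetical sketch below scores a piece by the total-variation distance between pitch-class distributions (0 means the distributions match).

```python
import numpy as np

def pitch_class_histogram(midi_pitches):
    # Normalized histogram over the 12 pitch classes (C, C#, ..., B).
    counts = np.bincount(np.asarray(midi_pitches) % 12, minlength=12)
    return counts / counts.sum()

def style_distance(generated, reference):
    # Total-variation distance between the two pitch-class distributions.
    # 0.0 = identical distributions; larger values = further from the style.
    h_gen = pitch_class_histogram(generated)
    h_ref = pitch_class_histogram(reference)
    return 0.5 * np.abs(h_gen - h_ref).sum()

# A C-major scale versus itself, and versus a chromatic run:
c_major = [60, 62, 64, 65, 67, 69, 71, 72]
chromatic = list(range(60, 72))
print(style_distance(c_major, c_major))    # 0.0
print(style_distance(c_major, chromatic))  # positive: different note usage
```

A measure of this kind can be computed automatically over many generated pieces, which is what makes it usable both for benchmarking models against each other and as a signal for tuning hyperparameters.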

Article describing this work:
Sabathé, R., Coutinho, E., Schuller, B. W. (to appear). Deep Recurrent Music Writer: Memory-enhanced Variational Autoencoder-based Musical Score Composition and an Objective Measure. 2017 International Joint Conference on Neural Networks (IJCNN 2017).

Examples

[Last update: 1.03.2017]

For additional information please see here.
