Eduardo Coutinho works in the interdisciplinary fields of Music Psychology and Affective Computing, where his expertise lies in the study of emotional expression, perception and induction through music, and in the automatic recognition of emotion in music and speech. Before his appointment at Liverpool, he was a Research Fellow in Music Psychology at the University of Sheffield and the Swiss Center for Affective Sciences, and a Research Associate in Affective Computing at the Technical University of Munich and Imperial College London. He has contributed significantly to a broader understanding of the emotional impact of music on listeners, notably the link between music structure and emotion, the types of emotions induced by music, and the individual and contextual factors that mediate the relationships between music and listeners. Coutinho pioneered research on the analysis of emotional dynamics in music and made significant contributions to the field of music emotion recognition, setting a new standard approach for the recognition of emotional dynamics in music. His current work focuses on the application of music in healthcare. He has published extensively in top peer-reviewed journals and conferences on both Music Psychology and Affective Computing topics. In 2013, he received the Knowledge Transfer Award from the National Center of Competence in Research in Affective Sciences, and in 2014 the Young Investigator Award from the International Neural Network Society. http://www.eadward.org/, https://www.liverpool.ac.uk/music/staff/eduardo-coutinho
06.07.2021 Jacopo de Berardinis, Samuel Barrett, Angelo Cangelosi and Eduardo Coutinho introduced a new approach to music modelling that combines recent advances in transformer models with recurrent networks – the long-short term universal transformer (LSTUT). The LSTUT outperforms other state-of-the-art models and can potentially learn features related to musical structure at different time scales. They show the importance of integrating both recurrence and attention in the architecture of music models, and the potential use of such models in automatic music composition systems. This work was presented at the 2020 Joint Conference on AI Music Creativity. Paper: https://boblsturm.github.io/aimusic2020/papers/CSMC__MuMe_2020_paper_46.pdf. Presentation: https://youtu.be/Bj4RAaFqqLo. For more information please see here.
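To make the recurrence-plus-attention idea concrete, here is a minimal PyTorch sketch of a hybrid model in that spirit. It is not the authors' exact LSTUT: the layer ordering, hyperparameters and the weight-shared ("universal") transformer block are illustrative assumptions.

```python
# Minimal sketch of a recurrence + attention hybrid, in the spirit of the LSTUT.
# All sizes and the exact layer composition are assumptions, not the paper's model.
import torch
import torch.nn as nn

class LSTUTSketch(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, n_heads=4, n_steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Recurrent front-end: captures local, order-sensitive musical structure.
        self.lstm = nn.LSTM(d_model, d_model // 2, batch_first=True,
                            bidirectional=True)
        # One transformer layer applied n_steps times (weight sharing, as in the
        # Universal Transformer) to model long-range dependencies.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.n_steps = n_steps
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)              # (batch, seq, d_model)
        x, _ = self.lstm(x)                 # recurrence over time
        for _ in range(self.n_steps):       # recurrence in depth
            x = self.shared_layer(x)
        return self.out(x)                  # per-position token logits

model = LSTUTSketch()
logits = model(torch.randint(0, 512, (2, 128)))  # two sequences of 128 tokens
print(logits.shape)  # torch.Size([2, 128, 512])
```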
06.07.2021 Jacopo de Berardinis, Angelo Cangelosi and Eduardo Coutinho introduced a new computational model (EmoMucs) that considers the role of different musical voices in the prediction of the emotions induced by music. They combined source separation algorithms, which break music signals up into independent song elements (e.g., vocals, bass, drums), with end-to-end state-of-the-art machine learning techniques for feature extraction and emotion modelling (valence and arousal regression). Through a series of computational experiments on a benchmark dataset, using source-specialised models trained independently and different fusion strategies, they demonstrated that EmoMucs outperforms state-of-the-art approaches, with the advantage of providing insights into the relative contribution of different musical elements to the emotions perceived by listeners. This work was presented at the 21st International Society for Music Information Retrieval Conference. For more information please see here.
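The sketch below illustrates the general recipe of source-specialised models with fusion: one small encoder per separated stem, embeddings concatenated and regressed onto valence and arousal. The stem list, network sizes and late-fusion strategy are assumptions for illustration, not the published EmoMucs architecture.

```python
# Illustrative sketch: per-source encoders + late fusion for valence/arousal
# regression. Sizes and fusion strategy are assumptions, not the paper's model.
import torch
import torch.nn as nn

class StemEncoder(nn.Module):
    """Encodes one source's spectrogram into a fixed-size embedding."""
    def __init__(self, emb=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, emb), nn.ReLU())

    def forward(self, spec):                  # spec: (batch, 1, mels, frames)
        return self.net(spec)

class EmoFusion(nn.Module):
    def __init__(self, stems=("vocals", "bass", "drums", "other"), emb=32):
        super().__init__()
        self.encoders = nn.ModuleDict({s: StemEncoder(emb) for s in stems})
        # Late fusion: concatenate per-stem embeddings, regress two values.
        self.head = nn.Linear(emb * len(stems), 2)

    def forward(self, specs):                 # specs: dict stem -> spectrogram
        z = torch.cat([self.encoders[s](specs[s]) for s in self.encoders], dim=1)
        return self.head(z)                   # (batch, 2): valence, arousal

model = EmoFusion()
specs = {s: torch.randn(4, 1, 64, 128) for s in ("vocals", "bass", "drums", "other")}
print(model(specs).shape)  # torch.Size([4, 2])
```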
06.07.2021 Jacopo de Berardinis, Michalis Vamvakaris, Angelo Cangelosi and Eduardo Coutinho developed a novel methodology for the hierarchical analysis of music structure based on graph theory and multi-resolution community detection. This unsupervised method can perform both boundary detection and structural grouping, without the need for constraints that would limit the resulting segmentation. They demonstrate that the methodology achieves state-of-the-art performance on a well-known benchmark dataset while providing a deeper analysis of musical structure. The work has been published in the Transactions of the International Society for Music Information Retrieval. For more information please see here.
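A toy sketch of the general recipe follows: build a similarity graph over audio frames, then let community detection group the frames into structural sections, with boundaries falling where the community label changes. The feature choice, edge threshold and the single-resolution greedy algorithm are assumptions; the published method uses multi-resolution community detection and is considerably more elaborate.

```python
# Toy sketch: frame-similarity graph + community detection for music structure.
# Features, threshold and algorithm are illustrative assumptions.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
features = rng.normal(size=(120, 12))   # stand-in for 120 beat-wise chroma vectors

# Cosine similarity between all frame pairs.
unit = features / np.linalg.norm(features, axis=1, keepdims=True)
sim = unit @ unit.T

# Keep only strong similarities as weighted graph edges.
G = nx.Graph()
G.add_nodes_from(range(len(features)))
threshold = 0.5
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if sim[i, j] > threshold:
            G.add_edge(i, j, weight=float(sim[i, j]))

# Communities ~ structural groups; boundaries occur where the label changes.
communities = greedy_modularity_communities(G, weight="weight")
labels = np.empty(len(features), dtype=int)
for c, nodes in enumerate(communities):
    labels[list(nodes)] = c
boundaries = [t for t in range(1, len(labels)) if labels[t] != labels[t - 1]]
print(f"{len(communities)} groups; first boundary frames: {boundaries[:10]}")
```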
27.02.2017 Eduardo Coutinho was awarded a Knowledge Exchange & Impact Voucher from the University of Liverpool for a project entitled "Music Selections for Improving Road Safety." The project will collect data permitting analysis of the link between the music drivers hear on the road and specific driving behaviours (e.g., speeding, risk taking). These data will then be used to develop computational models that predict the potential risk of specific music pieces for driving. For more information please see here.
23.02.2017 Romain Sabathé, Eduardo Coutinho and Björn Schuller's recent work on automated music composition, conducted in collaboration with Imperial College London, has been accepted at the 2017 International Joint Conference on Neural Networks (IJCNN): "Deep Recurrent Music Writer: Memory-enhanced Variational Autoencoder-based Musical Score Composition and an Objective Measure." For more information please see here.
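For readers unfamiliar with the underlying technique, here is a compact sketch of a recurrent variational autoencoder over note sequences, in the spirit of the paper. The dimensions, token scheme and decoder conditioning are illustrative assumptions, not the published "Deep Recurrent Music Writer".

```python
# Compact sketch of a recurrent VAE over note-token sequences.
# All sizes and the conditioning scheme are assumptions, not the paper's model.
import torch
import torch.nn as nn

class RecurrentVAE(nn.Module):
    def __init__(self, n_tokens=128, hidden=256, latent=32):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.from_z = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_tokens)

    def forward(self, notes):                        # notes: (batch, seq)
        x = self.embed(notes)
        _, h = self.encoder(x)                       # final state summarises the score
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        # Condition the decoder on z at every step (teacher forcing on inputs).
        y, _ = self.decoder(x + self.from_z(z).unsqueeze(1))
        return self.out(y), mu, logvar               # reconstruction logits + latent stats

model = RecurrentVAE()
logits, mu, logvar = model(torch.randint(0, 128, (4, 64)))
# Training would minimise reconstruction cross-entropy plus the KL term:
# KL = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
print(logits.shape, mu.shape)  # torch.Size([4, 64, 128]) torch.Size([4, 32])
```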