Interdisciplinary Centre for Composition and Technology

Our Research

The Interdisciplinary Centre for Composition and Technology (ICCaT), based in the Department of Music at the University of Liverpool, investigates how music composition and sonic artforms intersect with new technology, performance and perception. ICCaT's research focuses both on musical practice and on developing technological resources and software-based tools for creative practitioners and analysts. This work falls into three main themes: Data-Driven Composition, Interactivity in A/V Composition and Performance, and Sound and Agency.

Our Research Themes

Data-Driven Composition

Data-Driven Composition explores the application of low-level technologies to help music practitioners comprehend, explore, model, and creatively manipulate digital resources.

One strand of this research involves employing a variety of computer algorithms to analyze sounds in order to elucidate their similarities and differences. Here, such models are employed to make large collections of sound browsable and to offer insight into structural organization and stylistic features.
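As an illustration of the kind of analysis involved, the sketch below compares toy sounds by summarising their spectra into coarse feature vectors and measuring cosine similarity. The band-energy features and example signals are simplified stand-ins for illustration only, not the centre's actual models.

```python
import numpy as np

def spectral_features(signal, n_bands=8):
    """Summarise a mono signal as the average magnitude in n_bands frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Three toy "sounds": two nearby sine tones and one burst of noise.
t = np.linspace(0, 1, 4096, endpoint=False)
tone_a = np.sin(2 * np.pi * 220 * t)
tone_b = np.sin(2 * np.pi * 225 * t)
noise = np.random.default_rng(0).standard_normal(4096)

fa, fb, fn = (spectral_features(s) for s in (tone_a, tone_b, noise))
print(cosine_similarity(fa, fb))  # nearby tones: high similarity
print(cosine_similarity(fa, fn))  # tone vs noise: much lower
```

Real systems would use richer descriptors (e.g., MFCCs) and many more sounds, but the principle, mapping audio to feature vectors and comparing distances, is the same one that makes large collections browsable.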

A second area of focus is creating tools that help creative practitioners work with large collections of digital materials, enabling a fluid sense of musical expression when working with sound resources that might otherwise be prohibitively large. The ultimate goal is to create new and viable modes of digital authorship across a variety of practices, including electronic music and sound art, as well as tools for computer-assisted acoustic composition and mixed-media composition.

A third strand focuses on the development of automated/assisted music composition systems using technologies such as machine learning, neural networks, and concatenative synthesis, and on developing metrics for the computational assessment of the quality of the generated music.
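A minimal sketch of assisted composition in this spirit: a first-order Markov model learns pitch transitions from a short corpus, generates a new melody, and a toy metric scores it. The corpus, model, and metric here are illustrative assumptions, far simpler than the machine-learning systems described above.

```python
import random
from collections import defaultdict

def train_markov(melody):
    """First-order Markov model: record which pitch follows which."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Sample a new melody by walking the transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1]) or [start]  # dead end: restart at opening pitch
        out.append(rng.choice(choices))
    return out

def pitch_variety(melody):
    """A toy quality metric: fraction of distinct pitches used."""
    return len(set(melody)) / len(melody)

corpus = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]  # MIDI pitch numbers
model = train_markov(corpus)
new_melody = generate(model, start=60, length=16)
print(new_melody, round(pitch_variety(new_melody), 2))
```

The separation between a generator and an objective score mirrors, at a toy scale, the pairing of composition systems with computational quality metrics mentioned above.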

Lead Members: Ben Hackbarth, Eduardo Coutinho

Example Projects: AudioGuide, DSCOM, Deep Recurrent Music Writer


Interactivity in A/V Composition and Performance

Interactivity in A/V Composition and Performance involves the use of digital technologies to facilitate the design and creation of interactive musical compositions and their realization in performance.

A central aspect of this practice is the consideration of composition and improvisation as defining a continuum for all musical practices. This is sometimes referred to as 'comprovisation' (cf. Sandeep Bhagwati, Richard Dudas).

A major strand of this theme is the creation of game-based or 'gamified' works, which draw upon pre-existing fields of inquiry such as game design theory and animated/screen-based scores (cf. Cat Hope, Lindsay Vickery, Ryan Ross Smith).

  • Composition-improvisation as a dynamic continuum
  • Musical representation with digital media
  • Principles of game design
  • Interpretation and esthesis; immersion, flow, learning, accessibility, etc.
  • Digital interfaces in musical performance

Lead Member: Paul Turowski

Example Project: Embodied Musicking Dataset


Sound and Agency

As practitioners, we are passionate about developing innovative musical works that shed new light on aspects of Sound and Agency. Some key questions for our creative processes include: how timbres may be modified; how what we are doing is affected by the conceptual frames that we devise; and how sounds may carry traces and resonances that are musical and contextual. We are interested in the practical and aesthetic matters pertaining to embodied sound, as well as in music production and its effects. Our work features detailed consideration of interpretative and performance matters, often in the context of non-traditional collaborative workflows.

One strand of our research explores acousmatic composition – more specifically studio-based composition that explores links between both sound and spatial morphology, gesture and implied agency. Approaches focus on an interplay between the intrinsic (abstract inner characteristics of sound) and the extrinsic (those aspects of sound that may be anecdotal or reference broader aspects of the human experience). Such approaches raise interesting questions about sound and source identification from the point of view of the listener.

A further strand of research focuses on fluid compositional and performance processes, specifically in the context of music that may be considered ‘hybrid’ or ‘on a cusp’. The work involves blending different approaches to performance, and addressing the challenge of how to distribute authorial responsibility amongst creative agents. Outcomes from such research include the creation of new poetic texts on the basis of resonance potential; here, phonological knowledge is used to create layers of meaning, and the processes of album production enable the technical, musical and narrative characteristics of new poetry to grow within a complex system.

In performance and in production, the kinds of modifications to sound quality that occur in our work can help us to express particular psychological or spiritual states, to distinguish between internal and external worlds, or to experience the beauties of liminal spaces. When harnessed in a particular frame or context, we as musical creators may use sound’s inherent and transformative aspects to communicate distinctive and powerful ideas that operate concurrently within different domains, musical and beyond.

Lead Members: Lee Tsang, Oliver Carman

Events

The centre curates and contextualises the performance of new music by presenting and promoting a diverse set of public musical activities which are linked to various types of cutting-edge technology and research. Our main platform is the Open Circuit Festival, which began in 2014 and has hosted nearly two dozen events at the University of Liverpool since its inception. The festival not only offers a series of free contemporary music events in Liverpool, but also provides academic context on the future of music making and technology, including panel discussions, artist talks and public demonstrations. The festival is made possible through funding from the School of the Arts.

video by Bob Wass


News

02.08.2021 Online concert for The Palaces Festival. Performance with Joby Burgess and Kathy Hinde. Works by Max de Wardener, Linda Buckley, Javier Alvarez and Eric Whitacre. For more information please see here.


02.08.2021 Completed performances and recordings of newly commissioned works by Gabriel Prokofiev, Graham Fitkin, Dobrinka Tabakova, Dario Marianelli, John Metcalfe and Tunde Jegede for Percussion and Electronics, to be broadcast by Cambridge Music Festival, Autumn 2021.


02.08.2021 On Friday 29th July 2021, Matthew Fairclough devised and performed live electronics with the London Sinfonietta at the Royal Festival Hall in the world premiere of Laura Bowler's opera Houses Slide. The entire performance was off-grid, powered by 16 on-stage cyclists. The performance was broadcast by BBC Radio 3 on 31st July 2021.


06.07.2021 Jacopo de Berardinis, Samuel Barrett, Angelo Cangelosi and Eduardo Coutinho introduced a new approach for music modelling that combines recent advancements in transformer models with recurrent networks – the long-short term universal transformer (LSTUT). The LSTUT outperforms other state-of-the-art models and can potentially learn features related to music structure at different time scales. They show the importance of integrating both recurrence and attention in the architecture of music models, and their potential use in automatic music composition systems. This work was presented at the 2020 Joint Conference on AI Music Creativity. Paper: https://boblsturm.github.io/aimusic2020/papers/CSMC__MuMe_2020_paper_46.pdf. Presentation: https://youtu.be/Bj4RAaFqqLo. For more information please see here.


06.07.2021 Jacopo de Berardinis, Angelo Cangelosi and Eduardo Coutinho introduced a new computational model (EmoMucs) that considers the role of different musical voices in the prediction of the emotions induced by music. They combined source separation algorithms for breaking up music signals into independent song elements (e.g., vocals, bass, drums) and end-to-end state-of-the-art machine learning techniques for feature extraction and emotion modelling (valence and arousal regression). Through a series of computational experiments on a benchmark dataset using source-specialised models trained independently and different fusion strategies, they demonstrated that EmoMucs outperforms state-of-the-art approaches with the advantage of providing insights into the relative contribution of different musical elements to the emotions perceived by listeners. This work was presented at the 21st International Society for Music Information Retrieval Conference. For more information please see here.
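The fusion idea can be sketched as follows: hypothetical per-source valence/arousal predictions (stand-ins for the outputs of source-specialised models) are combined by a weighted average, one of the simplest late-fusion strategies. The numbers and weights below are invented for illustration and are not taken from EmoMucs.

```python
import numpy as np

# Hypothetical per-source [valence, arousal] predictions in [-1, 1], as if
# produced by regressors specialised on separated stems.
predictions = {
    "vocals": np.array([0.6, 0.2]),
    "bass":   np.array([0.1, -0.1]),
    "drums":  np.array([0.2, 0.8]),
}

def late_fusion(preds, weights=None):
    """Weighted average of per-source predictions (a simple fusion strategy)."""
    sources = sorted(preds)
    stack = np.stack([preds[s] for s in sources])
    if weights is None:
        w = np.ones(len(sources)) / len(sources)
    else:
        w = np.asarray([weights[s] for s in sources], dtype=float)
        w = w / w.sum()
    return w @ stack

print(late_fusion(predictions))                                   # equal weights
print(late_fusion(predictions, {"vocals": 2, "bass": 1, "drums": 1}))
```

Comparing fused predictions under different weightings is one simple way to probe how much each musical element contributes to the overall emotional estimate.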


06.07.2021 Jacopo de Berardinis, Michalis Vamvakaris, Angelo Cangelosi and Eduardo Coutinho developed a novel methodology for the hierarchical analysis of music structure that is based on graph theory and multi-resolution community detection. This unsupervised method can perform both the tasks of boundary detection and structural grouping, without the need for particular constraints that would limit the resulting segmentation. They demonstrate that this methodology can achieve state-of-the-art performance on a well-known benchmark dataset, whilst providing a deeper analysis of musical structure. Their work has been published in Transactions of the International Society for Music Information Retrieval. For more information please see here.
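A greatly simplified sketch of the graph-based idea: frames of a toy feature sequence are linked when their similarity exceeds a threshold, and connected components stand in for the multi-resolution community detection used in the actual work. Because non-adjacent frames can be linked, repeated sections fall into the same structural group.

```python
import numpy as np

def similarity_graph(features, threshold=0.9):
    """Adjacency matrix: connect frames whose cosine similarity exceeds threshold."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    return (unit @ unit.T) > threshold

def connected_components(adj):
    """Label each frame by component -- a crude stand-in for community detection."""
    n = len(adj)
    labels = [-1] * n
    current = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]
        labels[start] = current
        while stack:
            node = stack.pop()
            for nbr in range(n):
                if adj[node][nbr] and labels[nbr] == -1:
                    labels[nbr] = current
                    stack.append(nbr)
        current += 1
    return labels

# Toy feature sequence: an A section, a contrasting B section, then A again.
a, b = [1.0, 0.0], [0.0, 1.0]
frames = np.array([a, a, a, b, b, a, a])
labels = connected_components(similarity_graph(frames))
print(labels)  # both A sections receive the same group label
```

Boundaries fall where the label sequence changes, and grouping emerges from the graph's components; the published method refines both ideas with community detection at multiple resolutions.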


05.07.2019

Paul Turowski will attend and have work performed at the fifth international conference on Technologies for Music Notation and Representation, TENOR. The conference, previously held in Montreal, A Coruña, Cambridge and Paris, is presented by The Sir Zelman Cowen School of Music of Monash University in Melbourne, Victoria, Australia.






For more information please see here.


03.02.2019 Dr Lee Tsang will accompany singer Patricia O'Callaghan on the red carpet at this year's Juno Awards. His recent work with Canadian composer David Braid has been nominated for Classical Album of the Year: Vocal or Choral https://junoawards.ca/nomination/2019-classical-album-of-the-year-vocal-or-choral-elmer-iseler-singers-featuring-patricia-ocallaghan/. The Junos are Canada's leading music industry awards, equivalent to the Brits/Grammys. For more information please see here.


21.05.2018 We are pleased to announce the successful awarding of a Partnership Development Grant from the Social Sciences and Humanities Research Council of Canada to the TENOR (Technologies of Notation and Representation) Network, which includes the University of Liverpool as well as several other research institutions throughout the UK, Australia, France, Germany, US and Canada. This network is dedicated to the development, exploration, categorization, and critical examination of new technologies in the field of music representation. Dr. Paul Turowski will serve as the liaison for the ICCaT's involvement in this partnership.

More details about this partnership and the opportunities that it will provide will follow the upcoming TENOR conference in Montreal. For more information please see here.


08.03.2017 Ben Hackbarth's Am I a Particle or a Wave? will be performed by Nexeduet (Juanjo Llopico and Sisco Aparici) at the Festival de Musica Contemporanea de Cordoba on April 1st 2017 and at the Festival "Flesap" in Segorbe, Spain, on March 25th 2017. For more information please see here.


06.03.2017 Oli Carman's work Electric Strings has been awarded 2nd Prize in the 2017 Xenakis Electronic Music Competition. There were 270 international submissions; selection was anonymous and the jury was chaired by Denis Smalley. For more information please see here.


28.02.2017 Bido Lito has a nice write up about this year's Open Circuit Festival in issue 75. For more information please see here.


27.02.2017 Eduardo Coutinho was awarded a Knowledge Exchange & Impact Voucher from the University of Liverpool for a project entitled "Music Selections for Improving Road Safety". In this project, we will collect data that will allow us to analyse the link between the music heard by drivers on the road and specific driving behaviours (e.g., speeding, risk taking). This data will then be used to develop computational models that predict the potential risk of specific music pieces for driving. For more information please see here.


24.02.2017 Benjamin Hackbarth's new piece "Liquid Study no. 2" for piano and electronic sound will be premiered by Ian Buckle at the University of Leeds' International Concert Series on March 10. For more information please see here.


23.02.2017 Romain Sabathé, Eduardo Coutinho and Björn Schuller's recent work on automated music composition, carried out in collaboration with Imperial College London, has been accepted to the 2017 International Joint Conference on Neural Networks (IJCNN): "Deep Recurrent Music Writer: Memory-enhanced Variational Autoencoder-based Musical Score Composition and an Objective Measure". For more information please see here.