Parametric Synthesis with AudioGuide

by Benjamin Hackbarth
September 7, 2016, 3:00 pm

I’ve been experimenting with a method for using concatenative synthesis to generate parameters for realtime signal processing. We begin with two input sources: a stable, held tone and a target soundfile. The stable tone is used in the output as a mockup of a realtime signal to be processed. The timbral contour of the target soundfile drives the processing routine applied to the stable tone.



stable sound (clarinet):
target sound (john):

First, a database is created in order to exhaustively sample the combination of a stable sound processed with a DSP algorithm. In this example, I use a clarinet playing G3 for the steady sound and a combination of bandpass filtering and pulse modulation as the DSP routine. I use Python to algorithmically generate every permutation of the DSP parameters, each sampled at a number of discrete steps, as sketched below. The result is a long soundfile of short utterances, each consisting of the clarinet sound processed with a different parameterisation of the signal processing. Think of this database as capturing a range of possibilities when combining a steady sound with a DSP routine.
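Here is a minimal sketch of that permutation step. The parameter names, ranges, and utterance duration are illustrative assumptions, not the values used for the examples here, and the actual audio rendering of each utterance is left to whatever environment hosts the filtering and pulse modulation.

import itertools
import json

# Hypothetical DSP parameters: bandpass centre/resonance and pulse modulation.
# These names and step values are assumptions for illustration only.
param_grid = {
    "bp_center_hz":  [200, 400, 800, 1600, 3200],   # bandpass centre frequency
    "bp_q":          [2, 8, 32],                    # bandpass resonance
    "pulse_rate_hz": [1, 4, 16, 64],                # pulse modulation rate
    "pulse_duty":    [0.25, 0.5, 0.75],             # pulse duty cycle
}

names = list(param_grid)
utterance_dur = 0.5  # assumed seconds per database utterance

# Every combination of every parameter step: the exhaustive sampling
# of the DSP routine applied to the steady clarinet tone.
entries = []
for i, values in enumerate(itertools.product(*param_grid.values())):
    entries.append({
        "onset": i * utterance_dur,          # position in the long database file
        "params": dict(zip(names, values)),  # the settings that render this slice
    })

# Save the onset -> parameter map; the audio itself would be rendered by
# applying the DSP chain to the clarinet tone once per entry.
with open("db_param_map.json", "w") as f:
    json.dump(entries, f, indent=2)

print(f"{len(entries)} utterances, total {len(entries) * utterance_dur:.0f} s")

Keeping the onset-to-parameter map alongside the rendered soundfile is what makes the later extraction step possible: each segment AudioGuide selects can be traced back to the settings that produced it.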



Next I run AudioGuide using a target soundfile of my choice and the database created in step one. I set up AudioGuide to pick the best-matching database segment for each 20 millisecond slice of the target. The resulting output soundfile isn’t all that interesting on its own, but rather than using the sounds contained in the database soundfile, we can extract the original DSP parameters that created each of these slices. We end up with a sequence of time-varying parameters for our DSP algorithm which follows the timbral profile of the target sound, remapped onto the timbral space of the DSP processing of a steady-state sound. The result follows the temporality of the target; I also used the target’s amplitude envelope to make the correspondence a bit more palpable.
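As a rough sketch of the extraction step, suppose each selected database segment is traced back to the parameters that rendered it. The file name and layout of selections.json below are hypothetical stand-ins for however one exports AudioGuide’s segment selections; the mapping logic is what matters.

import json

# Load the onset -> parameter map written during database creation.
with open("db_param_map.json") as f:
    db_entries = json.load(f)

# Index database utterances by onset so a selected segment can be
# traced back to the DSP settings that produced it.
by_onset = {round(e["onset"], 3): e["params"] for e in db_entries}

# Hypothetical selection list: one chosen database onset per 20 ms
# target slice, e.g. [[0.00, 12.5], [0.02, 3.0], ...]
with open("selections.json") as f:
    selections = json.load(f)

# Build the time-varying parameter sequence that follows the target's
# timbral profile, remapped onto the DSP parameter space.
automation = []
for target_time, db_onset in selections:
    params = by_onset[round(db_onset, 3)]
    automation.append({"time": target_time, **params})

with open("dsp_automation.json", "w") as f:
    json.dump(automation, f, indent=2)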

output:

output mixed with target sound:

Here is a second rendering which uses more extreme value ranges for the signal-processing parameters.

output:

output mixed with target sound:

Below is a graph of the parameters that create the first output sound. As you can see, the movement of the parameters, as well as the manner in which they correlate, is complex and would be difficult (perhaps impossible) to create by hand. Using a target soundfile, however, lets the composer specify temporal and timbral aspects both intuitively and with precision.

Plots showing the values of each of the six parameters changing in time according to AudioGuide’s concatenative synthesis selections.
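To reproduce a plot along these lines, a short matplotlib sketch reading the hypothetical automation file from the previous sketch might look like this:

import json
import matplotlib.pyplot as plt

# Plot each DSP parameter's trajectory over time, one subplot per parameter.
with open("dsp_automation.json") as f:
    automation = json.load(f)

times = [a["time"] for a in automation]
param_names = [k for k in automation[0] if k != "time"]

fig, axes = plt.subplots(len(param_names), 1, sharex=True,
                         figsize=(8, 2 * len(param_names)))
for ax, name in zip(axes, param_names):
    # Step plot: each parameter holds its value until the next 20 ms slice.
    ax.step(times, [a[name] for a in automation], where="post")
    ax.set_ylabel(name)
axes[-1].set_xlabel("time (s)")
plt.tight_layout()
plt.show()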


I have used this technique in a number of recent pieces. For example, the following extended passage from my piece Volleys of Light and Shadow (2014) uses a speech-like target sound to drive the DSP of a long cello glissando.
