
OM-Pursuit


Corpus-Based Sound Modelling for Computer-aided Composition.

 

Overview

OM-Pursuit is a library for dictionary-based sound analysis/synthesis methods in OpenMusic.

Parametric sound representations have long served as conceptual models in compositional contexts (see, e.g., the French spectralist school). Today a number of software tools allow composers to derive symbolic data from continuous sound phenomena, for instance by extracting frequency structures via sinusoidal partial tracking. Most of these tools, however, are based on the Fourier transform, which decomposes static frames of a time-domain signal into sinusoidal (frequency) components. This method is less adequate for faithfully representing non-stationary sounds such as noise and transients.
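
For reference, these tools rest on the discrete Fourier transform of a windowed frame, which expresses the frame as a sum of fixed-frequency sinusoids (standard notation, not specific to any particular tool):

```latex
X[k] \;=\; \sum_{n=0}^{N-1} w[n]\, x[n]\, e^{-i 2\pi k n / N}, \qquad k = 0, \dots, N-1
```

Each bin k corresponds to a sinusoid assumed stationary over the entire frame, which is precisely why noise and transients end up smeared across many bins rather than localized.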

Dictionary-based methods offer a different model, representing sound as a linear combination of individual atoms stored in a dictionary. Mostly used for sparse signal representation in audio coding (compression, transmission, etc.), this “granular” model offers interesting new possibilities for computer-aided composition, ranging from algorithmic transcription to sound synthesis and computer-aided orchestration.
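
Formally, the dictionary model approximates a signal x as a sparse weighted sum of atoms drawn from a dictionary D (generic notation, not OM-Pursuit's own):

```latex
x \;\approx\; \sum_{m=1}^{M} \alpha_m \, g_{\gamma_m}, \qquad g_{\gamma_m} \in D
```

Here each amplitude α_m scales an atom selected by the index γ_m, which typically encodes both which atom is used and where it is placed in time; M is small relative to the signal length, hence “sparse”.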

Using sound samples as the atoms of the dictionary, we can design the ‘timbral vocabulary’ of an analysis/synthesis system. The dictionaries used to analyze a given audio signal can be built from arbitrary collections of sounds, such as instrumental samples, synthesized sounds, or recordings. OM-Pursuit uses an adapted matching pursuit algorithm (see pydbm by G. Boyes) to iteratively approximate a given sound with a combination of the samples in the dictionary, in a way comparable to photo-mosaicing techniques in the visual domain. Due to its greedy (iterative) nature, this approach is particularly well suited to deferred-time applications.
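
As an illustration of the greedy principle (a minimal sketch, not pydbm's actual implementation), here is a shift-invariant matching pursuit over unit-norm sample atoms in Python; all names are hypothetical:

```python
import numpy as np

def matching_pursuit(target, atoms, n_iterations):
    """Greedily decompose `target` as a weighted sum of time-shifted atoms."""
    residual = target.astype(float)
    model = np.zeros_like(residual)
    for _ in range(n_iterations):
        best = None  # (|correlation|, atom index, offset, amplitude)
        for i, atom in enumerate(atoms):
            # Inner product of the atom with the residual at every offset.
            corr = np.correlate(residual, atom, mode='valid')
            offset = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[offset]) > best[0]:
                best = (abs(corr[offset]), i, offset, corr[offset])
        _, i, offset, amp = best
        atom = atoms[i]
        # Subtract the scaled atom from the residual; accumulate the model.
        residual[offset:offset + len(atom)] -= amp * atom
        model[offset:offset + len(atom)] += amp * atom
    return model, residual

# Atoms must be normalized to unit energy so that the correlation value
# at the best offset is also the optimal amplitude.
rng = np.random.default_rng(0)
atoms = [a / np.linalg.norm(a) for a in (rng.standard_normal(64),
                                         rng.standard_normal(64))]
target = rng.standard_normal(1024)
model, residual = matching_pursuit(target, atoms, n_iterations=50)
```

Each iteration removes the single best-matching atom placement from the residual, so the approximation improves monotonically; the cost is that the search is exhaustive per iteration, which is why the method suits deferred-time rather than real-time use.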

Communication between OpenMusic and the DSP kernel (the pydbm executable) is handled via an SDIF interface and scripts that are generated and manipulated using the visual programming tools in the computer-aided composition environment. The decomposition of a target sound yields a sound file (the model), the residual (the difference between target and model), and a parametric description of the model. We developed a number of tools for visualizing and editing this model using different representations (spatial, tabulated, temporal). The model data can be used and manipulated within OpenMusic and mapped to a variety of domains, including sound spatialization and synthesis processes, computer-aided orchestration and transcription, and graphical notation of musical audio.
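
The relation between these three outputs can be checked directly, since the residual is the sample-wise difference between target and model. A minimal Python sketch, assuming hypothetical file names and the soundfile library for audio I/O:

```python
import numpy as np
import soundfile as sf  # assumed available; any audio I/O library works

# Hypothetical output files from a decomposition run.
target, sr = sf.read('target.wav')
model, _ = sf.read('model.wav')

# The residual is the sample-wise difference between target and model.
n = min(len(target), len(model))
residual = target[:n] - model[:n]

# Signal-to-residual ratio in dB: a rough measure of approximation quality.
srr = 10 * np.log10(np.sum(target[:n] ** 2) / np.sum(residual ** 2))
print(f'signal-to-residual ratio: {srr:.1f} dB')
sf.write('residual.wav', residual, sr)
```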