The Music Technology Group is active in research related to multimedia, signal processing, human-computer interaction, information retrieval, musical acoustics, psychoacoustics, and music perception and cognition.
The M.A. and Ph.D. graduate programs offered in Music Technology at McGill University are heavily based on technological and scientific research, with applications to music and sound. While our program is administered through the Schulich School of Music, our research goals share many similarities with those found in engineering and science faculties.
Our group's main expertise spans the research areas described below.
A more detailed description of the research interests of each staff member can be found below.
Dr. Philippe Depalle
My research in digital synthesis and processing of sound mainly concerns the analysis and re-synthesis of audio signals. The fundamental component of my work is the systematic application of the “analysis/synthesis” viewpoint to the design of computer music tools. For the user, it unifies sound synthesis and sound processing within the same framework. For the researcher, it directs the work mainly toward the analysis stage, since the major difficulty in studying sound signals is tracking their temporal and spectral evolution precisely. The sensitivity and precision of the human ear compound this difficulty by demanding high-quality results.
The basic goal of analysis/synthesis is to conceive relevant models of acoustical signals. Once a synthesis model has been chosen, a sound signal is represented in this model by a set of temporal functions often called control parameters. These temporal functions are extracted from a pre-recorded sound during the analysis process. When the analysis becomes sufficiently precise, re-synthesis produces a signal that sounds perceptually identical to the original sound. By modifying and substituting control parameters, this analysis-synthesis scheme can provide for very refined and precise processing of sound material. For example, it may allow one to produce a family of synthetic sound signals derived from a single original source or to carry out a morphing between two key sounds.
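As a minimal sketch of this idea (not the specific models developed here), the fragment below treats a single partial's amplitude and frequency envelopes as control parameters, re-synthesizes the sound from them, and then transforms the result by modifying one envelope. The envelope values are hypothetical, chosen only for illustration.

```python
import numpy as np

sr = 44100
dur = 0.5
t = np.arange(int(sr * dur)) / sr

# "Control parameters": time-varying amplitude and frequency envelopes
# for one partial (hypothetical values for illustration).
amp_env = np.linspace(1.0, 0.2, t.size)               # decaying amplitude
freq_env = 440.0 + 20.0 * np.sin(2 * np.pi * 5 * t)   # 5 Hz vibrato

# Re-synthesis: integrate the frequency envelope to obtain phase.
phase = 2 * np.pi * np.cumsum(freq_env) / sr
partial = amp_env * np.sin(phase)

# Modifying a control parameter processes the sound: here, transposing
# by scaling the frequency envelope while keeping the amplitude intact.
transposed = amp_env * np.sin(2 * np.pi * np.cumsum(1.5 * freq_env) / sr)
```

In a full analysis/synthesis system these envelopes would instead be extracted from a recorded sound during analysis; substituting envelopes from two different sounds is one way to realize the morphing mentioned above.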
The significance of this research is threefold: scientific, industrial, and artistic. In science, the study of musical instruments and musical sounds opens opportunities for innovative research on non-linear oscillating systems, fast time-varying systems, room acoustics, psychoacoustics, and related topics; it also provides simulation tools for psychoacoustic experiments and coding hypotheses, among others. For industry, it informs the design of audio processing, audio recording, and transmission systems, as well as electronic musical instruments. In the artistic field, it provides composers and multimedia artists with tools for the creation and processing of sounds. It should be noted that the requirements for high quality in musical applications have always been an incentive for prospective research in applied acoustics, signal processing, and computer science.
Dr. Ichiro Fujinaga
My research goal is to provide technological tools for musicians, music scholars, and researchers in other music-related areas such as psychology, psychoacoustics, and neuroscience.
My current primary research focus is on learning, in particular the exemplar-based learning model. This type of learning, I believe, is at the core of learning about music and of other perceptual learning. This is in contrast to the traditional rule-based understanding of music, which in my opinion has not been successful at explaining how we hear music. A computer implementation of this cognitive model, using a k-nearest neighbour classifier and genetic algorithms, is under development. This research grew out of my work on optical music recognition, where the application of this model has been extremely successful.
Application possibilities using this model include: music instrument recognition, conducting gesture learning, music style recognition, expressive performance, counterpoint, and harmonization.
Other research interests include: distributed digital libraries, music information retrieval, software synthesis, virtual concert stage, vibrato analysis, and continuing research in optical music recognition.
Dr. Gary P. Scavone
The shape and design of most acoustic music instruments, refined and advanced by craftsmen through empirical, “trial and error” methods, have changed little over the past century. The study of the acoustic phenomena underlying the operation of these instruments, however, is a relatively young science. My research focuses on, but is not limited to, woodwind music instruments and includes the topics described below.
Recent acoustic analyses have focused on vocal-tract influence in woodwind instrument performance and fluid-structure interactions in wind instrument systems. My acoustic modeling work is concerned with the characterization of the various interdependent components of a music instrument system, such as the mouthpiece, air column, and toneholes of a clarinet. This approach is commonly referred to as “physical modeling”. A discrete-time technique called “digital waveguide synthesis” is often used to efficiently and accurately implement these acoustic models. Recent synthesis developments have been focused on aspects of woodwind instrument toneholes, conical air columns, vocal tract influences, and reed/mouthpiece interactions. Several human-computer interfaces have been developed in the course of this research for the purposes of experimenting and performing with the real-time synthesis models.
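To give a flavour of the digital waveguide idea, the sketch below implements the classic plucked-string case (the Karplus-Strong algorithm): a delay line models the travelling wave, and a simple averaging loop filter models frequency-dependent loss at each round trip. This is a deliberately minimal relative of the woodwind models described above, which additionally require a nonlinear reed/mouthpiece element and tonehole scattering junctions.

```python
import numpy as np

sr = 44100
f0 = 220.0
N = int(sr / f0)              # delay-line length sets the pitch

# Excite the delay line with noise (the "pluck"), then let the loop
# filter (a two-point average plus a small loss factor) shape the
# travelling wave on each round trip.
rng = np.random.default_rng(0)
delay = rng.uniform(-1, 1, N)
out = np.empty(sr)            # one second of sound
for n in range(out.size):
    out[n] = delay[0]
    new_sample = 0.5 * (delay[0] + delay[1]) * 0.996  # loss per trip
    delay = np.roll(delay, -1)
    delay[-1] = new_sample
```

The averaging filter damps high frequencies faster than low ones, giving the decaying, string-like tone; in a woodwind waveguide the passive delay line is closed instead by a reed nonlinearity that sustains the oscillation.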
Acoustic and psychoacoustic experiments also play a role in this research. In many instances, acoustic theory must be validated by experimental measurements. Psychoacoustic studies can aid in the development of efficient and convincing synthesis models by helping to identify acoustic features of a system that have less perceptual importance for human listeners.
To support the design and implementation of real-time synthesis models, a software synthesis environment called the Synthesis ToolKit in C++ (STK) has been developed in collaboration with Perry Cook at Princeton University. STK is a set of open source audio signal processing and algorithmic synthesis classes written in C++. The ToolKit was designed to facilitate rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, real-time control, ease of use, and educational example code.
This research has applications in the development of commercial sound synthesizers for the creation of common acoustic instrument sounds, as well as sounds based on physical, yet unrealizable, instruments. In addition, continuing developments in our understanding of acoustic principles will make possible computer-based prototyping tools which will ultimately lead to improvements in music instrument designs.
Dr. Stephen McAdams
My research goal is to understand how listeners mentally organize a complex musical scene into sources, events, sequences, and musical structures. In my laboratory we use techniques of digital signal processing, mechanics, psychophysics, cognitive psychology, and cognitive neuroscience.
The origin of music is in sound-producing objects. We seek to understand how listeners perceive the events produced by such objects in terms of the mechanical nature of the objects and the ways objects interact (impact, friction, blowing) to set them in vibration: a new field that I have dubbed “psychomechanics”, since we try to quantify the relation between the properties of mechanical objects and the perception of the events they produce. An understanding of the minimal sound cues that allow us to identify sources and events is important for sound synthesis technologies used in virtual reality, for example.
One of the most mysterious musical properties of sound events, very closely related to source properties, is their timbre. Timbre is a whole set of dimensions of musical perception, such as brightness, roughness, attack quality, richness, inharmonicity, and so on. We try to understand how this palette of attributes is organized perceptually and how it depends both on the acoustic properties of sound events and on the context in which they occur. We are also interested in how timbre can be used as an integral part of musical discourse through orchestration or sound synthesis and processing.
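One of these attributes, brightness, is commonly correlated with the spectral centroid: the amplitude-weighted mean frequency of the spectrum. As a hedged illustration (a standard acoustic descriptor, not the laboratory's specific methodology), the sketch below compares a pure tone with a version carrying a strong upper partial.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency: a common correlate of brightness."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

sr = 44100
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 220 * t)                    # fundamental only
bright = dull + 0.8 * np.sin(2 * np.pi * 2200 * t)    # strong upper partial
# The tone with more high-frequency energy has the higher centroid.
```

Descriptors like this capture only one dimension at a time; relating the full multidimensional timbre space to perception is precisely what the experiments described above address.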
In music, many such objects often play at the same time, which means that the listener must organize the musical scene into events and sequences that carry musical information about the behavior of sound sources (a musical instrument playing a melody, for example). However, composers can play with sound in ways that make a listener hear several sources as one (blending sounds), or use sound synthesis to make a single sound split into several (sound segregation). Musical scene analysis is affected not only by what we hear but also by what we see. We will thus be extending work on auditory scene analysis to multimodal scene analysis more generally.
Music happens in time and the ephemeral world of temporal experience is another concern in my laboratory. We are interested in how cognitive processes such as attention, memory, recognition, and structural processing, as well as more emotional and esthetic experience of music, take place in time and are related to musical structure. We have developed and employed various techniques for measuring and analyzing continuous responses during music listening in live concert settings to probe the cognitive dynamics of musical experience.
Dr. Marcelo M. Wanderley
The rapid evolution of computer technology has made current personal computers powerful enough to synthesize high-quality sound in real time. One of the foremost research problems in music technology today is how to use this processing power as part of novel musical instruments - digital musical instruments (DMIs) - where sound is generated by the machine.
But what does it mean to play a digital musical instrument? Can one actually play a computer in the sense one plays an acoustic instrument? Can similar levels of control subtlety be achieved with this new paradigm?
In the Music Technology area at McGill, we focus on the analysis of performer-instrument interaction, with applications to gestural control of sound synthesis. This goal is pursued through a two-pronged approach:
In the first approach, we explore the notion of gesture in music and consider ways to devise gestural acquisition and the design of input devices, including proposing evaluation techniques, derived from human-computer interaction, that are suitable in a musical context. This is complemented by the analysis of mapping strategies between controller variables and synthesis variables. Applications include the prototyping of novel gestural controllers and digital musical instruments, as well as software systems such as ESCHER, a real-time system developed in collaboration with researchers at IRCAM.
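A mapping layer of this kind can be sketched as a small function from controller variables to synthesis variables. Everything below is hypothetical (invented sensor names, ranges, and parameter curves, not ESCHER's interface); it illustrates a one-to-many mapping in which one gesture variable influences several synthesis parameters at once.

```python
def map_gesture(pressure, position):
    """Map normalized sensor readings in [0, 1] to synthesis parameters.

    `pressure` and `position` are hypothetical controller variables;
    the output names and ranges are likewise illustrative.
    """
    amplitude = pressure ** 2                       # gentler onset curve
    brightness = 0.3 + 0.7 * pressure * position    # coupled control
    pitch_hz = 220.0 * 2 ** (position * 2)          # two-octave range
    return {"amplitude": amplitude,
            "brightness": brightness,
            "pitch_hz": pitch_hz}
```

The choice of coupling - here, brightness depends on both variables while pitch depends on one - is exactly the kind of design decision the mapping-strategy analysis mentioned above is meant to inform.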
In the second approach, we perform and analyze quantitative measurements of instrumentalists' expressive movements - those not produced in order to generate sound - during the performance of pieces. We also study the acoustical influence of performers' expressive movements and model this effect. This research suggests that expressive movements can be used as extra synthesis parameters and may eventually improve the design of existing digital musical instruments that simulate traditional ones.