Fall or Winter
Discrete-time signal processing concepts and techniques. Discrete-time Fourier transform and series, linear time-invariant systems, digital filtering, spectral analysis of discrete-time signals, and the z-transform.
Prerequisite: MUMT 307
Prof. Philippe Depalle
Most digital sound synthesis methods and audio processing techniques are based on the spectral representation of sound signals. This seminar starts with a theoretical and practical study of spectral representation, spectral analysis, and spectral modification of sound signals. Digital sound synthesis and sound processing techniques are then presented as specific spectral models or alterations, from which their capabilities, properties, and limitations are deduced. The techniques explored in this context include the phase vocoder, additive synthesis, source-filter synthesis, distortion synthesis and processing, waveguide synthesis, and reverberation. Available computer music software and ad hoc patches are used as examples and illustrations. Although the emphasis is on basic principles rather than details of implementation, a full command of Max/MSP is required for the assignments.
Kojiro Umezaki and Bruce Pennycook
This seminar explores topics central to the research areas of machine listening and machine composition from technical and aesthetic perspectives.
Techniques including rule-based (symbolic) processing and network-based (sub-symbolic) processing will be considered. These techniques will be applied to musical representations based on pre-transcribed data (e.g., MIDI) and to direct processing of audio data, all with a focus on viable real-time implementations.
Machine listening topics will include beat tracking, meter induction, key induction, score following, segmentation, and pattern processing. Machine composition topics will focus on fundamental algorithmic techniques and aesthetic issues.
Five special presentations will be given by Bruce Pennycook on the following topics:
There will be one examination based on the principal readings, one paper, an individual presentation, and a final project. The final project will be a musical example based on ideas explored in the paper, typically implemented in C/C++ or Max/MSP.
Prof. Stephen McAdams
Music theoretic, performance-related, psychophysical, and cognitive perspectives on musical timbre and its role as a bearer of musical form will be surveyed and discussed. The main aim is to lay the groundwork for a unified theory of musical timbre. A variety of interdisciplinary topics will be covered concerning the psychophysical “representation” of timbre in the auditory system, multidimensional models of timbre as predictors of perceptual and musical effects of timbre, the role of timbre as a structuring force in music, the possible limits imposed on this role by perception and memory for absolute timbre and timbral relations, and the use of timbre as an expressive device in performance and sound recording.
This seminar is open to graduate students in music theory, composition, performance, music technology, sound recording, cognitive psychology and other related disciplines. Evaluation will be based on active in-class participation, group projects presented in class, and a term paper.
Prof. Stephen McAdams
Music theoretic, performance-related, psychophysical, and cognitive perspectives on contemporary musical materials and form will be surveyed and discussed. The main aim is to lay the groundwork for a theory of the dynamics of musical listening and experience. The seminar covers a variety of interdisciplinary topics concerning the conception, perception, and memory of contemporary musical materials, as well as the cognitive, emotional, and aesthetic aspects of music listening in time. It will combine considerations of a compositional, music theoretic, and cognitive psychological nature to attempt to understand these complex phenomena as they operate in real music listening, whether to recorded or to live music in a concert setting.
Prerequisite: Ability to read music. The seminar will be based on the e-book McAdams, S., & Battier, M. (2005). Creation and Perception of a Contemporary Musical Work: The Angel of Death by Roger Reynolds. Paris: IRCAM. Evaluation: Grades will be based on active participation in class discussions and debates [10%], class assignments [10%], presentation of group projects [40%], and a term paper [40%].
Prof. Gary Scavone
This seminar will focus on methods for discrete-time modeling of musical acoustic systems. Topics to be covered will include discretization techniques, lumped vs. distributed system characterizations, and delay-line interpolation with applications to delay-based audio effects (phasing, flanging, chorus), artificial reverberation, and musical instrument models (plucked, struck, and bowed strings, woodwinds, brasses, and the human voice). In addition, multi-dimensional modeling techniques will be presented. Assignments will make use of Matlab and C++ software.
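As a minimal illustration of one of the topics above, the sketch below shows a fractionally interpolated delay line of the kind underlying flanging, chorus, and waveguide string models. The class and function names are illustrative, not taken from the course materials; a real implementation would add modulation, feedback, and higher-order interpolation.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal circular delay line with linear interpolation between samples.
// Fractional delays allow smooth flanger sweeps and precise string tunings
// in waveguide models. (Illustrative sketch only.)
class DelayLine {
public:
    explicit DelayLine(std::size_t maxDelay)
        : buffer_(maxDelay + 1, 0.0), writeIndex_(0) {}

    // Write one input sample, then read the output delayed by `delay`
    // samples (which may be fractional).
    double tick(double input, double delay) {
        buffer_[writeIndex_] = input;
        double readPos = static_cast<double>(writeIndex_) - delay;
        const double n = static_cast<double>(buffer_.size());
        while (readPos < 0.0) readPos += n;
        std::size_t i0 = static_cast<std::size_t>(readPos);
        double frac = readPos - static_cast<double>(i0);
        std::size_t i1 = (i0 + 1) % buffer_.size();
        writeIndex_ = (writeIndex_ + 1) % buffer_.size();
        // Linear interpolation between the two neighboring samples.
        return (1.0 - frac) * buffer_[i0] + frac * buffer_[i1];
    }

private:
    std::vector<double> buffer_;
    std::size_t writeIndex_;
};

// A feedforward comb filter y[n] = x[n] + g * x[n - d(n)] is the core of
// phasing/flanging effects when the delay d(n) is slowly modulated.
double flangerSample(DelayLine& dl, double x, double delay, double g = 0.7) {
    return x + g * dl.tick(x, delay);
}
```

Feeding an impulse through `tick` with a delay of 3 samples returns the impulse three ticks later; a delay of 0.5 splits it between two adjacent samples, which is the behavior string-tuning and sweep effects rely on.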
Prof. Marcelo Wanderley
Review of basic technologies used in the design of input devices for musical expression. Discussion of the most common types of electronic sensors and associated conditioning circuits, with examples of their application in several gestural controllers presented in the literature. Students should have some prior knowledge of analog electronics.
Prof. Marcelo Wanderley
Computers have long been able to synthesize high-quality sound in real time. The question nowadays is how to play the computer as a real-time instrument. In order to answer this question, the analysis of performer gestures and the design of digital musical instruments using the computer are essential steps towards the definition of the interaction possibilities between the performer and the machine. This seminar aims at presenting the basic notions regarding human-computer interaction (HCI) in complex, multi-parametric contexts such as computer music and interactive live performance.
Specifically, the design of gestural controllers will be analysed in detail through:
This seminar will investigate current research activities in the area of music information acquisition, preservation, and retrieval. The goal is discovering ways to efficiently find, store, and retrieve musical information. Although the field is relatively new, it encompasses various music disciplines including music analysis, music education, music history, music theory, music psychology, and audio signal processing. Each student will be expected to present various music information acquisition, preservation, and retrieval topics along with literature reviews. Each presentation should be accompanied by web pages created by the presenter. The final project may consist of software development, a theoretical paper, or an extended review paper. The class format will be presentations followed by discussions. Potential topics include: Themefinder, MELDEX, Cantus, audio content analysis and search, focused web crawling, melodic similarities, computer-aided transcription, timbre recognition, speech/music separation, best practices for music and audio preservation, P2P technologies, audio and music formats (MPEG-4/7/21, MP3, XML, MEI), and Web Services.
Prof. Philippe Depalle
This seminar presents current research trends in time-frequency representations and parametric modeling in the context of music and audio applications. Specific focus is placed on the analysis of sounds using parametric methods. Students should have prior knowledge of sound analysis and resynthesis techniques and of digital signal processing.
Profs. Sean Ferguson and Marcelo Wanderley
Until the beginning of the 20th century, the design of musical instruments relied upon mechanical systems and acoustical properties of tubes, strings, and membranes. With the advent of electricity, luthiers were able to experiment with the new possibilities offered by electrical and electronic means. A whole different set of possibilities became available to instrument designers, including new ways to generate sound and to design control surfaces of any arbitrary shape.
This course will focus on systems that use the computer as the sound-generating device, a choice that offers the flexibility of a general-purpose architecture able to implement different synthesis techniques. An instrument that uses computer-generated sound is known as a digital musical instrument and consists of a control surface driving, in real time, the parameters of a synthesis algorithm implemented in the computer. The synthesis parameters are controlled using input devices, or gestural controllers, that can in principle track any type of movement or gesture, thereby allowing far more control possibilities than those offered by the standard piano-like interface.
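The controller/synthesis separation described above can be made concrete as a mapping layer that turns gestural data into synthesis parameters. The sketch below is a hypothetical, deliberately minimal example (all names and the specific mapping are illustrative, not from the course): sensor position controls pitch and sensor pressure controls loudness, feeding a sine oscillator.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical digital-musical-instrument skeleton: a gestural controller
// produces normalized sensor values in [0, 1]; a mapping layer converts
// them into synthesis parameters; the synthesizer renders samples from
// those parameters. The mapping layer is where the instrument's "feel"
// is designed.
struct SynthParams {
    double frequencyHz;  // oscillator pitch
    double amplitude;    // linear gain in [0, 1]
};

// One possible (one-to-one) mapping: position -> pitch, pressure ->
// loudness. Many-to-one and one-to-many mappings are equally valid
// design choices.
SynthParams mapGesture(double position, double pressure) {
    SynthParams p;
    // Exponential pitch mapping over two octaves above 220 Hz, so equal
    // sensor displacements correspond to equal musical intervals.
    p.frequencyHz = 220.0 * std::pow(2.0, 2.0 * position);
    p.amplitude = pressure;  // direct, linear loudness control
    return p;
}

// Trivial sine synthesizer: one sample at time t (seconds).
double renderSample(const SynthParams& p, double t) {
    const double twoPi = 2.0 * 3.14159265358979323846;
    return p.amplitude * std::sin(twoPi * p.frequencyHz * t);
}
```

Because the mapping is an independent software layer, the same controller can drive entirely different synthesis algorithms, which is precisely the flexibility the course contrasts with fixed acoustic designs.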
As a consequence, new digital musical instruments do not necessarily bear any resemblance to existing acoustic instruments. In this context, a number of questions arise: How does one play or compose for these new instruments? Can digital musical instruments become as viable as those on which we are accustomed to performing? What is the role of virtuosity in such contexts? Will a repertoire ever be built for these instruments? What is the balance between technological obsolescence and technical mastery?
This course will deal with the various issues relating to new digital musical instruments in the following three areas: a) Music Technology, b) Performance, and c) Composition. Course work will consist of collaborative projects involving students in Composition, Performance, and Music Technology.
Any Music Technology Professor
Independent Music Technology project. Students will prepare a statement of objectives, a comprehensive project design and a schedule of work, and will undertake the project on appropriate music technology platforms.