Current Research

This page outlines research projects that I am currently working on, or that I am hoping to return to soon. Most of the software related to these projects can be found, where appropriate, in the software projects section of this web site.

My commercial research is not listed here, as I am limited by non-disclosure agreements.

SIMSSA: The Single Interface for Music Score Searching and Analysis Project

SIMSSA (Single Interface for Music Score Searching and Analysis) is a major long-term international research project involving a large number of institutions and millions of dollars of grant funding. The primary goal of the project is to teach computers to recognize and understand the symbols in musical manuscripts archived at libraries and museums around the world. The resultant data is then to be assembled on a single website, making it possible to easily search and analyze the online scores. SIMSSA will thus create an architecture for processing music documents, transforming vast music collections into symbolic representations that can be searched, studied, analysed and performed anywhere in the world. The project is organized around two main research axes.

My own primary role in this research is in the Analysis Axis, where I will focus on applying music information retrieval (MIR) techniques to the symbolic content in order to arrive at meaningful statistics that can be used to characterize and organize music. In particular, this will involve expanding and adapting the jSymbolic and ACE components of the jMIR suite in order to take advantage of and integrate their feature extraction and machine learning functionality. This work will also be combined with work done as part of the Music Information, Research, and Infrastructure (MIRAI) program.
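To give a flavour of the kind of symbolic feature jSymbolic computes, the sketch below builds a normalized pitch-class histogram in Python. jSymbolic itself is written in Java, so this is only an illustration of the idea; the function name and the sample notes are invented:

```python
from collections import Counter

def pitch_class_histogram(midi_pitches):
    """Return a normalized 12-bin pitch-class histogram, one simple
    example of a jSymbolic-style symbolic feature (illustrative only)."""
    counts = Counter(p % 12 for p in midi_pitches)
    total = len(midi_pitches)
    return [counts.get(pc, 0) / total for pc in range(12)]

# Example: a C major triad sounded twice (MIDI 60 = C, 64 = E, 67 = G)
hist = pitch_class_histogram([60, 64, 67, 60, 64, 67])
print(hist[0], hist[4], hist[7])  # the C, E and G bins each hold one third
```

Feature vectors like this one are what downstream machine learning components consume when characterizing and organizing music.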

jMIR: General-purpose standardized software for music information retrieval research

jMIR is an open-source software suite for use in music information retrieval (MIR) research. It can be used to study music in both audio and symbolic formats, as well as to mine cultural information from the web and manage music collections. jMIR also includes software for extracting features, applying machine learning algorithms and analyzing metadata.
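The extract-features-then-learn workflow that jMIR's feature extractors and its ACE machine learning component support can be illustrated, very loosely, with a toy nearest-centroid classifier. jMIR is a Java suite, so the Python code, feature values and genre labels below are all invented for illustration:

```python
def train_centroids(examples):
    """Compute one mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Toy training data: (feature vector, class label) pairs
training = [([0.9, 0.1], "baroque"), ([0.8, 0.2], "baroque"),
            ([0.1, 0.9], "jazz"),    ([0.2, 0.8], "jazz")]
model = train_centroids(training)
print(classify(model, [0.85, 0.15]))  # -> baroque
```

In practice ACE experiments with a range of far more sophisticated classifiers; the point here is only the shape of the pipeline: features in, trained model out, labels on new music.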

The primary emphasis of jMIR is on providing software for general research in automatic music classification and similarity analysis. The main goals of the project are as follows:

More information on jMIR is available on the jMIR SourceForge page. There are also many publications on the jMIR components available in the publications section of this web site.

Data Mining and Machine Learning

Feature Extraction

Education and Audio Production

Data and Metadata

Legacy

Musical similarity analysis

This research examines the notion of musical similarity from both applied and theoretical perspectives. Software is being developed that can automatically cluster and segment musical recordings based on similarity; this will be integrated with the jMIR project. Experiments will be performed with both supervised and unsupervised learning, and both audio and symbolic data are being considered.

There are four primary tasks involved in this research:

Such research could, for example, be used to classify or identify music based on compositional or performance style; to search for unknown music that a user might like, based on examples of what he or she is known to like; to group music based on when a user might want to listen to it (e.g., while driving or while eating dinner); to perform similarity analysis for copyright purposes; and to perform content-based searches of online databases. The following course paper outlines some of the exploratory research underlying this project:

McKay, C. 2005. Automatic music classification and similarity analysis. Course Paper. Université de Montréal, Canada.
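A minimal sketch of the similarity machinery involved, assuming tracks have already been reduced to feature vectors, is given below. The cosine measure and the greedy threshold clustering here are generic textbook illustrations, not the project's actual algorithms, and the track vectors are invented:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def cluster_by_threshold(vectors, threshold):
    """Greedy single-pass clustering: join a track to the first cluster
    whose seed vector it resembles closely enough, else start a new cluster."""
    clusters = []
    for v in vectors:
        for cluster in clusters:
            if cosine_similarity(cluster[0], v) >= threshold:
                cluster.append(v)
                break
        else:
            clusters.append([v])
    return clusters

# Four toy track vectors: two pairs pointing in clearly different directions
tracks = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
print(len(cluster_by_threshold(tracks, 0.8)))  # -> 2
```

An unsupervised pass like this groups tracks with no labels at all; the supervised experiments mentioned above would instead learn similarity judgements from labelled examples.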

Real-time audio transcription for interactive performance systems

This research involves developing a system for extracting control information on pitch, rhythm, dynamics and timbre from polyphonic audio signals in real time. The project is framed around developing a standardized approach to parameter acquisition that addresses problems relating to the longevity, distribution and robustness of interactive accompaniment systems. The research initially concentrates on electric guitar music, although the hope is to eventually generalize the system to any monophonic or polyphonic instrument. A PowerPoint presentation from the MGSS 2005 Symposium describes some of the key ideas of the project, and the following initial paper outlines the priorities of such a standardized parameter extraction system:

McKay, C. 2005. Approaches to overcoming problems in interactive musical performance systems. Presented at the McGill Graduate Students Society Symposium.

The following course paper describes several transcription techniques that are being considered:

McKay, C., and W. Hatch. 2003. Transcriber: A system for automatically transcribing musical duets. Course Paper. McGill University, Canada.
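One of the simplest building blocks in this transcription literature is autocorrelation-based pitch estimation for monophonic frames. The rough Python sketch below is a generic illustration of that technique under simplified assumptions (a clean synthetic tone, no windowing), not the system described in the papers above:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic frame by
    picking the autocorrelation peak within a plausible lag range."""
    lo = int(sample_rate / fmax)   # shortest lag to consider
    hi = int(sample_rate / fmin)   # longest lag to consider
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, hi + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthetic test tone: a 440 Hz sine sampled at 44.1 kHz
sr = 44100
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(2048)]
print(estimate_pitch(tone, sr))  # close to 440 Hz
```

Real-time polyphonic extraction is far harder than this monophonic case: overlapping partials, percussive onsets and latency constraints all break the simple autocorrelation picture, which is part of what motivates the standardized approach described above.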

Optical recognition of medieval musical manuscripts

Modeling electric, acoustic and classical guitars, including slide guitar

Slide guitar inspired hyper-instrument

Emotion and music

