This page outlines the research projects I am currently involved in. My industry research is not listed here, as it is constrained by non-disclosure agreements.
SIMSSA: Single Interface for Music Score Searching and Analysis
SIMSSA (Single Interface for Music Score Searching and Analysis) is a broad, long-term international research project involving a large number of institutions and millions of dollars of grant funding. The primary goal of the project is to teach computers to recognize and understand the symbols in musical manuscripts archived at libraries and museums around the world. The resulting data will ultimately be assembled on a single web site, making it easy to search and analyze the scores online. SIMSSA is therefore creating an architecture for processing music documents into symbolic representations that can be searched, studied, analyzed and performed anywhere in the world. This involves two main research axes:
- Content: Addresses the process of creating optical music recognition (OMR) systems for transforming digital images of scores into searchable symbolic notation.
- Analysis: Addresses the creation of tools and techniques for large-scale search and analysis of the scores after they have been converted into symbolic representations.
My own primary role is in the Analysis Axis, where I focus on applying music information retrieval (MIR) techniques to the symbolic content in order to arrive at meaningful statistics that can be used to characterize, classify, organize and search music. In particular, this involves expanding, adapting and applying the jSymbolic and ACE components of the jMIR research software suite in order to integrate their feature extraction and machine learning functionality into the SIMSSA framework.
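To give a concrete (if greatly simplified) sense of what a high-level symbolic feature is, the sketch below computes a normalized pitch-class histogram from raw MIDI note numbers. This is a classic symbolic feature, but the code is an illustrative toy, not jSymbolic's actual implementation:

```python
from collections import Counter

def pitch_class_histogram(midi_pitches):
    """Compute a normalized 12-bin pitch-class histogram.

    midi_pitches: iterable of MIDI note numbers (0-127).
    Returns a list of 12 floats summing to 1.0 (all zeros if empty).
    """
    counts = Counter(p % 12 for p in midi_pitches)
    total = sum(counts.values())
    if total == 0:
        return [0.0] * 12
    return [counts.get(pc, 0) / total for pc in range(12)]

# Example: a C-major arpeggio (C4, E4, G4, C5)
hist = pitch_class_histogram([60, 64, 67, 72])
# hist[0] (pitch class C) is 0.5; hist[4] (E) and hist[7] (G) are 0.25
```

In practice, jSymbolic extracts many such features per piece, and the resulting feature vectors are what feed into downstream search and machine learning.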
I am also involved in designing and supervising the initial implementation of the SIMSSA DB, a repository and discovery tool for symbolic music files that puts a particular emphasis on:
- Sophisticated modeling and interconnecting of music metadata.
- Provenance tracking.
- Searching by metadata as well as musical content (based on auto-extracted jSymbolic features).
- Archiving data and metadata associated with specific research projects.
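Content-based search of the kind described in the third point can be sketched, under heavy simplification, as similarity ranking over extracted feature vectors. The piece names and three-dimensional vectors below are invented for illustration; the actual SIMSSA DB works with jSymbolic's much larger feature sets:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_by_similarity(query, collection):
    """Rank (name, vector) pairs by descending similarity to the query."""
    scored = [(name, cosine_similarity(query, vec)) for name, vec in collection]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical feature vectors for three pieces (names are made up)
collection = [
    ("piece_a", [0.9, 0.1, 0.3]),
    ("piece_b", [0.1, 0.8, 0.7]),
    ("piece_c", [0.85, 0.15, 0.25]),
]
ranking = rank_by_similarity([0.9, 0.1, 0.3], collection)
# The query matches piece_a exactly, so it ranks first
```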
My work on the SIMSSA project is done in loose affiliation with the MIRAI project, described below.
MIRAI: Music Information, Research and Infrastructure
MIRAI (Music Information, Research and Infrastructure) is another large-scale, multi-institutional project, and is loosely linked to the SIMSSA project described above. The main goal of MIRAI is to use computational tools to study and answer questions about how humans interact with music as information. This can perhaps be described as “data-intensive musicology”: a comprehensive research program for music information retrieval and computer-aided musicology addressing large amounts of music information. This work encompasses musical information in a wide variety of digital modalities, including scores, audio, text and metadata.
My own role in the MIRAI project to date has focused on carrying out musicological research using the jMIR tools, particularly jSymbolic, in collaboration with expert musicologists. So far, this has primarily involved applying feature extraction, machine learning and statistical analysis techniques to Renaissance music, for the purpose of quantitatively delineating elements of music style associated with composers, genres, regions and more. The Publications section of this web site provides links to the products of this work to date.
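As a minimal sketch of the underlying idea (the actual work relies on jSymbolic features and ACE's much more sophisticated classifiers), feature-based composer attribution can be illustrated with a toy nearest-centroid classifier. The composer labels and two-dimensional feature values below are entirely invented:

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest (Euclidean) to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical per-piece feature vectors grouped by (invented) composer
training = {
    "composer_x": [[0.2, 0.7], [0.25, 0.65], [0.3, 0.6]],
    "composer_y": [[0.8, 0.2], [0.75, 0.3], [0.7, 0.25]],
}
centroids = {label: centroid(vecs) for label, vecs in training.items()}
prediction = nearest_centroid([0.22, 0.68], centroids)
# The sample lies near composer_x's training pieces
```

Real stylometric work of this kind also requires careful statistical validation, which is part of what the collaboration with expert musicologists provides.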
Future steps of my research as part of the MIRAI project will turn to multimodal analysis of music, including audio, musical texts and metadata, in addition to the digitized symbolic scores I am already working with. This will involve the integrated use of the various jMIR tools, described below, and will build on my earlier work as part of the Networked Environment for Music Analysis (NEMA) project and during my doctoral and post-doctoral studies.
jMIR: General-Purpose Standardized Software for Music Information Retrieval Research
jMIR is an open-source software suite for use in music information retrieval (MIR) research. It can be used to study music in both audio and symbolic formats, as well as to mine cultural information from the web and manage music collections. jMIR also includes software for extracting features, applying machine learning algorithms and analyzing metadata.
The primary emphasis of jMIR is on providing software for general research in automatic music classification and similarity analysis. The main goals of the project are as follows:
- Make sophisticated pattern recognition technologies accessible to music researchers with both technical and non-technical backgrounds.
- Eliminate duplication of research effort.
- Increase cooperation and communication between research groups.
- Facilitate iterative development and sharing of new MIR technologies.
- Facilitate objective comparisons of algorithms.
- Facilitate research combining high-level, low-level and cultural musical features (i.e. symbolic, audio and web-mined features).
More information on jMIR is available on the dedicated jMIR homepage, and there are also many publications on the jMIR components available in the publications section of this web site. The bullet points below outline the specific components of jMIR:
Feature Extraction
- jSymbolic: Software for extracting high-level features from symbolic music encodings.
- jAudio: Software for extracting low- and high-level features from audio recordings.
- jWebMiner: Software for extracting cultural features from the internet.
- jLyrics: Software for mining lyrics from the web and extracting textual features from them.
Data Mining and Machine Learning
- ACE: Pattern recognition software that utilizes meta-learning. Evaluates, trains and uses a variety of classifiers, classifier ensembles and dimensionality reduction algorithms based on the needs of each particular research problem.
- ACE XML: Standardized file formats for representing information related to automatic music classification, including feature values, feature metadata, instance labels and class ontologies.
- jMIRUtilities: Tools for performing miscellaneous tasks, such as labeling instances, extracting data from Apple iTunes XML files, merging features extracted from different sources, etc.
Education and Audio Production
- jProductionCritic: Educational software for automatically finding technical recording and production errors in audio files.
Data and Metadata
- jSongMiner: Software for identifying unknown audio and extracting metadata about songs, artists and albums from various web services and embedded sources.
- jMusicMetaManager: Software for profiling music collections and detecting metadata errors and redundancies.
- Codaich, Bodhidharma MIDI and SLAC: Labeled datasets for training, testing and evaluating MIR systems.
- Bodhidharma: MIREX 2005-winning software for classifying MIDI recordings by genre. The ancestor of ACE and jSymbolic.
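The meta-learning idea behind ACE (choosing among candidate classifiers based on measured performance) can be sketched in miniature as follows. This is a toy stand-in, not ACE's actual algorithm, and the data and candidate classifiers are invented for illustration:

```python
def accuracy(classifier, data):
    """Fraction of (features, label) pairs the classifier labels correctly."""
    correct = sum(1 for features, label in data if classifier(features) == label)
    return correct / len(data)

def select_best(builders, train, validation):
    """Train each candidate classifier on the training set and return the
    name of the most accurate one on the held-out validation set."""
    scores = {name: build(train) and accuracy(build(train), validation)
              for name, build in builders.items()}
    return max(scores, key=scores.get)

# Toy one-dimensional data: label is "high" when the feature exceeds 0.5
train = [([0.1], "low"), ([0.2], "low"), ([0.8], "high"), ([0.9], "high")]
validation = [([0.3], "low"), ([0.7], "high")]

# Two trivial candidate "classifiers": a constant baseline and a threshold rule
builders = {
    "majority": lambda data: (lambda f: "low"),
    "threshold": lambda data: (lambda f: "high" if f[0] > 0.5 else "low"),
}
best = select_best(builders, train, validation)
# The threshold rule outperforms the constant baseline on the validation set
```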
MML16: Mapping the Musical Landscape of the Sixteenth Century
MML16 (Mapping the Musical Landscape of the Sixteenth Century) is a major new international project that aims to build digital inventories of all sources of polyphonic music printed or copied during the sixteenth century, with information on dates, provenance and type of source. This work is being carried out in collaboration with RISM (Répertoire International des Sources Musicales).