Trevor LeFebvre

MUMT-306 Final Project

Fall Semester 2020

SEQUENCER WITH EFFECTS AND RANDOMIZED EFFECT-PARAMETER MODULATION

OBJECTIVES:

The goal of my project was to create a 6-channel, 16-beat sequencer with a selection of effects for each channel, along with randomization capabilities for both effect-parameter changes (automation) and sound triggering. I wanted to put an emphasis on the program being user-friendly, so I used presentation mode to give the patch an organized structure for more intuitive interaction. Since my design was open-ended and new goals kept springing up as I built the structure, I was not able to accomplish everything I had planned, so I will also include my thoughts on how I would go about implementing those remaining ideas.

SEQUENCER:

The first order of business with a sequencer is how we go about sequencing sounds (this feature lives in the light blue component in presentation view). I implemented this in two different ways: a discrete on/off sequencer that uses trigger objects (a channel is either triggered at a given step in the sequence or not), and a probability-based sequencer that uses slider objects (a channel is more likely to be triggered when its slider is higher). The architecture is essentially the same for both types of sequencer; the only difference is the kind of gating used (triggers attached to a basic gate vs. sliders attached to randomized-gate subpatches). The sequencer interface gives access to tempo control (in bpm), an on/off trigger for the metro object that drives the steps of the sequence, and a trigger for selecting which type of sequencer to use. The architecture itself is quite simple, using counter, select, and gate objects to control whether a bang is sent to a channel (mainly contained in the lower left portion of the patch when not in presentation view).
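
As a rough illustration of this control flow, here is a minimal Python sketch (hypothetical, not the actual Max patch; all names are mine, and one step per beat is an assumption): a counter wraps through the 16 steps, and at each step a channel fires either when its on/off cell is set (trigger mode) or when a random draw falls below its slider value (probability mode).

    import random
    import time

    STEPS, CHANNELS = 16, 6
    bpm = 120
    mode = "trigger"  # or "probability"
    triggers = [[False] * STEPS for _ in range(CHANNELS)]  # on/off grid
    sliders = [[0.0] * STEPS for _ in range(CHANNELS)]     # 0.0-1.0 per step
    triggers[0] = [True, False, False, False] * 4          # e.g. four-on-the-floor kick

    def fire(channel):
        print(f"bang -> channel {channel}")  # stands in for triggering the sample

    step = 0
    while True:
        for ch in range(CHANNELS):
            if mode == "trigger" and triggers[ch][step]:
                fire(ch)
            elif mode == "probability" and random.random() < sliders[ch][step]:
                fire(ch)
        step = (step + 1) % STEPS  # the counter object's wrap-around
        time.sleep(60.0 / bpm)     # the metro interval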

EFFECT CHAINS:

For each of the 6 channels, there is an effect chain in this order: signal -> low/high-pass filter -> chorus effect -> simple delay effect -> reverb (based on Schroeder reverb). These effects are represented by the orange boxes in presentation view.

LOW/HIGH-PASS FILTER:

The simplest implementation of all the effects I used, this filter uses Max/MSP's biquad~ object to filter out high or low frequencies; the filtergraph~ object connected to it controls the shape of the filter. The user can click the message boxes to switch between a high-pass and a low-pass shape, and can click and drag the filter curve on the filtergraph~ object to adjust the cutoff frequency and Q. This is admittedly not very technically involved -- we're letting Max do the work for us here. I did not implement randomized parameter changing for this effect.
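
For reference, biquad~ computes a standard two-pole, two-zero difference equation; written in one common DSP convention, y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]. The sketch below reproduces that recurrence in Python, with low-pass coefficients derived from the widely used RBJ cookbook recipe; in the patch itself the coefficients come from wherever the user drags the filtergraph~ curve.

    import math

    def lowpass_coeffs(fc, q, sr=44100.0):
        """Low-pass biquad coefficients from the RBJ audio-EQ cookbook."""
        w = 2.0 * math.pi * fc / sr
        alpha = math.sin(w) / (2.0 * q)
        a0 = 1.0 + alpha
        b0 = (1.0 - math.cos(w)) / 2.0
        return (b0 / a0, 2.0 * b0 / a0, b0 / a0,
                -2.0 * math.cos(w) / a0, (1.0 - alpha) / a0)

    def biquad(x, b0, b1, b2, a1, a2):
        """Direct-form I biquad: the recurrence above, sample by sample."""
        y = [0.0] * len(x)
        for n in range(len(x)):
            y[n] = (b0 * x[n]
                    + (b1 * x[n - 1] if n >= 1 else 0.0)
                    + (b2 * x[n - 2] if n >= 2 else 0.0)
                    - (a1 * y[n - 1] if n >= 1 else 0.0)
                    - (a2 * y[n - 2] if n >= 2 else 0.0))
        return y

For example, biquad(signal, *lowpass_coeffs(1000.0, 0.707)) low-passes a list of samples at 1 kHz.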

CHORUS EFFECT:

For my chorus effect implementation, I followed the guidelines in the official Max documentation, using tapin~ and tapout~ objects with a rand~ object to randomly modulate the pitch of the input signal. I created a subpatch for this with four inputs:

Input 1: contains the audio signal that we want to process.

Input 2: controls the delay time for the delay line as well as the range of variation in the delay time. We could instead use two separate inputs to control the delay time and the range of variation individually and achieve similar results, but Cycling 74's documentation recommends the combined approach. (Units are in milliseconds.)

Input 3: controls the modulation rate for the randomized values. (Units are in Hz).

Input 4: controls the depth of randomization -- this interacts with the second input to control the variation in delay time. (Thus, the units are also in milliseconds).

It’s difficult to describe exactly how each of these parameters shapes the processed signal in a practical sense (using non-academic language), so I encourage the reader to experiment with the parameters to get a feel for how this effect can be used. I recommend using higher values for the chorus rate and depth (inputs 3 and 4) to achieve a more dissonant/garbled sound, and very small values (or turning the effect off entirely) to retain the sample’s tonality. The second input also widens the extent of randomization, so increasing it makes the output more dissonant as well, but only up to a point -- the effect becomes less noticeable once this input exceeds roughly 200 ms.
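
A minimal offline sketch of the idea, assuming a chorus built from a delay line whose tap position wanders randomly (the parameter names and the 50/50 dry/wet mix are my own choices, and the patch couples inputs 2 and 4 somewhat differently):

    import numpy as np

    def chorus(x, sr=44100, delay_ms=30.0, rate_hz=2.0, depth_ms=5.0, seed=0):
        """Variable delay tap that drifts randomly, in the spirit of the
        tapin~/tapout~/rand~ design from the Max documentation."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        rng = np.random.default_rng(seed)
        # rand~: a new random target every 1/rate_hz seconds, linearly interpolated
        seg = max(1, int(sr / rate_hz))
        targets = rng.uniform(-1.0, 1.0, n // seg + 2)
        mod = np.interp(np.arange(n) / seg, np.arange(len(targets)), targets)
        delay = (delay_ms + depth_ms * mod) * sr / 1000.0  # tap position in samples
        y = np.zeros(n)
        for i in range(n):
            j = i - delay[i]
            j0 = int(np.floor(j))
            if j0 >= 0:
                frac = j - j0  # fractional-sample read, like tapout~'s interpolation
                y[i] = (1 - frac) * x[j0] + frac * x[min(j0 + 1, n - 1)]
        return 0.5 * (x + y)  # layer the dry and modulated copies

The constantly moving tap is what produces the pitch modulation: reading the delay line faster or slower than it is written transposes the delayed copy up or down.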

SIMPLE DELAY:

The implementation for this effect is quite simple -- we send the input signal into a delay line using tapin~ and tapout~, with one input controlling the length of the delay time and another controlling the coefficient applied in the feedback loop for repeated delayed signals. I also pass a dry signal to the output; otherwise we would hear only the delayed copies without the original input. Here are the three inputs:

Input 1: contains the audio signal that we want to process.

Input 2: controls the delay time (in milliseconds). Higher values mean a longer gap between delayed signals.

Input 3: controls the feedback coefficient, from a minimum of 0 to a maximum of 1. At the maximum, the delayed signals never decay and the output grows progressively louder until the user lowers the value. At the minimum, no delayed signals are sent out.
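
Here is a sketch of this routing, assuming (as the behavior at 0 and 1 suggests) that a single feedback gain scales the tap before it reaches both the output and the loop; the names are illustrative:

    import numpy as np

    def simple_delay(x, sr=44100, delay_ms=250.0, feedback=0.5):
        """Feedback delay line plus the dry signal at the output."""
        d = max(1, int(delay_ms * sr / 1000.0))
        buf = np.zeros(len(x))        # what circulates in the delay line
        y = np.array(x, dtype=float)  # start from the dry signal
        for i in range(len(x)):
            tap = buf[i - d] if i >= d else 0.0
            wet = feedback * tap      # one gain feeds both the output and the loop
            buf[i] = x[i] + wet       # write input plus feedback into the line
            y[i] += wet               # add the delayed, scaled tap to the dry out
        return y

With feedback at 0 the wet term vanishes entirely, and at 1 each trip around the loop is unattenuated, matching the behavior described above.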

REVERB:

This reverb is based on Schroeder’s reverberator (a reverb effect designed in the early 1960s); the documentation I followed is listed in the references section. Two different types of filters are used in this effect: comb filters and allpass filters.

Comb filter: very similar to the simple delay, except that only the delayed signals are sent to the output (not the original signal). It has the same inputs as the simple delay, and they fundamentally do the same things. The name comes from the filter’s frequency response rather than its time-domain output: the evenly spaced peaks resemble the teeth of a comb. Four comb filters are used in parallel, and they are meant to represent the sound of waves reflecting off the walls of a room.
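
A sketch of one such comb filter, reusing the structure of the simple delay but outputting only the wet path (the default delay value is just a placeholder in the range Schroeder used):

    def comb(x, sr=44100, delay_ms=29.7, feedback=0.7):
        """Feedback comb filter: the simple delay without the dry path."""
        d = max(1, int(delay_ms * sr / 1000.0))
        buf = [0.0] * len(x)
        y = [0.0] * len(x)
        for i in range(len(x)):
            tap = buf[i - d] if i >= d else 0.0
            buf[i] = x[i] + feedback * tap  # input plus feedback circulates
            y[i] = tap                      # only the delayed signal reaches the output
        return y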

Allpass filter: similar to the comb filter except for one main difference: the addition of a feedforward component. The feedforward component bypasses the delay line, gets multiplied by the feedback coefficient, and is sent directly to the output. The feedback loop behaves much as in the comb filter, except that the signal is multiplied by -1. This sign inversion is what flattens the filter’s magnitude response: every frequency passes at equal gain (hence “allpass”) while still being smeared in time. The same coefficient scales both paths, apart from that inversion on the feedback side. Allpass filters are meant to imitate diffusion (the spreading of sound energy within an environment). As with the comb filter, the inputs for this effect are the delay time and the feedback coefficient.
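
Following that description, here is a sketch of the allpass (feedforward scaled by the coefficient, feedback negated) and of the classic Schroeder topology, which sums the parallel combs (reusing comb() from the sketch above) and feeds the result through allpasses in series. The delay values are placeholders near Schroeder's published settings, and the number of series allpasses is my assumption:

    def allpass(x, sr=44100, delay_ms=5.0, g=0.7):
        """Schroeder allpass: H(z) = (g + z^-M) / (1 + g z^-M)."""
        d = max(1, int(delay_ms * sr / 1000.0))
        v = [0.0] * len(x)           # contents of the delay line
        y = [0.0] * len(x)
        for i in range(len(x)):
            tap = v[i - d] if i >= d else 0.0
            v[i] = x[i] - g * tap    # feedback path, multiplied by -1
            y[i] = g * v[i] + tap    # feedforward path plus the delayed signal
        return y

    def schroeder(x):
        combs = [comb(x, delay_ms=ms) for ms in (29.7, 37.1, 41.1, 43.7)]
        wet = [sum(vals) for vals in zip(*combs)]                 # parallel combs, summed
        return allpass(allpass(wet, delay_ms=5.0), delay_ms=1.7)  # series diffusion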

The documentation that I used to develop this reverb effect suggests a parameter preset representing Schroeder’s original design, so I included a button that sets the parameters to that configuration.

AUTOMATION:

I wanted functionality that would let the timbre of the sequence change automatically over time, so I included an automation feature for the chorus and simple delay effects. My automator subpatch has four inputs:

Input 1: bang input -- receiving a bang triggers the automation.

Input 2: input for the range of values, i.e., the range within which the automated parameter is allowed to land. I did not expose this input to the user; it is set individually for each parameter.

Input 3: Input for automation time range. This works in conjunction with input 4.

Input 4: input for the automation time minimum. Input 4 sets the minimum duration of the envelope used to change the parameter, and input 3 determines how far the duration can extend past that minimum. Envelope durations cannot be set directly because I decided to integrate automation with a randomized design, putting the emphasis on the “generative” aspect of my project.

This feature is designed so that input 1 is connected to a slider-plus-probability subpatch (used the same way as in the probability-based sequencer). Bangs are sent to the automation objects when the relevant channel is triggered, and the probability object sits in between to control the likelihood that a randomized automation occurs in response. So although automations cannot be triggered directly, the user can set how probable they are.
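
Putting the four inputs and the probability gate together, a hedged sketch (set_param is a hypothetical stand-in for the line~-style ramp that actually drives the effect parameter, and a value range starting at zero is my assumption):

    import random

    def automate(value_range, time_range_ms, time_min_ms, probability, set_param):
        """Called on each bang from the channel (input 1). With the given
        probability, ramp the parameter to a random destination over a random
        duration in [time_min_ms, time_min_ms + time_range_ms]."""
        if random.random() >= probability:
            return                                 # the probability gate blocks this bang
        target = random.uniform(0.0, value_range)  # input 2: where the parameter can land
        duration = time_min_ms + random.uniform(0.0, time_range_ms)  # inputs 3 and 4
        set_param(target, duration)                # e.g. a "target, ramp-time" pair

    # Example: a 30% chance of ramping somewhere in [0, 200] over 0.5 to 3.5 seconds.
    automate(200.0, 3000.0, 500.0, 0.3,
             lambda v, t: print(f"ramp to {v:.1f} over {t:.0f} ms"))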

PERSONAL THOUGHTS:

I believe that while my project is fairly rigid/limited in its user interface, it is diverse in its potential applications.

One potential approach is to use it as a straightforward drum machine. We could set each channel to a drum sound (e.g. kick, snare, open and closed hi-hats, clap, and percussion). The simple sequencer can produce a metronome-like drum pattern for the user to practice an instrument to, while the probability-based sequencer with subtle low-pass filtering and reverb makes for a more dynamic drum accompaniment for the user/performer.

Another approach is what I like to call the wind-chime generator. If we use the probability-based sequencer, set each channel to a tonal sample (with everything in the same scale), and set the tempo very slow, we can evoke a soundscape reminiscent of a wind chime. Simple delay and reverb work particularly well for this purpose.

Lastly, my favorite approach is that of the industrial soundscape generator. The goal of this is to set each channel to incongruent/clashing/ugly sounds, make the automation parameters very diverse, and then tweak each parameter until it sounds interesting. This is my favorite approach because with the other approaches I mentioned, one can come in with fairly accurate expectations of what they’re going to get out of the program. With this approach, however, many more decisions are left to the program itself, resulting in surprising and inspiring soundscapes. The reason I describe it as industrial is that the automation for delay and chorus can typically sound very harsh and mechanical (not ideal for a drum machine or a wind-chime generator).

THINGS I COULD HAVE DONE:

With my project being as open-ended as it was, there were many ideas that I either never got around to implementing or wasn’t able to figure out how to implement, so I will detail a few of them here. Most of these ideas relate to improving the user interface or adding more functionality.

I could have tried to implement a chorus effect closer to that of guitar pedals -- a vibrato-processed signal layered with a dry signal. This probably would have sounded less garbled/sporadic, but I might have run into similar issues trying to change the pitch in a continuous way by discrete means.

One way to implement amplitude randomization would be to build a structure similar to the probability-based sequencer (a grid of sliders), except that each beat of each channel gets two sliders: one representing the maximum possible amplitude for the channel at that beat and the other the minimum. While this would be fairly ugly, it would give the user much more expressive control over the sequencer; a sketch of the idea follows.
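
A minimal sketch of that idea (the grid sizes and the uniform draw are my assumptions):

    import random

    STEPS, CHANNELS = 16, 6
    amp_min = [[0.5] * STEPS for _ in range(CHANNELS)]  # lower slider per beat
    amp_max = [[1.0] * STEPS for _ in range(CHANNELS)]  # upper slider per beat

    def step_amplitude(ch, step):
        """Draw this trigger's playback gain between the two sliders for that beat."""
        return random.uniform(amp_min[ch][step], amp_max[ch][step])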

I considered making a button and slider for setting the simple delay to subdivisions of the tempo (quarter, eighth, triplet eighth, and sixteenth notes), but I opted against this as extraneous functionality that was not important to my project’s goals.
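
For reference, the conversion is one line of arithmetic (the subdivision encoding is my choice):

    def synced_delay_ms(bpm, subdivision):
        """Delay time locked to tempo: subdivision is the fraction of a quarter
        note (1.0 = quarter, 0.5 = eighth, 1/3 = triplet eighth, 0.25 = sixteenth)."""
        return 60000.0 / bpm * subdivision

    print(synced_delay_ms(120, 0.5))  # an eighth-note delay at 120 bpm: 250.0 ms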

It would add much more expressive capability if we were able to control/randomize the starting time and length of the samples for each channel.

I considered allowing the pitch of the samples to be changed but I don’t believe fully randomized pitch modulation would add much to the true functionality of this project. Furthermore, a more structured means of pitch modulation would be a whole other project in itself.

The automation design doesn’t let the user trigger automation directly; instead it forces them to rely on a probabilistic model. Adding a “point and shoot” feature as an alternative to randomized automation would be a good idea: a button to trigger the automation, a slider to set the destination value, and a slider to set how long the automation takes.

REFERENCES:

Filter:

https://docs.cycling74.com/max5/refpages/msp-ref/biquad~.html

https://docs.cycling74.com/max5/refpages/msp-ref/filtergraph~.html

Chorus:

https://docs.cycling74.com/max5/tutorials/msp-tut/mspchapter30.html

Reverb:

https://ccrma.stanford.edu/~jos/pasp/Schroeder_Reverberators.html