Live processing of audio/video via physical/digital feedback networks
Video processing done via projector → camera → software (Pd/GEM) loop
Audio processing done via SuperCollider → speaker → microphone → Pd loop
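The acoustic part of these loops (speaker → microphone, projector → camera) can't be reproduced in code, but the basic behaviour of a feedback network can be sketched digitally. Below is a minimal Python simulation (not the actual Pd/SuperCollider patches) of a delayed feedback loop, with hypothetical delay and gain parameters standing in for the physical path:

```python
# Minimal sketch of a feedback delay loop, analogous to the
# speaker -> microphone -> software path. DELAY and GAIN are
# hypothetical parameters, not values from the actual patches.

DELAY = 4      # loop delay in samples (stands in for acoustic travel time)
GAIN = 0.5     # loop gain; keeping |GAIN| < 1 keeps the feedback stable

def feedback_loop(input_samples, delay=DELAY, gain=GAIN):
    """Mix each input sample with a delayed, attenuated copy of the output."""
    buffer = [0.0] * delay   # circular delay line
    out = []
    for i, x in enumerate(input_samples):
        y = x + gain * buffer[i % delay]  # input + fed-back signal
        buffer[i % delay] = y             # write the output back into the loop
        out.append(y)
    return out

# A single impulse recirculates and decays geometrically:
print(feedback_loop([1.0] + [0.0] * 11))
```

With gain below 1 the recirculating impulse dies away; pushing the gain toward (or past) 1 is what makes physical feedback systems like these bloom into self-sustaining sound and image.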
Pure Data code on GitHub (CC BY-NC-SA)
And finally, a link to Matt Pearson, who developed the processing framework used in this project (and from whom I heard about it).
Studies in harmonic motion. Processing + SuperCollider
These animations are similar to those by Memo Akten (although my code is different).
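The core idea behind these harmonic-motion studies can be sketched outside of Processing: a row of points, each oscillating at the same base frequency but offset in phase, produces travelling-wave patterns across the row. Here is a minimal Python sketch of that math (the point count, frequency, and phase step are illustrative assumptions, not the values from my Processing code):

```python
import math

# Sketch of the harmonic-motion idea: each point oscillates at the same
# base frequency but with a constant phase offset from its neighbour.
# All parameters below are hypothetical, chosen for illustration.

NUM_POINTS = 8
BASE_FREQ = 1.0            # oscillations per second
PHASE_STEP = math.pi / 8   # phase offset between neighbouring points

def positions(t):
    """Vertical displacement of each point at time t (amplitude 1)."""
    return [math.sin(2 * math.pi * BASE_FREQ * t + i * PHASE_STEP)
            for i in range(NUM_POINTS)]

# At t = 0 the points trace out one arch of a sine wave:
print([round(y, 3) for y in positions(0.0)])
```

Animating `t` makes the arch appear to travel along the row; in the actual pieces, the same phase values also drive SuperCollider, so the sound and the image share one underlying motion.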
The following piece applies a similar philosophy:
First experiment with the Web Audio API
(note: at the time of writing, this only works in Google Chrome)
Second experiment in projection mapping (code written in Processing and SuperCollider):
First experiment in projection mapping (code written in Processing):