Essentially, I converted the varying luminance levels of projected video (via LDR sensors modulating a fixed 1-5V supply) into control voltage for my modular synth. The synth's output was then visualized via a Critter & Guitari videoscope, forming a neato little audiovisual feedback loop. There are more details, including technical notes and diagrams, here: http://www.asoundeconomy.com/post/14306 ... xploration
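If it helps to see the idea, here's a rough sketch of the brightness-to-CV mapping in Python. The function name, the normalized brightness input, and the linear response are my own illustrative assumptions; a real LDR in a voltage divider responds nonlinearly, so treat this as the concept rather than the actual circuit:

```python
# Hypothetical sketch: an LDR divider scales a fixed 5V rail, and the
# result is clamped to the 1-5V control voltage range described above.

def ldr_to_cv(brightness: float, v_min: float = 1.0, v_max: float = 5.0) -> float:
    """Map normalized screen brightness (0.0 = dark, 1.0 = full white)
    to a control voltage between v_min and v_max (idealized linear model)."""
    brightness = max(0.0, min(1.0, brightness))  # clamp out sensor noise
    return v_min + brightness * (v_max - v_min)

# Example: a mid-grey patch of projected video would sit around 3V of CV.
for b in (0.0, 0.5, 1.0):
    print(f"brightness {b:.1f} -> {ldr_to_cv(b):.2f} V")
```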
What I really like about it is that, depending on the original video, how I patch the synth, and how I mix the visuals in real time, it will be different every single time. I doubt I could replicate a performance even if I meticulously wrote down every detail.
Maybe you'll like it?



