In a piece like this, finding the right balance of randomly generated parameters can be a bit tricky. At least for me. If the music tends to remain too similar for long periods of time, everything becomes blah. On the other hand, too much change too fast destroys subtlety and nuance. I personally enjoy a blend of semi-frequent, sonically interesting interruptions that jump out of a monotonous, trance-inducing track.
Did I accomplish this with this track? Somewhat. I’m also trying to force myself to work faster in Csound. For practice, I’m rushing things out. I can always revisit the piece later.
Some time ago, I was inspired by the article How to Use Transient Designers in Your Mixes at audio tuts+. I ended up creating a crude Max/MSP patch that loosely approximated what I took away from the article. Over the weekend, I reimplemented that patch in Csound, creating a drum transient processor.
And once again, I use a sample from the BT sample pack in the OLPC sample library. Download it here.
So how does it work? Audio from a sample is run through a user-defined opcode transient detector called Transient. Transient works by running the audio through the built-in opcode rms. The resulting RMS value is divided by the previous RMS value. If the ratio is greater than or equal to a user-set threshold, the output is 1; otherwise it is 0.
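To make the detector logic concrete, here is a rough Python sketch of the idea. This is my illustration only, not the actual Csound UDO: the function names, block size, and default threshold are invented.

```python
import math

def block_rms(samples):
    """Root-mean-square of one block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_ratio_transients(blocks, threshold=2.0):
    """Emit 1 when the current block's RMS divided by the previous
    block's RMS meets the threshold, else 0 (first block emits 0)."""
    out = []
    prev = None
    for block in blocks:
        cur = block_rms(block)
        if prev is not None and prev > 0 and cur / prev >= threshold:
            out.append(1)
        else:
            out.append(0)
        prev = cur
    return out

# A quiet stretch followed by a sudden loud block trips the detector:
blocks = [[0.1] * 4, [0.1] * 4, [0.9] * 4, [0.9] * 4]
print(rms_ratio_transients(blocks, threshold=2.0))  # [0, 0, 1, 0]
```

Note that the ratio test fires only on the jump itself; once the loud material sustains, the ratio falls back near 1 and the output returns to 0, which is exactly what you want for catching drum hits.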
The output of Transient is then used to trigger (or not trigger) a built-in envelope generator. Basically, a trigger resets the envelope to 1. The envelope value decays over time until it reaches 0; the decay time is set by the user. This envelope is multiplied by a copy of the original sample audio signal and an amplitude value set with pfield 6. The original audio and the transient audio are mixed together, based on the value specified at pfield 5, and then sent to the output.
One other feature built into this example: users can set a value that determines the minimum amount of time that must pass before the envelope can be triggered again.
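Put together, the envelope, mix, and minimum re-trigger time work roughly like the Python sketch below. Again, this is a hedged illustration of the signal flow described above; the parameter names, the linear decay shape, and the tiny sample rate are my own inventions, not the actual pfields of the Csound instrument.

```python
def transient_process(audio, triggers, sr=10, decay_time=0.5,
                      min_delta=0.3, mix=0.5, transient_amp=1.0):
    """Per-sample: a trigger resets the envelope to 1, but only if at
    least min_delta seconds have passed since the last accepted trigger.
    The envelope decays linearly to 0 over decay_time seconds. Output
    crossfades the dry audio with the envelope-shaped copy via mix."""
    env = 0.0
    step = 1.0 / (decay_time * sr)       # linear decay per sample
    last_trigger = -min_delta * sr       # allow an immediate first trigger
    out = []
    for n, (x, trig) in enumerate(zip(audio, triggers)):
        if trig and (n - last_trigger) >= min_delta * sr:
            env = 1.0
            last_trigger = n
        wet = x * env * transient_amp    # transient-only copy
        out.append((1.0 - mix) * x + mix * wet)
        env = max(0.0, env - step)
    return out
```

With mix set to 1.0 you hear only the enveloped transients (setting 2 in the list below the audio example); with a negative transient_amp, the wet copy subtracts from the dry signal, which is the phase-cancellation trick used in settings 7 and 8.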
As for the audio example: the sample’s original tempo is 110 BPM, though I play it at 165 BPM. It is played 8 times, with various transient settings. These settings are as follows:
1. Original audio unmodified.
2. Transients only.
3. Original audio mixed with transients.
4. Original audio mixed with transients, with emphasis on transients.
5. Transients only, with the RMS threshold and minimum delta time settings modified so that fewer transients are detected.
6. Same as 5, with a little of the original audio mixed in.
7. Original audio mixed with transients, except transients have been given a negative amplitude value, so that it is now 180 degrees out of phase, which cancels out the transient.
8. Same as 7, though with subtle changes to the numbers.
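For anyone curious about the re-pitching mentioned above, going from the sample’s native 110 BPM to 165 BPM works out to a tidy playback-speed ratio:

```python
# Playback-speed ratio needed to play the 110 BPM loop at 165 BPM.
original_bpm = 110
target_bpm = 165
ratio = target_bpm / original_bpm
print(ratio)  # 1.5
```

That 1.5 factor also raises the pitch by a perfect fifth, which is part of the character of the example.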
Because I’m lazy and didn’t feel like creating an audio montage in a wave editor, I created a Csound utility that plays multiple audio files in a row, much like a playlist in iTunes. It automates much of the grunt work, so one doesn’t have to type in and keep track of a bunch of start and stop times.
First, about the music. The drum loops aren’t mine; they belong to BT. These samples are released under a Creative Commons license and are part of the OLPC Sample Library. You’ll need to download the BT sample pack to run today’s file without modification.
Let’s take a look at the relevant portion of the score:
i $Sample 0 1 "120 Scratchim.wav"
i $Sample 0 1 "120 KarmaTonic.wav"
i $Sample 0 1 "120 Fast Satellite.wav"
i $Sample 0 1 "120 KarmaTonic.wav"
i $Sample 0 1 "120 Fast Satellite.wav"
i $Sample 0 1 "120 Scratchim.wav"
i $Sample 0 1 "120 Drive the Bouncer.wav"
i $Sample 0 1 "120 Drive the Bouncer.wav"
Notice that the start time for each loop is set to zero. Yet when you listen to the example, the loops are played serially rather than mixed together. And even though the durations are set to 1 second, each sample runs its full course. This is because the instruments use a bit of logic to automatically arrange the samples so they play one after the other, in the order in which they are listed in the score, with their full durations.
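The bookkeeping behind that behavior can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration of the idea, not the actual Csound utility; the function name and the file durations are invented (the real utility reads each file’s true length):

```python
def sequence(events):
    """events: list of (filename, duration_seconds) in score order.
    Returns (start_time, filename, duration) tuples, stacking the
    files end-to-end so they play one after the other."""
    schedule = []
    clock = 0.0
    for name, dur in events:
        schedule.append((clock, name, dur))
        clock += dur
    return schedule

# Invented durations, just to show the cumulative scheduling:
playlist = [("120 Scratchim.wav", 4.0),
            ("120 KarmaTonic.wav", 4.0),
            ("120 Fast Satellite.wav", 2.0)]
for start, name, dur in sequence(playlist):
    print(f"{start:5.1f}s  {name}  ({dur}s)")
```

Each event’s real start time is simply the running total of the durations before it, which is exactly the arithmetic you’d otherwise be doing by hand in the score.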
TRON is my favorite movie of all time. I’ve been toying with recreating some of the classic sounds from the film on my analog synth. Though I really want to get some of these things happening in the digital domain. So this is my first Csound attempt at the recognizer.
Listen at SoundCloud
The first version of my recognizer was done on my analog modular synth. Truth be told, I can get a much better sound out of my modular than out of Csound. Though my Csound version is coming along. I think the secret lies in the filters, so I’ll have to do a lot more trial and error before I get it sounding right.
According to Wikia, the original recognizer sound was realized on a Prophet-5 synthesizer by sound designer Frank Serafine; he modified the “helicopter” preset.
I am in need of big sounds. So I created Oscil Omega, a user-defined opcode that generates one or more detuned table oscillators. The number of oscillators is up to the user; need 1,000? Not a problem. Depending on the number of oscillators, the detuning value, and the waveform, one can get everything from super-digital flange-like sounds to massive tone clusters out of Oscil Omega.
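The core idea can be sketched in Python: sum N oscillators spread symmetrically around a base frequency. This mirrors the concept only; the actual UDO’s parameters, detune law, and table-lookup mechanics are not shown in the post, so everything here is my own illustration.

```python
import math

def oscil_omega(base_freq, num_oscils, detune, duration, sr=8000):
    """Sum num_oscils sine oscillators spread +/- detune Hz around
    base_freq, normalized by the oscillator count."""
    freqs = []
    for i in range(num_oscils):
        if num_oscils > 1:
            # Spread evenly from -detune to +detune
            offset = detune * (2.0 * i / (num_oscils - 1) - 1.0)
        else:
            offset = 0.0
        freqs.append(base_freq + offset)
    n = int(duration * sr)
    out = []
    for s in range(n):
        t = s / sr
        out.append(sum(math.sin(2 * math.pi * f * t) for f in freqs)
                   / num_oscils)
    return out

# Eight oscillators, +/- 3 Hz around 220 Hz: a slowly beating cluster.
cluster = oscil_omega(220.0, num_oscils=8, detune=3.0, duration=0.1)
```

With a small detune you get chorus/flange-style beating; crank the detune and the count up and the beating thickens into the tone clusters mentioned above.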
Perhaps it was the beautiful Boston weather that inspired me to take a walk down by the river that day. I eventually found myself in front of the Museum of Science, where I had once attended a Nine Inch Nails laser show in their planetarium. I am quite fond of planetariums, with or without the industrial music. So I bought a ticket, and watched a show that featured the SETI Institute and signals from space. After the show, I walked back to my apartment and started coding, completing Message From Another Planet within six hours’ time. Days later, SETI@Home was launched.
I plugged my Doepfer + Cwejman + Plan-B analog synth into Csound to get a sense of how to better integrate the two systems.
In a nutshell, I created a simple Csound granular synthesizer that emits snippets of an acoustic guitar recording. Csound takes two inputs from the modular synth. The first input triggers a new grain, for which I use the unipolar pulse wave from the Doepfer A-146 LFO2; this allows me to control the rate at which grains are fired. The second input determines where in the audio recording the grain playhead begins. To move forward through the audio, I use a saw-up wave. A saw-down wave moves through the soundfile in reverse. A sine or triangle wave ping-pongs. Noise randomly selects a start position. Etc…
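The two control inputs interact roughly like this Python sketch. It is a hypothetical illustration of the mapping, not the actual Csound instrument: the function name, the control-rate representation, and the grain length are all invented for the example.

```python
def fire_grains(pulse, position, source_len, grain_len=100):
    """pulse: 0/1 control signal sampled per control period.
    position: 0..1 control signal sampled per control period.
    A rising edge on pulse fires a grain; position picks where in the
    source recording (source_len samples) the grain starts. Returns
    (control_index, start_sample, end_sample) per grain fired."""
    grains = []
    prev = 0
    for n, (p, pos) in enumerate(zip(pulse, position)):
        if p == 1 and prev == 0:          # rising edge -> new grain
            start = int(pos * (source_len - grain_len))
            grains.append((n, start, start + grain_len))
        prev = p
    return grains

# A saw-up position signal walks the playhead forward through the file:
pulse    = [1, 0, 1, 0, 1, 0]
position = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
print(fire_grains(pulse, position, source_len=1100))
# -> [(0, 0, 100), (2, 400, 500), (4, 800, 900)]
```

Swap the position signal for a saw-down, sine, or noise source and you get the reverse, ping-pong, and random scrubbing behaviors described above, with no change to the grain-firing logic itself.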
There’s definitely some potential here, though I’m learning it takes quite a bit of setup time to get right, and getting it right has yet to be achieved. I’m determined to stick with it, because the potential of getting these two things talking to each other will be well worth the effort.
The Csound Blog now has a Twitter account. So if RSS isn’t your thing, you can get all new Csound Blog updates there instead. I’ll also be retweeting any interesting Csound tweets that I come across, along with anything relevant to computer music, DSP, and synthesizers.
I also made the graphic above for promotional purposes. Do what you want with it.
This is my second Csound piece, composed while I was enrolled in Dr. Boulanger’s Csound class at the Berklee College of Music. The code has been cleaned up and slightly refactored, though I left all the audio/synth characteristics intact.
What can I say about this piece? The title is a reference to a pro-nuclear movie shown by Waylon Smithers in an episode of The Simpsons (“Homer’s Odyssey”). I picked the name at the last minute on the way over to the ‘Over The Edge’ concert where the piece premiered. The code became the foundation for a Csound tutorial I wrote as part of my Berklee senior project, Exploring Analogue Synth Techniques. The sequencer riff is more or less *borrowed* from Nine Inch Nails’ “Head Like a Hole.” The drum samples were recorded off of my Roland MC-303 Groovebox. I also heard through the grapevine that there were some traditional computer music composers at the time who were none too happy to see a sophisticated computer music program like Csound being used in such a way. Back in 1998, it took a good 10-15 minutes to render.
I woke up this morning to an excellent interview with Csound composer Steven Yi about his graphical composition environment Blue. The interview is rich with technological details about the software, including bits about its development history. If you are unfamiliar with Blue, the interview will get you up to speed on what it is and what it can do. If you haven’t played with it, do yourself a favor and give it a try.