Synthesis with computers has changed dramatically since the mid-nineties, when Jean-Luc and I first began using Csound. Back then, computers were mostly devices that controlled external synths via MIDI. There were trackers and wave editors, but not much existed in terms of pure digital synthesis.
There was Csound. Though computers weren’t fast enough for real-time work, Csound gave users the opportunity to work with a fully loaded universal modular synthesizer. Those of us fortunate enough to have personal computers could stay in our apartments and make experimental noise without having to trek to the music labs.
Fast forward to today: laptop computers are common, there is no shortage of real-time music apps, most software synths provide instant gratification, and even our phones are full-fledged synthesizers.
When Jean-Luc and I started talking a few months ago about the possibility of him redesigning his software synthesis course at NYU, one key issue popped up: “Why is Csound still relevant?” From our perspective, many of the reasons we were originally interested in learning Csound, spending hours writing code and waiting for renders to finish, no longer apply in today’s world of countless apps and ultra-fast CPUs. Yet Csound is still here, and it doesn’t appear to be going anywhere. Why?
There isn’t anything else quite like Csound out there. Yes, Csound is powerful and modular, and it has a huge library of legacy instruments. But more than that, one of its most defining qualities is that it is different, and being different encourages composers to write music that is also different.
I’ve been wanting to explore Csound for real-time, externally controlled synthesis. It does have a few quirks that work against this goal, though; I’m thinking mainly of the separation between orchestras and scores. That separation makes perfect sense in a non-real-time system, but some strange things happen when you want to build, say, a real-time MIDI synthesizer. The biggest of these are having to store sample data in ftables in the score, and then that odd requirement of telling the score how long to let the synth run, which turns every MIDI synth made in Csound into a time-limited demo of sorts.
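To make that last point concrete, here is a minimal sketch of a real-time MIDI synth in a single .csd file, written from memory, so treat the details as approximate. Notice the score: even though everything is MIDI-driven, the dummy f0 statement still decides how long the engine runs, and after 60 seconds the “synth” simply stops responding.

    <CsoundSynthesizer>
    <CsOptions>
    -odac -M0            ; real-time audio out, MIDI input from device 0
    </CsOptions>
    <CsInstruments>
    sr     = 44100
    ksmps  = 32
    nchnls = 2
    0dbfs  = 1

    giSine ftgen 0, 0, 8192, 10, 1   ; sine table, built in the orchestra

    instr 1                   ; triggered by notes on MIDI channel 1
      icps cpsmidi            ; incoming note number -> frequency
      iamp ampmidi 0.5        ; velocity scaled to the range 0..0.5
      asig oscili iamp, icps, giSine
      outs asig, asig
    endin
    </CsInstruments>
    <CsScore>
    f0 60    ; dummy statement: performance ends after 60 seconds
    </CsScore>
    </CsoundSynthesizer>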
Of course, there may be ways around all this by now, as it has been a few years since I dove into anything. I seem to recall something about new sample-playback opcodes, which may eliminate the ftable issue. I need to look into it more.
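For what it’s worth, the opcode I was half-remembering might be diskin2, which, if I have it right, streams a sound file straight from disk, so no GEN01 table in the score is needed. A rough sketch, assuming a stereo file (the filename mysample.wav is just a placeholder):

    instr 1
      ; stream the file from disk at normal speed (pitch ratio of 1)
      aL, aR diskin2 "mysample.wav", 1
      outs aL, aR
    endin

And I believe ftgen lets you build tables from inside the orchestra anyway, which would sidestep the score half of the problem entirely.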
Is my ignorance showing yet? :)
Darren, I agree there are some definite quirks to Csound, though most of them are easily dealt with. For example, if I need the Csound engine to run for an unspecified amount of time, I create a dummy instrument and keep it playing for 24 hours or longer.
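For the curious, the trick is only a couple of lines; a sketch, with an arbitrary instrument number and duration:

    ; orchestra
    instr 999      ; dummy instrument: no code, makes no sound
    endin

    ; score
    i999 0 86400   ; hold the engine open for 24 hours (86400 seconds)

If I remember correctly, a bare f0 86400 statement in the score does much the same thing without the dummy instrument.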
MIDI is one of the trickier subjects, because of the way things are internally routed. I’m looking into ways of bypassing the MIDI issues by using Processing instead of Csound’s built-in MIDI.
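In the meantime, for anyone staying with the built-in MIDI, my understanding is that the massign opcode gives you a handle on that internal routing; something like this in the orchestra header (hedged, from memory, and the instrument numbers are arbitrary):

    massign 0, 0     ; detach Csound's default channel-to-instrument routing
    massign 1, 10    ; route MIDI channel 1 to instr 10
    massign 2, 20    ; route MIDI channel 2 to instr 20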
I think what needs to happen is for someone to write a tightly focused tutorial on doing sample work with Csound. I’ll definitely cover some of these details this semester, which should help get the ball rolling.
And no, your ignorance isn’t showing as you’re bringing up some real issues. :)
Well, I guess in the end, the people using Csound are the ones who already know how. Professionals who need to “just get work done” use other, much more expensive tools. I just really like how Csound can be made to not sound like any other software synthesis package on the market. But if the Csound people are really serious about staying relevant in the 21st century, whatever that may mean, addressing some of these quirks would be a good place to start. That’s just my opinion, though, and can be taken with a grain of salt. :)
Hmm, I’ve wanted to get into computer-generated music for years, but for some reason I just can’t commit the time to deal with the learning curve yet :( Csound looks like something I’d use, but waaaay down the line, right?
Csound is still going to be relevant, no worries. The current work on a parallel-processing version will hopefully keep Csound ahead of the pack for rendered sound quality and performance.
I spent time and effort learning Csound, then, after reading the usual posts about how it is not good for real-time work, decided I should try other things. After spending a pile of money on a DAW, VSTs, and Max/MSP, and considerable time learning their quirks, guess what I found out?
I was getting better performance (and far more control) with Csound. Now Max/MSP and the DAW/VSTs sit unused on my hard drive. SuperCollider is probably better if your entire focus is real-time, but I will stick with the better disk-render quality of Csound.
My advice, particularly if your goals are xenharmonic and/or algorithmic music, is to learn Csound or SuperCollider. Just be aware that these are programming languages; if you are not up to learning a programming language, then they are not for you. Both are comparatively easy languages to learn, much easier than C++ or Java, for example.
No doubt if you are a professional or just plain wealthy, you can buy Kyma and all kinds of amazing synths. The relevance of Csound in the future is assured by the fact that most people are neither wealthy nor professional. ;)