Let’s set the wayback machine to January 2001. This is around the time I took my first steps into designing an additive synthesizer. I’m not sure when user-defined opcodes were introduced, though there is a good chance they did not exist yet. And if they did, I had no idea of their existence. The same goes for the event series of opcodes. In my legacy code, each overtone, along with its supporting envelopes and transfer functions, was written out explicitly. I was a Perl junkie at the time, so I wrote scripts that would generate the instruments for me.
The example I’m posting today is the legacy code from 2001. I did not change the code, except for converting tabs to spaces and placing the orc/sco pair into a csd.
In my new additive synth, I’m employing a recursive user-defined opcode technique, which I first read about in Steven Yi’s Csound Journal articles Control Flow Part I and Part II.
If you look at Part II, Steven actually demos an additive synth that is eerily similar to the core design of the one I’m in the process of making. This means I either independently came up with a similar design or, more likely, I’m suffering from a bout of cryptomnesia. Either way, if you haven’t studied these two articles, then perhaps it’s time you make a weekend project out of them; they are pure gold.
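To give a feel for the recursive user-defined opcode technique, here is a minimal sketch in the spirit of Steven’s articles. The opcode name `PartialStack` and all the parameter values are my own placeholders, not code from the synth being described: each call generates one partial, then calls itself for the next one until the top partial is reached.

```csound
giSine ftgen 0, 0, 8192, 10, 1     ; shared sine table for the oscillators

;; PartialStack is a hypothetical name; the recursion pattern follows
;; the one described in Control Flow Part II.
opcode PartialStack, a, kkii
  kamp, kcps, ipart, imax xin
  ;; one partial: amplitude 1/n at frequency n*f0 gives a sawtooth spectrum
  asig oscili kamp/ipart, kcps*ipart, giSine
  ;; recurse until the highest partial has been generated
  if ipart < imax then
    arest PartialStack kamp, kcps, ipart+1, imax
    asig = asig + arest
  endif
  xout asig
endop

instr 1
  aout PartialStack 0.2, 220, 1, 32   ; 32-partial band-limited sawtooth
  out aout
endin
```

Because the partial count is an i-time argument, Csound unrolls the recursion at init time, so the per-sample cost is the same as writing the 32 oscillators out by hand.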
About 12 years ago, someone told me that additive synthesis would never be practical. My initial reaction was, “we’ll see about that.” This person came from a background of modular subtractive synths and was quite knowledgeable. From that perspective, I could see their point: who wants to take the time to create complex envelopes for each individual harmonic? My mind has often wandered back to this incident.
Today, additive synths are becoming more common thanks to faster computers. Many of their UIs help programmers manage the large amount of complexity additive synthesis brings to the table. There are many useful and valid approaches, each with its own strengths and weaknesses.
My approach has been floating around my head on and off for about a decade. The first iteration was completed around 2002 and was shelved until recently. Today’s csd is a continuation of the second iteration of the design, which I had originally intended to use with Fragments (see here), but ran out of time. For the next couple of weeks, I plan on taking this instrument to its illogical conclusion (I have no idea what it’ll be like when it’s done). When I am finished, I’ll write an in-depth article for The Csound Journal on the final design.
The premise of my approach is to use f-tables as a shortcut for specifying and controlling additive synthesis data. In today’s example, the audio generator produces a band-limited sawtooth wave with 32 harmonics. Before the sine waves are generated with oscil, however, the synth data is run through two transfer functions stored as f-tables. One transfer function changes the amplitudes of the harmonics, emulating the EQ of a virtual acoustic body. The other bends the frequencies, causing frequency distortions. The frequencies continue to be processed by the transfer functions even as they are modulated, which I believe is key to achieving convincing acoustic viability.
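Here is one possible reading of that idea as a single-partial sketch. The table names, the GEN07 ramps, and the choice to index the frequency table by the current (normalized) frequency are all my assumptions for illustration, not the actual tables from today’s csd:

```csound
giSine  ftgen 0, 0, 8192, 10, 1
;; Hypothetical transfer-function tables: simple GEN07 ramps stand in
;; for whatever curves (or noise) the real tables would contain.
giAmpTF ftgen 0, 0, 128, -7, 1, 128, 0.2    ; amplitude shaping: "virtual body EQ"
giFrqTF ftgen 0, 0, 128, -7, 1, 128, 1.05   ; frequency bending factors

instr 2
  kcps  line 220, p3, 230          ; a slowly modulated fundamental
  iprt  =    p4                    ; partial number 1..32, from the score
  kfrq  =    kcps * iprt           ; raw partial frequency
  ;; amplitude transfer function, indexed by partial number
  kamp  tablei (iprt-1)/32, giAmpTF, 1
  ;; frequency transfer function, indexed by the current frequency
  ;; (normalized to Nyquist), so modulation keeps passing through it
  kbend tablei kfrq/(sr/2), giFrqTF, 1
  asig  oscili 0.2*kamp/iprt, kfrq*kbend, giSine
  out   asig
endin
```

Swapping in a different pair of tables changes the whole character of the instrument without touching the orchestra code, which is the shortcut the f-table premise is after.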
This sounds similar to a bowed string instrument because the amplitude transfer function is filled with the right amount of bipolar noise. The truth is, I had no intention of creating a string-like sound; I was just toying with the synth and thought I’d try something drastic, like using a table filled with noise. After the discovery, I spent considerable time tweaking the values, trying to get it to sound a little more expressive.
I should warn you: there are some clear cases of aliasing in today’s example. I think I know what’s causing it, but I’ll have to go back and run some tests to be certain. In the meantime, I hope you enjoy it.
Since Friday, I’ve had almost no time to work on Fragments. In the little time I did have, I put together an additive synthesizer prototype, which I call Tadd for the time being, short for Table Additive Synthesizer. Perhaps I should be calling it tads? It’s very rough, glitchy, and a bit poppy, but I love it.
I’m not going to go into the details of how it works here. It’s still in early development, and I plan on using it only in a rough state for Fragments; it’s a time-constraint thing. Beyond this Bohlen-Pierce piece, I have some big ideas for Tadd and will most likely write an article about it for the Csound Journal. In the meantime, if anyone has any questions about it, I’d be happy to answer them.
Where am I with the piece? I now have a pretty good idea of what my final instrumentation will be. Effects will play a major role. I’m considering writing a stutter delay buffer, and possibly a trippy space-dub echo machine. Note-generating algorithms will be used heavily. Timbre-wise, the piece could use one more instrument that contrasts with the FM and additive sounds. I’m thinking granular.
There is the question of whether or not to do this in 4-channel quadraphonic surround. If there is time, I’ll give it a try, though technically speaking, I’m not set up for it here. I’m not ruling it out yet.
As for a metaphor for what the piece is about… that’s just now coming to me, and it’ll take a few more days before it truly begins to manifest. The one thing I hope to avoid is using the Bohlen-Pierce scale as a gimmick rather than as a primary element of the piece.