Looper Prototype

I built a performable looper prototype this past Sunday night. At this point, all it can do is start and stop two loops and a metronome. Rough around the edges is an understatement, though there are a few points of interest worth mentioning.

Rather than use Csound’s built-in tempo engine, I built one from a phasor. Players can trigger, retrigger and stop loops from the ASCII keyboard. Triggering is also quantized to the beat, similar to how clips are handled in Ableton Live.

All in all, the whole thing came together in about two hours, plus another hour to tidy things up. Sure, there are some glaring flaws to the current design, but these things will improve with revision. For example, only two loops are supported, and they are hardwired into the engine. Samples and their mappings need greater levels of flexibility if this thing is going to be useful to anyone.

If you’re curious to try it out, download looper_prototype.csd. You will also need two samples from the OLPC sound library. The first is “110 Kool Skool II.wav” by BT, included in BT44.zip. The second is “BNGABetterArpeggio01.wav” by Flavio Gaete, included in FlavioGaete44.zip. Just drop these two samples into the same folder as the csd before running. You can replace these with your own loops; just make sure that you also update the values of the tempo loop macros, right above where the samples are loaded in the orchestra.

Here are the keys (lowercase only):

a – trigger loop a
z – stop loop a
s – trigger loop s
x – stop loop s
d – turn on metronome
c – turn off metronome

I’ll post a newer version once improvements to the design have been made.

Deep Synth — Dynamically Generated Oscillators

The situation — You want an instrument that can play any number of oscillators, determined by a p-field value in the score. The problem — Unit generators cannot be dynamically created in an instrument with a simple loop. One possible solution — Multiple events can be generated in a loop, with each event triggering an oscillator-based instrument.

Download: Deep_Synth.csd
Listen: Deep_Synth.mp3

The Csound file Deep_Synth.csd provides an example of how to dynamically generate oscillators using the compound instrument technique. A compound instrument is two or more instruments that operate as a single functioning unit. This particular compound instrument is built from two instruments: DeepSynth and SynthEngine. SynthEngine is, you guessed it, the synth engine, while DeepSynth is a player instrument that generates multiple events for SynthEngine using the opcodes loop_lt and event_i:

i_index = 0
loop_start:
    ...
    event_i "i", $SynthEngine, 0, idur, iamp, ipch, iattack, idecay, ipan, irange, icps_min, icps_max, ifn
loop_lt i_index, 1, ivoices, loop_start

If you are wondering why we can’t just place a unit generator, such as oscil, inside of a loop, read Steven Yi’s articles Control Flow Pt I and Pt II. Pay special attention to the section IV. Recursion – Technical Explanation near the end of Pt. II. Not only does Mr. Yi do an excellent job explaining these technical reasons, but he also provides another applicable solution for creating multiple unit generator instances utilizing recursion and user-defined opcodes.

Sound Design

The instrument SynthEngine uses a single wavetable oscillator, an amplitude envelope and the jitter opcode to randomly modulate frequency. A single instance of DeepSynth can generate anywhere from one instance of SynthEngine to 10,000 or more. Users have control over the depth of frequency modulation, as well as the rate at which jitter ramps from one random value to the next. Panning between instances of SynthEngine is evenly distributed.
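An even stereo spread across N voices can be sketched like this (hypothetical Python; the function name is mine, and the csd computes its pan values in Csound, of course):

```python
def pan_positions(voices):
    """Distribute pan positions evenly across the stereo field
    (0.0 = hard left, 1.0 = hard right).

    A single voice sits in the center; multiple voices step evenly
    from left to right, the kind of spread DeepSynth performs.
    """
    if voices == 1:
        return [0.5]
    return [i / (voices - 1) for i in range(voices)]

print(pan_positions(3))  # [0.0, 0.5, 1.0]
```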

“Turn it up!” – Abe Simpson

The name DeepSynth is an homage to Dr. James A. Moorer’s piece Deep Note, also known as the infamous THX Logo Theme. Very early in the design, it became evident that DeepSynth is capable of making very Deep Note-like drones, since it utilizes some of the defining techniques used in Dr. Moorer’s piece.

I highly recommend reading Recreating the THX Deep Note by Batuhan Bozkurt at EarSlap. The author conveniently walks readers through each step of the process, providing both audio and SuperCollider code examples. If you have ever yearned to create that amazing sound for yourself, here’s your opportunity.

Sensing Angels

Sensing Angels

by Charles Gran
in three movements
for solo clarinet and electronics with improvisation

Performed by Jesse Krebs
Recorded Sunday, November 8, 2009

…Sensing Angels was conceived as a solo piece in which the acoustic sound of the clarinet would be manipulated in real time by a computer. Today, this isn’t so remarkable and can be achieved by a variety of means. For the project I chose to use Csound primarily as a challenge, but also for a certain kind of historical connection.

If the clarinet is an instrument with history, so is Csound. A direct descendant of software created at Bell Labs in the 1950s[2], it might be considered antique in the same way the clarinet is. Of course, it turns out they are both quite modern. Like most things, modernity is about use. Today, the use of a command-line interface and computer code for the creation of music is as much a point-of-view as a practicality…

Listen

Normalize Your Render

Here’s a somewhat controversial hack that lets you normalize a Csound composition during the rendering process, scaling the amplitudes at Csound’s 32-bit or 64-bit internal resolution.

This hack requires two renders. The first time you render a composition, look for the overall amps value at the end of the output messages. It will look something like this:

end of score.           overall amps:  0.64963

Next, you need to calculate the +3dB value above the overall amps value. Here is a function that does the trick, where x is overall amps:

f(x) = 10 ** (3.0 / 20.0) * x

Or if you want to skip most of the math involved:

f(x) = 1.4125375446227544 * x

Plug 0.64963 into x, and you get 0.91762676511328001. Set 0dbfs to this:

sr = 44100
kr = 4410
ksmps = 10
nchnls = 1
;0dbfs = 1.0
0dbfs = 0.91762676511328001  ; 3dB normalization

Render again. Done.
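If you’d rather not do the arithmetic by hand, the calculation can be sketched in a few lines of Python (the function name is mine, not part of any csd):

```python
def normalized_0dbfs(overall_amps, headroom_db=3.0):
    """Return the 0dbfs value that normalizes a render whose peak is
    `overall_amps`, leaving `headroom_db` decibels of headroom.
    Setting 0dbfs to this value puts the render's peak at -3 dBFS."""
    return 10 ** (headroom_db / 20.0) * overall_amps

print(normalized_0dbfs(0.64963))  # 0.9176267651132...
```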

This is a Hack

Though I do love this technique, I highly recommend that you use it only for a few situations, and certainly advise that you don’t make a habit of it.

I like to use this when I’m finished with a composition and ready to make a master audio file. It allows me to squeeze out a tiny bit of extra sound quality by scaling the audio inside csound64’s internal 64-bit float resolution, which, in theory, does a better job than applying the same process to a rendered file at a lower bit resolution.

This can also be a last-ditch effort for fixing problems with a final project that is due in 15 minutes. You have some minor clipping and no time to globally adjust all your amplitudes? This could save you in a pinch.

Warning!!

This only works with absolute amplitudes. If there are any relative values or opcodes in play, then this technique will fail. For example, these two lines of instrument code will not work:

a1 oscils p4 * 0dbfs, 440, 0  ; Scales the amplitude relative to 0dbfs
a2 diskin2 "foo.wav", 1       ; Scales the sample relative to 0dbfs

There are some other risks involved. If random elements exist in the composition, then the overall amps may vary between renders, which could lead to inaccurate normalization. I also highly suspect that the overall amps value printed at the end of the render is truncated, and thus not pinpoint accurate — though close enough for most situations.

Are you setting 0dbfs to 1.0?

Csound’s default output range is +/- 32767. Setting amplitudes with these numbers is, more or less, the hard way. The easy way is to use a normalized range of +/- 1. You can alter Csound’s output range with the 0dbfs header statement by placing it beneath the standard orchestra header, like this:

sr = 44100
kr = 4410
ksmps = 10
nchnls = 1
0dbfs = 1.0

The issue with the default 16-bit range is that it makes little sense in a world of multiple bit depths (8, 16, 24, 32, 64, etc.). If one is rendering a 16-bit file, an argument could be made in favor of the default range, since there is a one-to-one mapping of input to output values. However, once one leaves the realm of 16-bit, values of +/- 32767 become arbitrary. A normalized range, on the other hand, is not married to any single bit depth, and translates well to other resolutions.
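A tiny sketch (hypothetical Python, not Csound) of why the normalized range travels better: the same +/- 1 value scales cleanly to any integer bit depth, while +/- 32767 is only “native” to one of them.

```python
def to_int_depth(normalized, bits):
    """Scale a normalized sample (-1.0 .. 1.0) to the full-scale value
    of a signed integer format with the given bit depth."""
    return round(normalized * (2 ** (bits - 1) - 1))

# A full-scale normalized sample maps to full scale at any resolution:
print(to_int_depth(1.0, 16))  # 32767
print(to_int_depth(1.0, 24))  # 8388607
```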

Besides making it easier to compose and design instruments, there are other practical reasons to adopt this good programming practice. The Csound manual says, “Using 0dbfs=1 is in accordance to industry practice, as ranges from -1 to 1 are used in most commercial plugin formats and in most other synthesis systems like Pure Data.” If you ever use Csound in conjunction with PD, MaxMSP, etc., this is the range you will use.

Make it a habit. Start every new orchestra as if 0dbfs is every bit as important as sr, kr, ksmps and nchnls. And always set it to 1 (there are exceptions).

In case you are wondering why more orchestras out in the wild haven’t adopted this practice… 0dbfs first came into play in Csound version 4.10 in 2002. Most of the existing knowledge base, such as books and tutorials, was written prior to this.

Patterns, Gestures and Behaviors

Think of an electric guitar as if it were a Csound instrument, metaphorically speaking. Potential ways in which someone can interact with this guitar include: pick, fingers, ebow, power drill, slide, capo, etc. While the guitar is always a guitar, the output changes with the way a person interfaces with it. This concept is every bit as true for digital instruments as for physical ones.

The original Splice and Stutter came with a single loop-based sample engine, aptly named SampleEngine. This engine is played with three interface instruments (Basic, Stutter and Random) each with its own unique musical behavior. Today’s example leaves SampleEngine exactly as is, and continues to demonstrate interface versatility with three new instruments: RandomPhrase, Swell and Flam.

The designs of these new instruments were influenced by the original Splice and Stutter score. After I had completed the demo, I noticed some gestures/phrases/effects that were translatable into instrument behaviors.

Download: splice_and_stutter_v2.csd, BT Sample Pack (13.2 MB)

RandomPhrase

If you look at the end of the score in the original splice_and_stutter.csd, you’ll see a 32-line phrase using the Random instrument. With the exception of the start times, the p-fields for each line are identical.

i $Random 32.25 1 0.25 0.333 100
i $Random +     . .    .     .
...
i $Random +     . .    .     .

That is a pattern, and patterns are translatable into behaviors. RandomPhrase achieves this by generating multiple events over an interval of time, specified in p-field 4, with events spaced evenly apart by the value in p-field 5.

i $RandomPhrase 32.25 1 32 1 0.25 0.2 100

In the new score, the original 32 lines of code are consolidated into a single call to RandomPhrase. This means less code to maintain, while giving the loop-based sampler a new behavior for me to play with.
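A sketch (hypothetical Python; the real work happens inside RandomPhrase’s init-time loop) of how one such call expands into 32 evenly spaced events:

```python
def phrase_event_times(start, interval, spacing):
    """Return evenly spaced event start times covering `interval` beats,
    beginning at `start` and spaced `spacing` apart — the pattern
    RandomPhrase automates in place of 32 hand-written score lines."""
    times = []
    t = start
    while t < start + interval:
        times.append(round(t, 6))
        t += spacing
    return times

# One call replaces the original 32-line phrase:
print(len(phrase_event_times(32.25, 32, 1)))  # 32
```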

Important note. RandomPhrase generates events for instrument Random, which generates events for instrument SampleEngine, which produces the sound. You can create new interfaces out of other interfaces.

Swell

Another pattern revealed itself with this gesture:

i $Stutter 7    1 0.25 0.5     100 12 [1 / 12]
i $Stutter 7.25 1 0.25 0.25    100 .  [1 / 12]
i $Stutter 7.5  1 0.25 0.125   100 .  [1 / 12]
i $Stutter 7.75 1 0.25 0.0625   100 .  [1 / 12]

Unlike the randomly generated notes from the previous example, the amplitude column (p-field 5) uses a different value for each note event in this phrase. Upon closer inspection, these values themselves form a pattern: each successive amplitude is halved. This p-field pattern, and others like it, can be translated into a behavioral instrument.

The Swell instrument lets users specify a multiplier, in p-field 6, that changes the amplitude value for each successive note. What’s good for amplitude is good for other things, so I applied the same basic principle to the stutter window, expanding the usefulness of the instrument. A swell gesture looks like this in the new score:

i $Swell 7 1 1 0.5 0.5 0.25 12 100 [1 / 12] 1
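The successive-value series Swell automates can be sketched like this (hypothetical Python, not from the csd):

```python
def swell_series(initial, multiplier, count):
    """Return `count` successive values, each the previous one scaled by
    `multiplier` — the pattern Swell applies to amplitude (and, by the
    same principle, to the stutter window)."""
    return [initial * multiplier ** i for i in range(count)]

# A multiplier of 0.5 reproduces the halving pattern from the original score:
print(swell_series(0.5, 0.5, 4))  # [0.5, 0.25, 0.125, 0.0625]
```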

Flam

In the original score code, I created a flam effect with two Basic events:

i $Basic 11    1 0.5 0.6 100 7
i $Basic 11.02 1 0.5 0.2 100 1

One could miss the intention of these two lines while reading the score. By dedicating an instrument to the flam, the score becomes easier to read. A flam effect is also musically interesting enough to justify an instrument of its own.

i $Flam 11 1 0.25 0.3 100 7 0.02 2 0

Splice and Stutter

Today on The Csound Blog, we’re going to learn how to build a loop-based sampler out of common household ingredients.

Listen: mp3

Download: splice_and_stutter.csd, BT Sample Pack (13.2 MB)

Here’s a brief rundown of today’s example. A drum loop is loaded into an f-table with the instrument LoadSample. The instrument SampleEngine plays back selective parts of the loop. Instruments Basic, Stutter and Random are interface instruments that simplify the process of triggering samples.

The LoadSample instrument loads a sample into an f-table, while storing information about the sample in an ad hoc data structure created from chn busses. This isn’t the place to go into detail; I will say that it is akin to a C struct, and stores the file name, sample rate, length of the file (in samples), the tempo, and the number of beats (quarter notes) in the loop. All the user-defined opcodes are support opcodes for this data structure.

SampleEngine is the heart of this piece. It works by triggering discrete notes from within the loop, with the loop offset determined by the input it receives via p-field 7. The offset unit is beats. Let’s say the loop is 16 beats long. A passed value of 0 plays the first quarter note. A value of 1 plays the second quarter note of the loop, etc.
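The beat-to-offset arithmetic described above can be sketched in Python (the function names are mine; LoadSample stores the tempo and sample rate that the calculation needs):

```python
def beat_offset_seconds(beat, tempo_bpm):
    """Convert a beat offset within a loop into seconds from the loop's start."""
    return beat * 60.0 / tempo_bpm

def beat_offset_samples(beat, tempo_bpm, sample_rate):
    """Convert a beat offset into a sample-frame offset into the loaded f-table."""
    return round(beat_offset_seconds(beat, tempo_bpm) * sample_rate)

# At 105 BPM (the tempo of "105 Blanketed Lama"), beat 1 lands 25200 frames in:
print(beat_offset_samples(1, 105, 44100))  # 25200
```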

This instrument is designed to be played by other instruments, rather than being triggered directly by a score i-event. That is…

Instead of having multiple samplers that do various things, I created a single complex sampler engine that is capable of a wider range of tricks. The problem with complex instruments in Csound is that score events become cumbersome to write and certainly hard to read, especially when dealing with several parameters. This is where the interface instruments come into play.

The interface instruments Basic, Stutter and Random help us tame the complexity of SampleEngine by reducing the number of p-fields needed by the score, and by defining clear behaviors. Basic is a no-frills controller that simply triggers part of the loop. Stutter, well, stutters. Random randomly picks a beat and plays it.

A great added benefit of this approach is that the score is much easier to read. Instead of trying to figure out whether a particular i-event stutters by scanning a row of numbers, one can just casually look at the name of the instrument used. To put it another way, does this stutter?

i 5 7 1 0.25 0.5 100 12 0.083 1 0

How about this?

i $Stutter 7 1 0.25 0.5 100 12 [1 / 12]

There are lots of ins and outs to today’s example, and I admit I skipped over most of them. If there is a particular issue you wish me to expand on, comment below, and I’ll make it a priority to blog about it in the future.

This sampler is a derivative work based on an instrument co-developed by Jean-Luc Sinclair (aka Jean-Luc Cohen) and myself back in 2006. The loop in today’s example is by BT (aka Brian Transeau), released under a Creative Commons attribution license as part of the OLPC Sample Library. You can obtain this sample and others here. (13.2 MB) You will need the loop “105 Blanketed Lama.wav” in order to run the csd file.

Update: There is an issue with the CSD file not running on Windows Me (or earlier). There is now a fix. Download the new splice_and_stutter.csd, and see line 269 for details. This only applies to users of Windows Me (or earlier versions).

Thanks to everyone on the Csound Mailing List who helped straighten this out.

Csound Vs. Music V — FIGHT! Pt. 4

Let’s look again at the code presented on page 45 of The Technology of Computer Music:

INS 0 1 ;
OSC P5 P6 B2 F2 P30 ;
OUT B2 B1 ;
END ;
GEN 0 1 2 0 0 .999 50 .999 205 -.999 306 -.999 461 0 511 ;
NOT 0 1 .50 125 8.45 ;
NOT .75 1 .17 250 8.45 ;
NOT 1.00 1 .50 500 8.45 ;
NOT 1.75 1 .17 1000 8.93 ;
NOT 2.00 1 .95 2000 10.04 ;
NOT 3.00 1 .95 1000 8.45 ;
NOT 4.00 1 .50 500 8.93 ;
NOT 4.75 1 .17 500 8.93 ;
NOT 5.00 1 .50 700 8.93 ;
NOT 5.75 1 .17 1000 13.39 ;
NOT 6.00 1 1.95 2000 12.65 ;
TER 8.00 ;

The following list compares Music V mnemonics to their Csound opcode equivalents, in the order in which they appear in the previous example:

INS => instr
OSC => oscil
OUT => out
END => endin
GEN => f
NOT => i
TER => e

A Music V instrument definition begins with INS and is terminated with END. This is nearly identical to Csound’s instr and endin. Music V does allow one capability that Csound lacks: instrument definitions in Music V have a parameter that lets users specify the time at which they come into being. Hopefully someday, Csound will include a more mature version of this feature, where users can dynamically create and destroy instruments in the middle of a score or performance. *crosses fingers*

Though OSC and oscil are similar, there are some noticeable differences in their implementations. In part 3, I mentioned that Csound supports control signals, and Music V does not. The oscil opcode is an example of a unit generator that can output both audio rate and control rate signals.

Both support a simple amplitude value, but diverge on frequency. Instead of a frequency parameter, Music V has an increment parameter that determines the frequency of the unit generator. The increment parameter specifies the number of samples that are sequentially skipped in a stored function for each pass. The following equation converts increment values to frequencies (Mathews, pg. 127):

frequency = sampling rate * increment / (function length in samples - 1)

The first note played in the example specifies an increment value of 8.45. Let’s do the math:

330.72407045009783 = 20000 * 8.45 / (512 - 1)

An increment value of 8.45 translates to roughly 330 Hz, provided that the sampling rate is 20kHz and the table size is 512.
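Mathews’ conversion is easy to verify in Python (a throwaway sketch, not part of Music V or Csound):

```python
def increment_to_freq(increment, sampling_rate=20000, table_len=512):
    """Music V increment-to-frequency conversion (Mathews, pg. 127):
    frequency = sampling rate * increment / (function length - 1)."""
    return sampling_rate * increment / (table_len - 1)

print(increment_to_freq(8.45))  # roughly 330.72 Hz
```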

The problem with Music V’s method is that any changes made to the size of the stored function table or the sampling rate greatly affects the frequencies of the oscillators. The reason why I modified my copy of Music V is because the examples in the book were designed to render with a sampling rate of 20kHz, not 44.1kHz. Thus, frequencies were higher than they were supposed to be, and the LFOs were right out.

This is where Csound is much more user-friendly: users specify a frequency, and all the increment calculations are done automatically, making life a little easier for us Csounders.

Thanks for reading this first Csound Blog mini-series. Hopefully you’ll stay tuned. A lot of great topics are coming up, including coding techniques, composition, sound design, etc.

Csound Vs. Music V — FIGHT! Pt. 3

Thanks to Dr. Victor Lazzarini, I now have a working copy of Music V in my possession. I tinkered with it all weekend to get a better understanding of the system, temporarily frying my brain in the process.

Sampling Rate. Just as I had mentioned in part 2 that the bit depth of the dynamic range was installation specific, the sampling rate of Music V also depended greatly on the hardware on hand.

In Csound, the sampling rate is determined in the header of the orchestra, and can be overridden with a command-line flag. In Music V, the sampling rate is hard-wired into the Fortran code.

The version of Music V provided to me uses a rate of 44.1kHz. I created a modified version that runs at 20kHz, to coincide with the sampling rate specified in The Technology of Computer Music (Mathews, pg. 43). I had to get my hands a little dirty with the Fortran code, though the procedure turned out to be easier than expected — I just replaced all instances of 44100 with 20000.

Sampling rate alone doesn’t paint the full picture, as there is also the control rate to consider. Csound fully supports control rates: signals at a (usually lower) sampling rate, used to control or modulate other unit generators.

Back in the day when computers were *really* slow, and real-time performable digital synths were more fantasy than reality, rendering a short piece of music could take hours. To help reduce this time, control signals were introduced to greatly reduce the number of required computations.

Today, using a secondary, lower sampling rate to control aspects of a digital synth engine or DSP is standard practice. In live situations, control rates allow musicians to run more effects and instruments at any given time than would be possible if everything were computed at the audio rate. So there is good reason why software like Reaktor fully embraces this older, yet highly effective concept.

Unfortunately for Music V, there is no such thing as a control rate.

In the preface of the Csound manual, Barry Vercoe briefly describes how control rates came to be in Csound:

With Music 11 (1973) I took a different tack: the two distinct networks of control and audio signal processing stemmed from my intensive involvement in the preceding years in hardware synthesizer concepts and design. This division has been retained in Csound.