Csound Journal — Issue 12

Issue 12 of the Csound Journal is now online, just in time for the holidays. Special thanks to editors James Hearon and Steven Yi for putting together another great issue.

Articles

140 Characters

Inspired by the article Hear Free Generative Music, in Archaic Twitter Haiku, made with SuperCollider at CDM, as well as this Csound Mailing List thread, I had to give 140 characters or less a try.

Listen: 140.mp3
Download: 140.csd

In order to make it work in 140 characters or less, I had to cheat. The minimum size for a CSD file in Csound is 109 characters, so I only count the characters embedded in the orchestra and the score. It also didn’t tweet very well. :)

Here’s what the code looks like, or at least how it would look if the tweet didn’t chew it up:

instr 1
a2 expon 1,p3,.0001
a1 oscils 8000,88*(p4%9+5),1
out a1*a2
if p2<60 then
event_i "i",1,rnd(.6)+.1,4,p4+rnd(2)
endif
endin
i 1 0 1 8

To all of the artists involved with sc140, superb job!

Making Records

Not of the vinyl variety, but the computer science data structure.

Csound comes with a few ways of generating, storing and retrieving data, with function tables being the primary data structure. Data can also be placed into global variables and chn busses. In terms of data abstractions, there isn’t a whole lot of choice. However, custom data abstractions and structures can be built from within a Csound orchestra.

In my blog post Simple Keys — Modularized Key Mapping, I toyed with storing multiple key bindings, frequencies, amplitudes and function table numbers within a single ftable, treating the ftable as if it were an array of records, albeit a fairly clumsy one. Over the weekend, I continued along this line and distilled it even further.

Download: simple_record.csd

So far, I’m very happy with the results. Though the implementation may be a bit hackish, the user interface isn’t bad. This new system of making records is composed of five user-defined opcodes: RecordType, RecordField, Record, RecordSet and RecordGet. I additionally wrote wrapper GEN instruments so they could be accessed from the score.

The first two, RecordType and RecordField, are used to create record abstractions. RecordType creates a new record type, while RecordField appends data fields to the record type. A record type is built from an ftable, and thus shares the function table number space. The following score snippet creates a record type as ftable 100, and appends three fields: amp, freq and ftable.

i $RecordType  0 1 100           ; Record type stored at fn 100
i $RecordField 0 1 100 "amp"     ; Append field "amp"
i $RecordField 0 1 100 "freq"    ; Append field "freq"
i $RecordField 0 1 100 "ftable"  ; Append field "ftable"

For a record type to be useful, an instance of it must be created with Record. A record is also an ftable, and requires its own unique function table index. The following line creates a record from record type 100, and stores it in ftable 200:

i $Record 0 1 200 100  ; Instantiate record 200 from type 100

Once a record is created, the values of each field can be set with RecordSet:

i $RecordSet 0 1 200 "amp"    0.5  ; Set field "amp"
i $RecordSet 0 1 200 "freq"   440  ; Set field "freq"
i $RecordSet 0 1 200 "ftable" 1    ; Set field "ftable"

That’s just one record created from a single record type. In the csd, you’ll see I create two more records from the same record type.
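If it helps to see the idea outside of Csound, here is a rough Python sketch of the same abstraction. The function names mirror the UDOs above, but the dict-of-tables layout is my own illustration, not the actual ftable internals:

```python
# Illustrative sketch (Python) of the record abstraction described above.
# "Tables" are emulated as a dict keyed by function table number.

tables = {}

def record_type(fn):
    # A record type is just a table holding an ordered list of field names
    tables[fn] = []

def record_field(type_fn, name):
    tables[type_fn].append(name)

def record(fn, type_fn):
    # An instance allocates one slot per field, tagged with its type table
    tables[fn] = {"type": type_fn, "data": [0.0] * len(tables[type_fn])}

def record_set(fn, name, value):
    fields = tables[tables[fn]["type"]]
    tables[fn]["data"][fields.index(name)] = value

def record_get(fn, name):
    fields = tables[tables[fn]["type"]]
    return tables[fn]["data"][fields.index(name)]

# Mirrors the score snippets: type 100 with three fields, instance 200
record_type(100)
for f in ("amp", "freq", "ftable"):
    record_field(100, f)
record(200, 100)
record_set(200, "amp", 0.5)
record_set(200, "freq", 440)
record_set(200, "ftable", 1)
print(record_get(200, "freq"))  # → 440
```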

Example

The Synth instrument is provided to show a simple example of how to use a record. Instead of passing amplitude, frequency and the function number of a stored single-cycle wave as pfield data, a record number is passed to Synth with pfield 4. The user-defined opcode RecordGet is used to pull data from the record. The data is then passed to oscil.

instr $Synth
    irecord = p4  ; Record function number
    iamp RecordGet "amp", irecord        ; Get amplitude from record
    ifreq RecordGet "freq", irecord      ; Get frequency from record
    iftable RecordGet "ftable", irecord  ; Get ftable from record
    a1 oscil iamp, ifreq, iftable, -1
    out a1
endin
...
i $Synth 0 1 200  ; Feed synth record 200

More on all of this later.

Simple Keys — Modularized Key Mapping

Csound can read real-time input from an ASCII keyboard with the sensekey opcode. This is a convenient solution for doing things such as emulating a musical keyboard, or triggering loops.

Since I’m always looking for ways of modularizing code, I came up with a solution that allows me to map ASCII keys using i-events instead of hard coding bindings directly to an instrument. My solution still needs a little more work, though I’m confident that the final product will be fairly elegant. Well, as elegant as things get with Csound.

You can give it a try by downloading and running simple_keys.csd. It’s a one-octave sinusoidal piano that uses the following key map (lowercase only):

 s d  g h j
z x c v b n m ,

Technical Overview

Instead of explicitly mapping each key from within an instrument, I used a little Csound ftable trickery so that keys can be bound with score events. A new key map is created like this with an i-event inside the orchestra:

event_i "i", $NewKeyMap, 0, 1, i_z, cpspch(8.00), 2

The score equivalent would be:

i $NewKeyMap 0 1 122 261.62556 2

The i-variable i_z holds the value of the ASCII code for the letter “z”, which is 122. The fifth parameter is the frequency that is to be associated with “z”. The last parameter is the ftable number of a stored single cycle wave.

When instrument NewKeyMap is called, it appends these parameters to an f-table. This f-table, called record_table, acts as an array of records, where each record stores an ASCII key code, frequency and f-table number.

The Listen instrument waits and listens for key presses. Whenever a key is pressed, the ASCII code of that key is checked against every record inside of record_table. If a match is found, then an event to instrument Synth is generated. The frequency and ftable number from the record are passed along with the event as pfields.

I like this approach because I can reuse these three instruments in many ways without modification. For example, I could design a microtonal version of the keyboard just by creating a different set of events to instrument NewKeyMap, and I could use different timbres by generating different wave tables.
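The lookup loop in Listen can be sketched in a few lines of Python. The list of tuples stands in for the record_table ftable, and the names are my own illustration of the logic, not the csd's code:

```python
# Illustrative sketch (Python) of the key-lookup logic described above.
# record_table mimics the ftable of (ascii, freq, ftable) records; the
# scan is what the Listen instrument does on each key press.

record_table = []

def new_key_map(ascii_code, freq, ftable):
    record_table.append((ascii_code, freq, ftable))

def on_key(ascii_code):
    # Check the key against every record; fire a Synth event on a match
    for code, freq, ftable in record_table:
        if code == ascii_code:
            return ("Synth", freq, ftable)  # freq/ftable become pfields
    return None

new_key_map(122, 261.62556, 2)  # 'z' -> middle C, wave stored in ftable 2
print(on_key(ord("z")))  # → ('Synth', 261.62556, 2)
```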

Looper Prototype

I built a performable looper prototype this past Sunday night. At this point, all it can do is start and stop two loops and a metronome. Rough around the edges is an understatement, though there are a few points of interest worth mentioning.

Rather than use Csound’s built-in tempo engine, I built one from a phasor. Players can trigger, retrigger and stop loops from the ASCII keyboard. Triggering is also quantized to the beat, similar to how clips are handled in Ableton Live.
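The quantize-to-the-beat idea can be sketched outside Csound. Here is a rough Python version, where the function name and the seconds-based beat grid are my own illustration rather than the csd's phasor code:

```python
import math

# Illustrative sketch (Python): quantizing a trigger to the next beat,
# the way the looper defers a keypress until the phasor wraps around.

def next_beat(t, bpm):
    beat = 60.0 / bpm           # beat length in seconds
    return math.ceil(t / beat) * beat

print(next_beat(1.3, 120))  # a trigger at 1.3 s fires on the beat at 1.5 s
```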

All in all, the whole thing came together in about two hours, plus another hour to tidy things up. Sure, there are some glaring flaws to the current design, but these things will improve with revision. For example, only two loops are supported, and they are hardwired into the engine. Samples and their mappings need greater levels of flexibility if this thing is going to be useful to anyone.

If you’re curious to try it out, download looper_prototype.csd. You will also need two samples from the OLPC sound library. The first is “110 Kool Skool II.wav” by BT, included in BT44.zip. The second is “BNGABetterArpeggio01.wav” by Flavio Gaete, included in FlavioGaete44.zip. Just drop these two samples into the same folder as the csd before running. You can replace these with your own loops; just make sure that you also update the values of the tempo loop macros, right above where the samples are loaded in the orchestra.

Here are the keys (lowercase only):

a – trigger loop a
z – stop loop a
s – trigger loop s
x – stop loop s
d – turn on metronome
c – turn off metronome

I’ll post a newer version once improvements to the design have been made.

Deep Synth — Dynamically Generated Oscillators

The situation — You want an instrument that can play any number of oscillators, determined by a p-field value in the score. The problem — Unit generators cannot be dynamically created in an instrument with a simple loop. One possible solution — Multiple events can be generated in a loop, with each event triggering an oscillator-based instrument.

Download: Deep_Synth.csd
Listen: Deep_Synth.mp3

The Csound file Deep_Synth.csd provides an example of how to dynamically generate oscillators using the compound instrument technique. A compound instrument is two or more instruments that operate as a single functioning unit. This particular compound instrument is built from two instruments: DeepSynth and SynthEngine. SynthEngine is, you guessed it, the synth engine, while DeepSynth is a player instrument that generates multiple events for SynthEngine using the opcodes loop_lt and event_i:

i_index = 0
loop_start:
    ...
    event_i "i", $SynthEngine, 0, idur, iamp, ipch, iattack, idecay, ipan,
            irange, icps_min, icps_max, ifn
loop_lt i_index, 1, ivoices, loop_start

If you are wondering why we can’t just place a unit generator, such as oscil, inside of a loop, read Steven Yi’s articles Control Flow Pt I and Pt II. Pay special attention to the section IV. Recursion – Technical Explanation near the end of Pt. II. Not only does Mr. Yi do an excellent job of explaining the technical reasons, but he also provides another applicable solution for creating multiple unit generator instances using recursion and user-defined opcodes.

Sound Design

The instrument SynthEngine uses a single wavetable oscillator, an amplitude envelope and the jitter opcode to randomly modulate frequency. A single instance of DeepSynth can generate multiple instances of SynthEngine. DeepSynth can generate a single instance, or 10,000+. Users have control over the depth of frequency modulation, as well as the rate at which jitter ramps from one random value to the next. Panning between instances of SynthEngine is evenly distributed.
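For the even panning, here is a quick Python sketch of one way to spread N voices across the stereo field. The 0..1 pan range and the formula are my assumptions, not the exact code in the csd:

```python
# Illustrative sketch (Python): evenly distributing pan positions across
# N voices, as DeepSynth does for its SynthEngine instances.

def pan_positions(n):
    if n == 1:
        return [0.5]  # a single voice sits in the center
    return [i / (n - 1) for i in range(n)]

print(pan_positions(5))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```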

“Turn it up!” – Abe Simpson

The name DeepSynth is an homage to Dr. James A. Moorer‘s piece Deep Note, also known as the infamous THX Logo Theme. Very early in the design, it became evident that DeepSynth is capable of making very Deep Note-like drones, as it utilizes some of the defining techniques from Dr. Moorer’s piece.

I highly recommend reading Recreating the THX Deep Note by Batuhan Bozkurt at EarSlap. The author conveniently walks readers through each step of the process, providing both audio and SuperCollider code examples. If you have ever yearned to create that amazing sound for yourself, here’s your opportunity.

Sensing Angels

Sensing Angels

by Charles Gran
in three movements
for solo clarinet and electronics with improvisation

Performed by Jesse Krebs
Recorded Sunday, November 8, 2009

…Sensing Angels was conceived as a solo piece in which the acoustic sound of the clarinet would be manipulated in real-time by a computer. Today, this isn’t so remarkable and can be achieved by a variety of means. For the project I chose to use Csound primarily as a challenge, but also for a certain kind of historical connection.

If the clarinet is an instrument with history so is Csound. A direct descendant of software created at Bell Labs in the 1950s[2], many would consider it antique in the same way the clarinet might be seen. Of course, it turns out they are both quite modern. Like most things, modernity is about use. Today, the use of a command-line interface and computer code for the creation of music is as much a point-of-view as a practicality…

Listen

Normalize Your Render

Here’s a somewhat controversial hack that lets you normalize a Csound composition during the rendering process, scaling the amplitudes using Csound’s 32-bit or 64-bit resolution.

This hack requires two renders. The first time you render a composition, look for the overall amps value at the end of the output message. It will look something like this:

end of score.           overall amps:  0.64963

Next, you need to calculate the +3dB value above the overall amps value. Here is a function that does the trick, where x is overall amps:

f(x) = 10 ** (3.0 / 20.0) * x

Or if you want to skip most of the math involved:

f(x) = 1.4125375446227544 * x

Plug 0.64963 into x, and you get 0.91762676511328001. Set 0dbfs to this:

sr = 44100
kr = 4410
ksmps = 10
nchnls = 1
;0dbfs = 1.0
0dbfs = 0.91762676511328001  ; 3dB normalization

Render again. Done.
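For convenience, the calculation above can be wrapped up as runnable Python:

```python
# The +3dB normalization math from above, as a small helper function.

def plus_3db(overall_amps):
    # 3 dB above the measured peak: 10 ** (3/20) ≈ 1.4125375446227544
    return 10 ** (3.0 / 20.0) * overall_amps

# Plugging in the overall amps from the example render
print(plus_3db(0.64963))  # prints approximately 0.91762676511328
```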

This is a Hack

Though I do love this technique, I recommend using it only in a few situations, and certainly advise against making a habit of it.

I like to use this when I’m finished with a composition and I’m ready to make a master audio file. It allows me to squeeze out a tiny bit of extra sound quality by scaling the audio inside csound64’s internal 64-bit float resolution, which, in theory, does a better job than performing the same process on a rendered file of a lower bit resolution.

This can also be a last-ditch effort for fixing problems with a final project that is due in 15 minutes. You have some minor clipping and no time to globally adjust all your amplitudes? This could save you in a pinch.

Warning!!

This only works with absolute amplitudes. If there are any relative values or opcodes in play, then this technique will fail. For example, these two lines of instrument code will not work:

a1 oscils p4 * 0dbfs, 440, 0  ; Scales the amplitude relative to 0dbfs
a2 diskin2 "foo.wav", 1       ; Scales the sample relative to 0dbfs

There are some other risks involved. If random elements exist in the composition, then the overall amps may vary between renders, which could lead to inaccurate normalization. I also highly suspect that the overall amps value printed at the end of the render is truncated, and thus not pinpoint accurate, though it is close enough for most situations.

Are you setting 0dbfs to 1.0?

Csound’s default output range is +/- 32767. Setting amplitudes with these numbers is, more or less, the hard way. The easy way is to use a normalized range of +/- 1. You can alter Csound’s output range with the 0dbfs header statement by placing it beneath the standard orchestra header, like this:

sr = 44100
kr = 4410
ksmps = 10
nchnls = 1
0dbfs = 1.0

The issue with the default 16-bit range is that it makes little sense in a world of multiple bit depths (8, 16, 24, 32, 64, etc.). If one is rendering a 16-bit file, an argument could be made in favor of the default range, since there is a one-to-one mapping of input to output values. However, once one leaves the realm of 16-bit, values of +/- 32767 become arbitrary. A normalized range, on the other hand, is not married to any single bit depth, and translates well to other resolutions.
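A quick Python sketch illustrates why the normalized range travels well between bit depths. The function name and the round-to-integer choice are mine, for illustration only:

```python
# Illustrative sketch (Python): a normalized sample translates to any
# target bit depth with one multiply, while +/- 32767 only maps cleanly
# to 16-bit output.

def to_int_fullscale(x, bits):
    # Scale a sample in -1..1 to the integer range of the target depth
    return round(x * (2 ** (bits - 1) - 1))

print(to_int_fullscale(0.5, 16))  # → 16384 (half of 16-bit full scale)
```

The same call with bits=24 or bits=32 works unchanged, which is the point: the composition's amplitudes never need to know the output format.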

Besides being easier to compose and design instruments with, there are other practical reasons to adopt this good programming practice. The Csound manual says, “Using 0dbfs=1 is in accordance to industry practice, as ranges from -1 to 1 are used in most commercial plugin formats and in most other synthesis systems like Pure Data.” If you ever use Csound in conjunction with Pd, Max/MSP, etc., this is the range you will use.

Make it a habit. Start every new orchestra as if 0dbfs is every bit as important as sr, kr, ksmps and nchnls. And always set it to 1 (there are exceptions).

In case you are wondering why more orchestras out in the wild haven’t adopted this practice: 0dbfs first came into play in Csound version 4.10 in 2002, and most of the existing knowledge base, such as books and tutorials, was written prior to that.

Patterns, Gestures and Behaviors

Think of an electric guitar as if it were a Csound instrument, metaphorically speaking. Potential ways in which someone can interact with this guitar include: pick, fingers, ebow, power drill, slide, capo, etc. While the guitar is always a guitar, the output changes with the way a person interfaces with it. This concept is every bit as true for digital instruments as for physical ones.

The original Splice and Stutter came with a single loop-based sample engine, aptly named SampleEngine. This engine is played with three interface instruments (Basic, Stutter and Random), each with its own unique musical behavior. Today’s example leaves SampleEngine exactly as is, and continues to demonstrate interface versatility with three new instruments: RandomPhrase, Swell and Flam.

The designs of these new instruments were influenced by the original Splice and Stutter score. After I had completed the demo, I noticed some gestures/phrases/effects that were translatable into instrument behaviors.

Download: splice_and_stutter_v2.csd. BT Sample Pack (13.2 MB)

RandomPhrase

If you look at the end of the score in the original splice_and_stutter.csd, you’ll see a 32-line phrase using the Random instrument. With the exception of the start times, the p-fields for each line are identical.

i $Random 32.25 1 0.25 0.333 100
i $Random +     . .    .     .
...
i $Random +     . .    .     .

That is a pattern, and patterns are translatable into behaviors. RandomPhrase achieves this by generating multiple events over an interval of time, specified in p-field 4, with events spaced evenly apart by the value in p-field 5.

i $RandomPhrase 32.25 1 32 1 0.25 0.2 100

In the new score, the original 32 lines of code are consolidated into a single call to RandomPhrase. This means less code to maintain, while giving the loop-based sampler a new behavior for me to play with.
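The expansion that RandomPhrase performs can be sketched in Python. The parameter names and the exact p-field layout here are mine; the csd's call carries additional p-fields that are passed through to Random:

```python
# Illustrative sketch (Python) of what RandomPhrase does: expand one call
# into evenly spaced events over an interval.

def random_phrase(start, interval, spacing):
    # One ("Random", start_time) event per spacing step across the interval
    n = int(interval / spacing)
    return [("Random", start + i * spacing) for i in range(n)]

events = random_phrase(32.25, 8.0, 0.25)
print(len(events))  # → 32, matching the 32-line phrase it replaces
```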

Important note. RandomPhrase generates events for instrument Random, which generates events for instrument SampleEngine, which produces the sound. You can create new interfaces out of other interfaces.

Swell

Another pattern revealed itself with this gesture:

i $Stutter 7    1 0.25 0.5     100 12 [1 / 12]
i $Stutter 7.25 1 0.25 0.25    100 .  [1 / 12]
i $Stutter 7.5  1 0.25 0.125   100 .  [1 / 12]
i $Stutter 7.75 1 0.25 0.0625  100 .  [1 / 12]

Unlike the randomly generated notes of the previous example, p-field column 5 (amplitude) uses a different value for each note event in this phrase. Upon closer inspection, these values themselves form a pattern: each successive amplitude is halved. This p-field pattern, and others like it, can be translated into a behavioral instrument.

The Swell instrument lets users specify a multiplier to change the amplitude values for each successive note in p-field 6. What’s good for amplitude is good for other things, so I applied the same basic principle to the stutter window, expanding the usefulness of the instrument. A swell gesture looks like this in the new score:

i $Swell 7 1 1 0.5 0.5 0.25 12 100 [1 / 12] 1
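The amplitude series that Swell produces can be sketched in one line of Python. The function name is mine; only the halving idea comes from the gesture above:

```python
# Illustrative sketch (Python) of the Swell idea: each successive note's
# amplitude is the previous one times a multiplier (0.5 halves it).

def swell_amps(start_amp, multiplier, notes):
    return [start_amp * multiplier ** i for i in range(notes)]

print(swell_amps(0.5, 0.5, 4))  # → [0.5, 0.25, 0.125, 0.0625]
```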

Flam

In the original score code, I created a flam effect with two Basic events:

i $Basic 11    1 0.5 0.6 100 7
i $Basic 11.02 1 0.5 0.2 100 1

One could miss the intention of these two lines while reading the score. By dedicating an instrument to the flam, the score becomes easier to read. A flam effect is also musically interesting enough to justify an instrument of its own.

i $Flam 11 1 0.25 0.3 100 7 0.02 2 0