When Does That Happen?

Look at the following code and try to answer these two questions: When is foo() scheduled to play? When is bar() scheduled to play? Be sure to read about the @ scheduler if you haven’t already done so.

@0 x = 0
@1 x = 3

@(5 + x) foo()
@5:
    @x bar()

The answers: foo() is scheduled to play at beat 5, while bar() plays at beat 8.

Why is this?

For foo(), the scheduler expression @(5 + x) is evaluated at beat 0, when x still equals 0, so foo() is locked in for beat 5. This is a single, fixed schedule.

For bar(), there are two separate schedules chained together: @5 and then @x. The @x schedule isn’t evaluated until the @5 schedule is triggered; you might say that the @x schedule is a task of the @5 schedule. By the time @x is evaluated, x equals 3, so bar() plays at beat 5 + 3 = 8.
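The difference between the two strategies can be simulated in plain Python (not slipmat); here the `events` list stands in for the scheduler's queue, and a function stands in for the deferred @5 block:

```python
events = []  # (beat, name) pairs collected by our stand-in "scheduler"

x = 0                          # @0 x = 0

# @(5 + x) foo(): the expression is evaluated right away, at beat 0,
# while x is still 0, so foo() is locked in for beat 5.
events.append((5 + x, "foo"))

x = 3                          # @1 x = 3, well before beat 5 arrives

# @5: @x bar(): the inner @x is deferred; it isn't evaluated until
# the @5 block fires, by which time x equals 3.
def beat_5_block():
    events.append((5 + x, "bar"))

beat_5_block()                 # simulate the @5 schedule firing
print(events)                  # [(5, 'foo'), (8, 'bar')]
```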

Of course, this is all theoretical at the moment.

Patching — An Early Mockup

Being able to patch units together is a fundamental principle of a modular environment. Though I’m far from figuring out what the syntax should look like in my faux music language, I have been writing some mockup code just to get a sense of it.

Just a warning, the following example is ignorant of i/k/a-rates, along with pass-by-reference vs pass-by-value:

import FX
import Master
import Mixer
from Envelope import line
from TestLibrary.Instruments import SineTone

# Create instances of objects and patch together
st = SineTone()                             # Simple sine instrument
reverb = FX.Reverb(input=st.out, time=3.1)  # Reverb unit
mix = Mixer.pan(0, st.out, reverb.out)      # Dry/wet: value, sig 1, sig 2
output = Master.DAC(mix)                    # Main output

# Score
@0 turnon(reverb, mix, output)  # Turn on selected instances

@0 st.play(20, 1, 440)        # Play for 20 seconds, amp = 1, frequency = 440
@0 st.amp *= line(0, 10, 1)   # Amplitude rise
@10 st.amp *= line(1, 10, 0)  # Amplitude fall
@20 mix.pan = line(1, 20, 0)  # Dry to wet over 20 seconds

@10 reverb.time += line(0, 5, 8.1)  # Increase reverb time starting at 10
@20:
    @reverb.time turnoff(reverb, mix, output)  # Turnoff selected instances

The import section loads classes from existing instrument/unit generator libraries.

In the orchestra, instances are created from the imported classes and patched together into a simple instrument graph. There’s a simple sine instrument, which is plugged into a reverb unit. Next, there is the pan mixer, which has the dry sine instrument plugged into one side and the wet reverb signal plugged into the other; it is initially set to 100% dry. The pan mixer is then patched into the output, which sends the audio to the DAC.
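The patching idea itself can be sketched in ordinary Python. The class and port names here (SineTone, Reverb, pan, .out) just mirror the mockup and are not a real API; signals are stood in for by strings so the resulting patch graph is visible:

```python
class SineTone:
    """Hypothetical sine instrument exposing an `out` port."""
    def __init__(self, freq=440):
        self.freq = freq
        self.out = f"sine({freq})"           # stand-in for an audio signal

class Reverb:
    """Hypothetical reverb unit: takes an upstream port as its input."""
    def __init__(self, input, time=3.1):
        self.time = time
        self.out = f"reverb({input}, {time})"

def pan(value, dry, wet):
    """Hypothetical dry/wet mixer: 0 = fully dry, 1 = fully wet."""
    return f"pan({value}, {dry}, {wet})"

st = SineTone()                              # simple sine instrument
reverb = Reverb(input=st.out, time=3.1)      # sine plugged into reverb
mix = pan(0, st.out, reverb.out)             # initially 100% dry
print(mix)  # pan(0, sine(440), reverb(sine(440), 3.1))
```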

The first line in the score turns on three units: reverb, mix and output. There will be times when a unit’s duration is unknown in advance, so the ability to start and stop machines is a must.

The sine instrument starts with a duration of 20, an amplitude of 1 and a frequency of 440. The amplitude of the sine is modulated by two envelopes, creating a rise/fall shape. Line envelopes also modulate the dry/wet mixer and reverb time.
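Assuming the arguments are (start, duration, end), which matches the comments in the score, a line envelope could be sketched in plain Python as a function of time that holds its end value once the duration has elapsed:

```python
def line(start, dur, end):
    """Hypothetical line envelope: linear ramp from `start` to `end`
    over `dur` beats, holding `end` afterwards."""
    def at(t):
        if t >= dur:
            return end
        return start + (end - start) * (t / dur)
    return at

rise = line(0, 10, 1)                 # amplitude rise, as in the score
print(rise(0), rise(5), rise(10))     # 0.0 0.5 1
```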

At the end, the turnoff function shuts down reverb, mix and output.

Bonus round: Why do you suppose I wrote,

@20:
    @reverb.time turnoff(reverb, mix, output)  # Turnoff selected instances

instead of this?

@(20 + reverb.time) turnoff(reverb, mix, output)  # Turnoff selected instances

On the Importance of Docstrings

[Image: sine_arp visual object]

Docstrings can do wondrous things. Wikipedia describes a docstring as “a string literal specified in source code that is used, like a comment, to document a specific segment of code.” I’ve rewritten sine_arp() to demonstrate a theoretical docstring example:

def sine_arp(dur, amp, pitch, lfo_freq):
    '''Generates an arpeggiated sine tone.
    
    dur - Duration
    amp - Amplitude
    pitch - Pitch, in pitch-class format
    lfo_freq - Frequency of lfo arpeggiator    
    output - Audio signal
    '''
    
    self['dur'] = dur                            # Set life of instance
    notes = [0, 3, 7, 8]                         # List of half-steps
    arp = Wavetable.osc(1, lfo_freq, notes)      # Cycle through note list
    pitch = cpspch(pitch) * 2 ** (arp / 12.0)    # Modulate pitch with arp    
    osc = Wavetable.osc(amp, pitch, sine(1000))  # Generate audio
    output osc                                   # Return audio

The docstring is the string block between the matching triple quotes. It gives a basic description of what the function/opcode/unit generator does and descriptions for the inputs and output.

What are the advantages of building docstring capabilities directly into a computer music language?

One. A proper description of a function and its interface will allow other users to import and reuse code with ease, propagating a remix culture within the community.

Two. With a utility like the Sphinx Python Documentation Generator, complete documentation can be auto-generated in the form of HTML, PDF, LaTeX, etc. This gives users the opportunity to browse a library of synths, patterns and note generators without ever having to read the code.

Three. They can provide interactive help from within an integrated development environment. For example, if your cursor is resting in the middle of a function, the description can automatically be displayed from somewhere within the IDE.

Four. Imagine if a visual GUI environment, such as Max or PD or Reaktor, were built on top of slipmat. Docstring data could automatically be relayed to the visual object, as seen in the picture above. Furthermore, I have a hunch that if slipmat is designed properly, then all text function definitions could be visual objects, and vice versa, without modification.
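Python’s own docstrings already work the way point three describes: the string literal becomes the function’s __doc__ attribute, which help() and IDEs read back. A sketch using the sine_arp docstring from above (the function body is just the docstring here):

```python
import inspect

def sine_arp(dur, amp, pitch, lfo_freq):
    """Generates an arpeggiated sine tone.

    dur - Duration
    amp - Amplitude
    pitch - Pitch, in pitch-class format
    lfo_freq - Frequency of lfo arpeggiator
    """

# What an IDE or help() would surface for the symbol under the cursor:
doc = inspect.getdoc(sine_arp)
print(doc.splitlines()[0])  # Generates an arpeggiated sine tone.
```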

Question: Are there any other music languages that utilize a docstring-like system? If so, I want to study them.

Coding in Time with the @ Scheduler

One idea I have for my theoretical computer music language is having scheduling built right into the syntax, with the hopes that it will add the right balance of functionality and clarity.

I like the idea of having a score language separate from the orchestra language, though I’ve learned over the years that this approach can act as a bottleneck. The @ scheduler is a potential solution for bringing the two together without losing the purpose of the score.

Instead of going into great detail on how the @ scheduler might work, I’ll just present the following four examples.

Example 1 — Nested Time:

do_something()     # Do something at beat 0, (@0 assumed)
@2 do_something()  # Do something at beat 2

@5:
    do_something()     # Do something at beat 5: 5 + 0, (@0 assumed)
    @3 do_something()  # Do something at beat 8: 5 + 3
    
    @4:                    # Block starts at beat 9: 5 + 4
        do_something()     # Do something at beat 9: 5 + 4 + 0, (@0 assumed)
        @1 do_something()  # Do something at beat 10: 5 + 4 + 1
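The nested-time arithmetic can be approximated in plain Python with a context manager as a hypothetical stand-in for an `@n:` block, keeping a running offset for the nesting:

```python
from contextlib import contextmanager

schedule = []      # absolute beats collected by the "score"
_offset = [0.0]    # stack of nested @-block offsets

@contextmanager
def at(beat):
    """Hypothetical stand-in for an `@beat:` block."""
    _offset.append(_offset[-1] + beat)
    yield
    _offset.pop()

def do_something(beat=0):
    schedule.append(_offset[-1] + beat)

do_something()          # beat 0, (@0 assumed)
do_something(2)         # beat 2

with at(5):
    do_something()      # beat 5: 5 + 0
    do_something(3)     # beat 8: 5 + 3

    with at(4):
        do_something()  # beat 9: 5 + 4 + 0
        do_something(1) # beat 10: 5 + 4 + 1

print(schedule)         # [0.0, 2.0, 5.0, 8.0, 9.0, 10.0]
```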

Example 2 — Changing values mid-event:

def foo():
    freq = 440                                 # Initial frequency
    @1 freq *= 2                               # Frequency doubles at time 1
    output Wavetable.osc(1, freq, sine(8192))  # Output signal

Example 3 — Scheduler error:

def foo():
    @1 freq = 440
    output Wavetable.osc(1, freq, sine(8192))  # Broken, freq doesn't exist

Example 4 — Organized score + generated events:

def hat_eights():
    for i in range(0, 8):
        @(i / 2.0) hat()

@0:
    hat_eights()
    @0 kick()
    @1 snare()
    @2 kick()
    @3 snare()
    
@4:
    hat_eights()
    @0 kick()
    @1 snare()
    @2 kick()
    @2.5 kick()
    @3 snare()

That last example reminds me of Max Mathews’ Radio Baton Conductor language.
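Example 4 maps cleanly onto ordinary list-building. Here is a plain-Python sketch where events are just (beat, name) pairs, hat_eights() generates its own relative times, and a hypothetical bar() helper plays the role of the `@0:` and `@4:` blocks:

```python
def hat_eights(start=0):
    """Eight hi-hats, one every half beat, offset by `start`."""
    return [(start + i / 2.0, "hat") for i in range(8)]

def bar(start, backbeat):
    """One bar: kick/snare hits (relative beats) plus eighth-note hats."""
    return hat_eights(start) + [(start + b, name) for b, name in backbeat]

score = sorted(
    bar(0, [(0, "kick"), (1, "snare"), (2, "kick"), (3, "snare")]) +
    bar(4, [(0, "kick"), (1, "snare"), (2, "kick"), (2.5, "kick"),
            (3, "snare")])
)
# 25 events in all; the final hat lands on beat 7.5
```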

Importing Modules and Reusing Code

I want to begin discussing the implications of yesterday’s Python-Csound mockup code (which I’ll refer to as slipmat for the time being), starting with imports:

import Wavetable
from Gen import sine
from Pitch import cpspch

All of Csound’s 1400+ opcodes are available at all times. Great for convenience, perhaps not so great for organization. In contrast, the Python language starts out with only the basics, a clean slate. To extend functionality, users import modules. This is a cleaner approach than having it all hang out. There are some other advantages, too.
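As a sketch of what one of these modules might contain, here is a hypothetical Python version of cpspch, following Csound’s octave.pitch-class convention (8.00 is middle C, 8.09 the A above it) under equal temperament:

```python
def cpspch(pch, a4=440.0):
    """Octave.pitch-class to cycles per second, modeled on Csound's
    cpspch: 8.00 = middle C, 8.09 = A above middle C = 440 Hz."""
    octave = int(pch)
    pitch_class = round((pch - octave) * 100)  # digits after the decimal
    octaves_from_a4 = (octave + pitch_class / 12.0) - (8 + 9 / 12.0)
    return a4 * 2.0 ** octaves_from_a4

print(round(cpspch(8.09), 1))  # 440.0
print(round(cpspch(8.00), 2))  # 261.63
```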

First, let’s look at a hypothetical import block. Let’s say you were to design a “computer network music” ensemble inspired by The Hub. Some communication modules you might include:

import Jack
import MIDI
import Network
import OSC

A computer network music ensemble sounds like it might be a complex piece of software. Complex enough where doing all your work in one file would be tedious. So you decide to start a new file, my_network.slip, where you store your own custom opcode/unit generator function definitions. In your main file, you write this to import:

import my_network

Not only can you use my_network for this project, but that code can be reused in any number of future projects. Code reusability is a beautiful thing. In fact, this would apply to any properly written slipmat document. For example, a composition would double as a library of synthesizers that you could plug into your own work:

import Trapped  # Trapped in Convert by Dr. Richard Boulanger
...
signal = Trapped.blue(22.13, 4, 0, 9.01, 600, 0.5, 20, 6, 0.66)

See trapped.csd.

What if Python DNA was Injected into Csound

Last year, I finally weaned myself completely off of Perl and learned Python in its place. Colors have never been brighter. There is such an elegance to Python, and I would love to see this in a computer music language.

The following mockup code is what you would get if you combined the awesome powers of Csound with the beauty of Python. I’m taking some liberties. The example is ignorant of i, k and a-rate. Instead of an orchestra/score pair, everything is combined in one file. And I’m introducing a concept for scheduling events, @, which tells when to do something.

#!/usr/bin/env slipmat

import Wavetable
from Gen import sine
from Pitch import cpspch

def sine_arp(dur, amp, pitch, lfo_freq):
    this['dur'] = dur    
    pitch = cpspch(pitch)

    notes = [0, 3, 7, 8]
    arp = Wavetable.osc(1, lfo_freq, notes)
    freq = pitch + pitch * 2 ** (arp / 12.0)
    
    osc = Wavetable.osc(amp, freq, sine(1000))
    
    output osc

def ping(amp=1.0, freq=262):
    this['dur'] = 0.5
    
    output Wavetable.osc(amp, freq, sine(32))
    
if __name__ == '__main__':
    def harmonic_pattern(freq=100):
        @0 ping(1.0, freq)
        @1 ping(0.8, freq * 2)
        @2 ping(0.6, freq * 3)
        @3 ping(0.4, freq * 4)
        @4 ping(0.2, freq * 5)
        
    sine_arp(4, 0.5, 8.00, 1 / 4)
    @4 sine_arp(4, 0.5, 7.07, 1 / 4)
    
    @8 harmonic_pattern()
    @13 harmonic_pattern(cpspch(7.00))
    
    @18 ping()

I’ll come back with a commented version of this in a few days. Though unlikely, it is my hope that some of you will take the time to figure out what is going on. My philosophy is that code should be human readable, and that syntax can help reinforce this.

What if Python DNA was Injected into Csound

I started a new computer music blog called Slipmat, which will cover broad topics related to musical language design, injected with my own personal theories and philosophies, rather than focusing on any particular language. On occasion, and only if appropriate, I’ll cross-post between The Csound Blog and Slipmat. For example, this:

What if Python DNA was Injected into Csound

The following mockup code is what you would get if you combined the awesome powers of Csound with the beauty of Python. I’m taking some liberties. The example is ignorant of i, k and a-rate. Instead of an orchestra/score pair, everything is combined in one file. And I’m introducing a concept for scheduling events, @, which tells when to do something.

A Computer Music Language Design Scratch Pad

If I designed a computer music language, what would it be like?

I’ve always wanted to get my hands dirty by writing a simple computer music language, and have picked up many ideas over the years that I would love to implement.

I do a lot of Csounding, I have written and performed with SuperCollider, experimented with ChucK, and even spent time with SAOL. And Max; even though it’s visual, I still consider it a computer music language. Not that this directly relates to music, but I’ve also generated visuals with Processing. Aside from art languages, I’ve spent time with Perl, Python, Java and C.

Since I don’t have time in the near future to actually begin this project, I’m instead going to blog about any random thoughts I have on computer music language design.

Stay tuned.