Drag and drop Csound in Ableton Live from Enrico de Trizio on Vimeo.
Dr. Richard Boulanger’s Bohlen-Pierce Csound instruments played in Ableton Live with Enrico de Trizio’s Csound drag and drop Max Instrument. (via Piscoff’s twitter stream)

Docstrings can do wondrous things. Wikipedia describes a docstring as “a string literal specified in source code that is used, like a comment, to document a specific segment of code.” I’ve rewritten sine_arp() to demonstrate a theoretical docstring example:
def sine_arp(dur, amp, pitch, lfo_freq):
    '''Generates an arpeggiated sine tone.

    dur - Duration
    amp - Amp
    pitch - Pitch, in pitch-class format
    lfo_freq - Frequency of lfo arpeggiator
    output - Audio signal
    '''
    self['dur'] = dur                            # Set life of instance
    notes = [0, 3, 7, 8]                         # List of half-steps
    arp = Wavetable.osc(1, lfo_freq, notes)      # Cycle through note list
    pitch = cpspch(pitch) * 2 ** (arp / 12.0)    # Modulate pitch with arp
    osc = Wavetable.osc(amp, pitch, sine(1000))  # Generate audio
    output osc                                   # Return audio
The docstring is the string block between the matching triple quotes. It gives a basic description of what the function/opcode/unit generator does and descriptions for the inputs and output.
What are the advantages of building docstring capabilities directly into a computer music language?
One. A proper description of a function and its interface will allow other users to import and reuse code with ease, propagating a remix culture within the community.
Two. With a utility like the Sphinx Python Documentation Generator, complete documentation can be auto-generated in the form of HTML, PDF, LaTeX, etc. This gives users the opportunity to browse a library of synths, patterns, and note generators without ever having to read the code.
Three. They can provide interactive help from within an integrated development environment. For example, if your cursor is resting in the middle of a function, the description can automatically be displayed from somewhere within the IDE.
Four. Imagine if a visual GUI environment, such as Max or PD or Reaktor, were built on top of slipmat. Docstring data could automatically be relayed to the visual object, as seen in the picture above. Furthermore, I have a hunch that if slipmat is designed properly, then all text function definitions could be visual objects, and vice versa, without modification.
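Points three and four come almost for free in Python itself, because a docstring is just runtime data attached to the function object. Here is a minimal sketch using the sine_arp() signature from above, with the synthesis body stubbed out, showing how an IDE, doc generator, or visual patcher could read the description programmatically:

```python
def sine_arp(dur, amp, pitch, lfo_freq):
    '''Generates an arpeggiated sine tone.

    dur - Duration
    amp - Amp
    pitch - Pitch, in pitch-class format
    lfo_freq - Frequency of lfo arpeggiator
    output - Audio signal
    '''
    pass  # synthesis body omitted; only the docstring matters here

# The docstring is an ordinary attribute, available for introspection:
summary = sine_arp.__doc__.strip().splitlines()[0]
print(summary)  # Generates an arpeggiated sine tone.
```

This is exactly the mechanism tools like Sphinx and Python's built-in help() rely on; a music language with first-class docstrings could expose the same hook to GUIs.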
Question: Are there any other music languages that utilize a docstring-like system? If so, I want to study them.
One idea I have for my theoretical computer music language is having scheduling built right into the syntax, with the hopes that it will add the right balance of functionality and clarity.
I like the idea of having a score language separate from the orchestra language, though I’ve learned over the years that this approach acts as a bottleneck. The @ scheduler is a potential solution for bringing the two together, without losing the purpose of the score.
Instead of going into great detail on how the @ scheduler might work, I’ll just present the following four examples.
Example 1 — Nested Time:
do_something()            # Do something at beat 0, (@0 assumed)
@2 do_something()         # Do something at beat 2
@5:
    do_something()        # Do something at beat 5: 5 + 0, (@0 assumed)
    @3 do_something()     # Do something at beat 8: 5 + 3
    @4:                   # Block starts at beat 9: 5 + 4
        do_something()    # Do something at beat 9: 5 + 4 + 0, (@0 assumed)
        @1 do_something() # Do something at beat 10: 5 + 4 + 1
Example 2 — Changing values mid-event:
def foo():
    freq = 440                                 # Initial frequency
    @1 freq *= 2                               # Frequency doubles at time 1
    output Wavetable.osc(1, freq, sine(8192))  # Output signal
Example 3 — Scheduler error:
def foo():
    @1 freq = 440
    output Wavetable.osc(1, freq, sine(8192))  # Broken, freq doesn't exist
Example 4 — Organized score + generated events:
def hat_eights():
    for i in range(0, 8):
        @(i / 2.0) hat()

@0:
    hat_eights()
    @0 kick()
    @1 snare()
    @2 kick()
    @3 snare()

@4:
    hat_eights()
    @0 kick()
    @1 snare()
    @2 kick()
    @2.5 kick()
    @3 snare()
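For a rough sense of how the nested-time arithmetic could work under the hood, here is a minimal Python sketch. The names Scheduler, at, and block are my own hypothetical stand-ins, not part of any real system: a context manager accumulates the offset of the enclosing block, so each @ time is stored as an absolute beat, and a heap plays events back in order.

```python
import heapq
from contextlib import contextmanager

class Scheduler:
    '''Sketch of the @ scheduler: beat times are relative to the
    enclosing block, stored internally as absolute beats.'''

    def __init__(self):
        self._offset = 0.0  # accumulated offset of enclosing blocks
        self._queue = []    # heap of (absolute_beat, order, action)
        self._order = 0     # tie-breaker for events on the same beat

    def at(self, beat, action):
        '''@beat action -- schedule relative to the current block.'''
        heapq.heappush(self._queue, (self._offset + beat, self._order, action))
        self._order += 1

    @contextmanager
    def block(self, beat):
        '''@beat: -- everything inside is relative to this beat.'''
        self._offset += beat
        try:
            yield
        finally:
            self._offset -= beat

    def run(self):
        '''Pop events in beat order; returns (beat, action) pairs.'''
        fired = []
        while self._queue:
            beat, _, action = heapq.heappop(self._queue)
            fired.append((beat, action))
        return fired

# Example 1 from above, transliterated:
s = Scheduler()
s.at(0, "do_something")          # beat 0
s.at(2, "do_something")          # beat 2
with s.block(5):                 # @5:
    s.at(0, "do_something")      # beat 5
    s.at(3, "do_something")      # beat 8
    with s.block(4):             # @4: block starts at beat 9
        s.at(0, "do_something")  # beat 9
        s.at(1, "do_something")  # beat 10

beats = [b for b, _ in s.run()]
print(beats)  # [0.0, 2.0, 5.0, 8.0, 9.0, 10.0]
```

The beats match the comments in Example 1 exactly, which suggests the nesting semantics reduce to nothing more exotic than a running offset.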
That last example reminds me of Max Mathews’ Radio Baton Conductor language.
I want to begin discussing the implications of yesterday’s Python-Csound mockup code (which I’ll refer to as slipmat for the time being), starting with imports:
import Wavetable
from Gen import sine
from Pitch import cpspch
All of Csound’s 1400+ opcodes are available at all times. Great for convenience, perhaps not so great for organization. In contrast, the Python language starts out with only the basics, a clean slate. To extend functionality, users import modules. This is a cleaner approach than having it all hang out. There are some other advantages, too.
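Plain Python illustrates the payoff: importing a module adds only the module's name to your namespace, so nothing collides and every call says where it came from. A trivial sketch:

```python
import math            # only the name 'math' enters the namespace

print(math.sqrt(2))    # 1.4142135623730951 -- qualified, traceable access

# An unqualified call fails, and that's the point: nothing is
# implicitly global the way all 1400+ Csound opcodes are.
try:
    sqrt(2)
except NameError:
    print("sqrt is not a bare name")
```

A slipmat-style language could keep the same rule: the core stays small, and every opcode arrives through an explicit import.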
First, let’s look at a hypothetical import block. Let’s say you were to design a “computer network music” ensemble inspired by The Hub. Some communication modules you might include:
import Jack
import MIDI
import Network
import OSC
A computer network music ensemble sounds like it might be a complex piece of software, complex enough that doing all your work in one file would be tedious. So you decide to start a new file, my_network.slip, where you store your own custom opcode/unit generator function definitions. In your main file, you write this to import:
import my_network
Not only can you use my_network for this project, but that code can be reused in any number of future projects. Code reusability is a beautiful thing. In fact, this would apply to any properly written slipmat document. For example, a composition would double as a library of synthesizers that you could plug into your own work:
import Trapped  # Trapped in Convert by Dr. Richard Boulanger

...

signal = Trapped.blue(22.13, 4, 0, 9.01, 600, 0.5, 20, 6, 0.66)
See trapped.csd.
Last year, I finally weaned myself completely off of Perl and learned Python in its place. Colors have never been brighter. There is such an elegance to Python, and I would love to see this in a computer music language.
The following mockup code is what you would get if you combined the awesome powers of Csound with the beauty of Python. I’m taking some liberties. The example is ignorant of i, k and a-rate. Instead of an orchestra/score pair, everything is combined as one file. And I’m introducing a concept for scheduling events, @, which tells when to do something.
#!/usr/bin/env slipmat

import Wavetable
from Gen import sine
from Pitch import cpspch

def sine_arp(dur, amp, pitch, lfo_freq):
    this['dur'] = dur
    pitch = cpspch(pitch)
    notes = [0, 3, 7, 8]
    arp = Wavetable.osc(1, lfo_freq, notes)
    freq = pitch * 2 ** (arp / 12.0)
    osc = Wavetable.osc(amp, freq, sine(1000))
    output osc

def ping(amp=1.0, freq=262):
    this['dur'] = 0.5
    output Wavetable.osc(amp, freq, sine(32))

if __name__ == '__main__':

    def harmonic_pattern(freq=100):
        @0 ping(1.0, freq)
        @1 ping(0.8, freq * 2)
        @2 ping(0.6, freq * 3)
        @3 ping(0.4, freq * 4)
        @4 ping(0.2, freq * 5)

    sine_arp(4, 0.5, 8.00, 1 / 4)
    @4 sine_arp(4, 0.5, 7.07, 1 / 4)
    @8 harmonic_pattern()
    @13 harmonic_pattern(cpspch(7.00))
    @18 ping()
I’ll come back with a commented version of this in a few days. Though unlikely, it is my hope that some of you will take the time to try to figure out what is going on. My philosophy is that code should be human readable, and that syntax can help reinforce this.
I started a new computer music blog called Slipmat, which will cover broad topics related to musical language design, injected with my own personal theories and philosophies, rather than focusing on any particular language. On occasion, and only if appropriate, I’ll cross post between The Csound Blog and Slipmat. For example, this:
What if Python DNA was Injected into Csound
The following mockup code is what you would get if you combined the awesome powers of Csound with the beauty of Python. I’m taking some liberties. The example is ignorant of i, k and a-rate. Instead of an orchestra/score pair, everything is combined as one file. And I’m introducing a concept for scheduling events, @, which tells when to do something.
If I designed a computer music language, what would it be like?
I’ve always wanted to get my hands dirty by writing a simple computer music language, and have picked up many ideas over the years that I would love to implement.
I do a lot of Csounding, I have written and performed with SuperCollider, experimented with ChucK, and even spent time with SAOL. And Max; even though it’s visual, I still consider it a computer music language. Not that this directly relates to music, but I’ve also generated visuals with Processing. Aside from art languages, I’ve spent time with Perl, Python, Java and C.
Since I don’t have time in the near future to actually begin this project, I’m instead going to blog about any random thoughts I have on computer music language design.
Stay tuned.
Alex Hofmann has created four videos demonstrating how to get up and running with QuteCsound, the new front-end editor for Csound. This first video demonstrates the built-in help system and included tutorials. Here are the other three: A 58-second intro, Using MIDI and Configuration.
This is a demonstration of the software I wrote for my MSc in Music Technology at DKIT under the supervision of Rory Walsh. The software is used to create custom multitouch user interfaces for controlling Csound instruments. It allows users to define GUI elements, such as sliders and buttons, in their Csound file. The .csd file is parsed by CsMultitouch to retrieve this information, which is then used to create the multitouch interface using the PyMT framework.
I want this.
Let’s set the way back machine to January 2001. This is around the time I took my first steps into designing an additive synthesizer. I’m not sure when user-defined opcodes were introduced, though there is a good chance they had not existed then. And if they did, I had no idea of their existence. Same goes for the event series of opcodes. In my legacy code, each overtone, along with its supporting envelopes and transfer functions, was written explicitly. I was a Perl junkie at the time, so I wrote scripts that would generate the instruments for me.
The example I’m posting today is the legacy code from 2001. I did not change the code, except for converting tabs to spaces and placing the orc/sco pair into a csd.
Download: add_synth_legacy.csd
In my new additive synth, I’m employing a recursive user-defined opcode technique, which I first read about in Steven Yi’s Csound Journal articles Control Flow Part I and Part II.
If you look at Part II, Steven actually demos an additive synth that is eerily similar to the core design of the one I’m in the process of making, which means I either independently came up with a similar design, or, more likely, I’m suffering from a bout of cryptomnesia. Either way, if you haven’t studied up on these two articles, then perhaps it’s time you make a weekend project out of it; they are pure gold.