Live Coding and Capturing a Performance

Live coding is the latest fad sweeping computer music, and I would love for Slipmat to have this ability in its arsenal of tools, without having to sacrifice non-realtime rendering for computationally expensive processes, of course.

The following conceptual live coding prototype shows what a simple session would look like if it were modeled on the Python interpreter:

$ slipmat --capture_performance my_session.txt
>>> from LiveCodeSeq import seq
>>> from MyBassLibrary import rad_rezzy
>>> from random import random
>>> p = {}
>>> p[0] = [int(random() * 12) for i in range(16)]
>>> p[1] = [int(random() * 12) for i in range(16)]
>>> p[0]
[5, 9, 11, 8, 7, 8, 5, 1, 10, 7, 4, 4, 6, 4, 4, 2]
>>> p[1]
[6, 6, 5, 3, 5, 7, 8, 4, 0, 0, 8, 7, 9, 7, 2, 4]
>>> r = rad_rezzy()
>>> s = seq(instr=r, pattern=p[0], base_pch=6.00, resolution=1/16, tempo=133)
>>> s.start()
>>> s.change_pattern(pattern=p[1], on_beat=0)
>>> @60 s.stop(on_beat=0)

I have a gut feeling that there are changes that should be made, but as a starting point this isn’t a terrible one.
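
To make the interface above concrete, here is a minimal Python mock-up of how such a seq might behave, with a wall-clock thread standing in for a real audio engine. The class body, the instr.play() interface, and the pch arithmetic are all my assumptions; none of this is actual Slipmat:

import threading
import time

class PrintInstr:
    """Stand-in instrument: prints each note instead of synthesizing it."""
    def play(self, pch):
        print("note", pch)

class seq:
    """Toy step sequencer sketch for the session above."""

    def __init__(self, instr, pattern, base_pch=8.00, resolution=1/16, tempo=120):
        self.instr = instr
        self.pattern = list(pattern)
        self.base_pch = base_pch
        # resolution is a fraction of a whole note; a whole note is 4 beats.
        self.step_dur = resolution * 4 * 60.0 / tempo
        self._pending = None
        self._running = False

    def start(self):
        self._running = True
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self, on_beat=0):
        # The sketch stops immediately; a real engine would wait for the beat.
        self._running = False

    def change_pattern(self, pattern, on_beat=0):
        # Queue the new pattern; it is swapped in at the next pattern loop.
        self._pending = list(pattern)

    def _run(self):
        while self._running:
            if self._pending is not None:
                self.pattern, self._pending = self._pending, None
            for step in self.pattern:
                if not self._running:
                    return
                # pch notation: integer part is the octave, the fraction
                # holds the pitch class (6.00 plus 11 steps -> 6.11).
                self.instr.play(self.base_pch + step * 0.01)
                time.sleep(self.step_dur)

# r = PrintInstr()
# s = seq(instr=r, pattern=[5, 9, 11, 8], base_pch=6.00, resolution=1/16, tempo=133)
# s.start()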

Being able to capture a live coding performance would be fantastic. Not sure how workable it would be, but perhaps such a feature would produce a file that could be played back later:

$ cat my_session.txt
@0             global.seed(7319991298)
@4.04977535403 from LiveCodeSeq import seq
@8.43528123231 from MyBassLibrary import rad_rezzy
@10.9562488312 from random import random
@12.8873410266 p = {}
@15.6027957075 p[0] = [int(random() * 12) for i in range(16)]
@20.7757632586 p[1] = [int(random() * 12) for i in range(16)]
@26.2462371683 p[0]
@29.3961696828 p[1]
@34.0424988199 r = rad_rezzy()
@40.3211374075 s = seq(instr=r, pattern=p[0], base_pch=6.00, resolution=1/16, 
                   tempo=133)
@45.5491938514 s.start()
@47.8991166715 s.change_pattern(pattern=p[1], on_beat=0)
@52.6267958091 @60 s.stop(on_beat=0)

The @ schedules are the times at which return was originally pressed for each event. Looks like I’ll be spending some time with ChucK soon.
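
Here is a rough sketch of what a player for that file might look like, again in plain Python. The parsing rules, the run callback, and the speed argument are inventions for illustration; lines like global.seed(...) and the nested @60 are Slipmat syntax and would have to be handed to the engine rather than to the Python interpreter:

import re
import time

def play_session(path, run, speed=1.0):
    """Replay a captured session, calling run(code) at each recorded @ time."""
    events = []
    with open(path) as f:
        for raw in f:
            m = re.match(r"@(\d+(?:\.\d+)?)\s+(.*)", raw.rstrip())
            if m:
                events.append([float(m.group(1)), m.group(2)])
            elif raw.strip() and events:
                # Continuation of a wrapped line, e.g. the seq(...) call.
                events[-1][1] += " " + raw.strip()
    start = time.time()
    for at, code in events:
        delay = at / speed - (time.time() - start)
        if delay > 0:
            time.sleep(delay)
        run(code)

# Dry run at 4x speed, printing each event instead of executing it:
# play_session("my_session.txt", run=print, speed=4.0)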

2 thoughts on “Live Coding and Capturing a Performance”

  1. You’d also need to capture in a sample-accurate way when each of the events was triggered (or know at the start of which control block each was processed). So maybe it’s better to count in samples or control blocks rather than seconds. You could then convert into seconds if you wanted to render a hi-res version of your live performance.

    Cheers,

    Andrés

  2. @Andrés:

    I ran some tests on Csound a while back in terms of converting seconds into samples. Provided my test didn’t have any logic errors, it showed that converting back and forth between samples and seconds still produced sample-accurate values. I did this for both the 32-bit and 64-bit versions. The same will hopefully be true for Slipmat.
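
    For reference, this is the shape of the round-trip check I mean (a quick sketch, not the original test code):

    def roundtrip_errors(sr=44100, seconds=60):
        """Count samples where sample -> seconds -> sample drifts."""
        bad = 0
        for n in range(sr * seconds):
            t = n / sr                   # samples to seconds
            if int(round(t * sr)) != n:  # seconds back to samples
                bad += 1
        return bad

    # With 64-bit floats this should come back 0.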

    Let’s take a look at a hypothetical situation. Assuming an interpreter event is logged in the middle of a k-block, the event would be quantized to start at the next k-rate pass.
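
    In code, that quantization is just a ceiling to the next control-block boundary; ksmps here borrows the Csound name for the control block size:

    def quantize_to_kblock(event_sample, ksmps=64):
        """First k-block boundary at or after event_sample."""
        return -(-event_sample // ksmps) * ksmps  # ceiling division

    # An event logged at sample 1000 with ksmps=64 fires at sample 1024,
    # the start of the next k-rate pass.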

    There’s another possibility for recording live coding sessions, which I thought about immediately after posting this blog: instead of recording the events as shown above, every keystroke is logged. It would probably work like a MIDI file, where every keystroke event is serialized as a [delta time, ASCII code] pair. This wouldn’t necessarily be limited to keystrokes, either.
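
    Sketched in Python, the logger might look something like this (the class and method names are made up for illustration):

    import time

    class KeystrokeLog:
        """Record keystrokes as (delta time, ASCII code) pairs, MIDI-file style."""

        def __init__(self):
            self.events = []
            self._last = time.time()

        def key(self, ch):
            now = time.time()
            self.events.append((now - self._last, ord(ch)))
            self._last = now

        def replay(self, emit):
            for delta, code in self.events:
                time.sleep(delta)
                emit(chr(code))

    # log = KeystrokeLog(); feed it keys as they arrive, then:
    # log.replay(emit=print)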

    People could then trade live coding sessions like they trade mp3s and code, and then watch “the show” at their own convenience.

    “hi-res version of your live performance”

    I like this idea. A lot.