Csoundo Progress Report

I think I leveled up in my understanding of the Csound API since last weekend, thanks to prompt and insightful help from Csounders on the mailing list. This translates into being able to move Csoundo along faster than I have been, which is probably a necessity, since interest in Csoundo unexpectedly started gaining momentum last week and I want to capitalize on it.

The point to my rambling is that this project has been moving forward since I released Csoundo 9 days ago. It may still be a while until Csoundo works flawlessly on all computers, but many issues have now been identified, and I’ve been researching solutions.

That said, there are likely many bugs that have not yet been reported. Feel free to email them to me (jacobjoaquin@gmail.com), or just post a comment here. Furthermore, if you come up with any examples you’re proud of, pass them along. I’m *very* interested to see your projects.

On a related note, I picked up one of these.

See Arduino. And perhaps Scott Pilgrim Vs. The World (loved that movie.)

A Milestone for The Csound Blog

Over the weekend, The Csound Blog surpassed 100 RSS subscribers (a combo of two feeds) according to Google Reader. The recent article at CDM was a big help in pushing that number over. Thank you everyone!

And even though Csoundo has only been up a little more than a week, it has already been downloaded 135 times, far exceeding my hopes of 15. Thanks to everyone who has given this earliest of ALPHA releases a chance. And double thanks to everyone who has been emailing me with bugs. Your efforts are helping build a better next release.

The Future of Csoundo

Now that I’ve pushed Csoundo out the door, it’s time for me to take a step back and look at where it’s at and where it’s going. As of this moment, Csoundo is nothing more than the bare Csound essentials. It can run a CSD, read and write to tables, generate score events, and push/pull information via chn busses. Even with near minimum functionality, so much is already possible, but that’s no reason to stay content.

Over the last 3-4 weeks, I’ve been learning the Csound API while trying to identify some of the big issues that need to be overcome for Csound to be a viable platform for Processing. Even more importantly, I’ve been becoming intimately familiar with the Processing philosophy. If Csoundo had a mission statement, it would go something like this:

“Do everything the Processing way (or as close as possible)”

My vision for the design of Csoundo is that it will feel like Processing when you use it. A simple example of something I can do to achieve this goal is to study the naming conventions of Processing, and apply what I learn to Csoundo’s methods. Little details like this add up.

On the other side of the equation is Csound. I don’t want to hide that it’s Csound under the hood, or pile on unnecessary constraints. I want the full power of Csound to remain in the hands of users.

I’m going to post some notes I’ve made. They may not fully make sense, but they’ll hopefully give you a sense of where this project is heading.

  • Support for 32-bit and 64-bit Csound, though only 32-bit support initially.
  • Support for real time and non-real time Csound operations.
  • Support synchronized and non-synchronized audio.
  • Being able to defer redraw() to Csound (optional).
  • In instances in which more than one Csound kblock is performed per Processing frame, have a method for collecting krate data into an array that can be scanned by Processing. For example, scanning an rms signal for a value that peaks above a threshold.
  • Protect Csound data so that memory isn’t being read and written to at the same time.
  • Ensure that a Csound operation is completed before moving onto the next.
  • Play well with MIDI and OSC, including the existing Processing libraries themidibus, proMidi and oscP5.
  • Set Csound command strings.
  • Include pre-built synths and class interfaces for plug-n-play support.
  • Get a list of f-tables in Csound’s memory.
  • Proper documentation.
  • Keep it simple.
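To make the krate-collection bullet above concrete, here is a minimal Python sketch of the idea: buffer the k-rate values produced during one Processing frame, then scan the buffer for the first value that crosses a threshold. The function name and the RMS numbers are made up for illustration; Csoundo’s actual API may look quite different.

```python
def scan_for_peak(krate_values, threshold):
    """Return the index of the first k-rate value that reaches the
    threshold, or -1 if none does during this Processing frame."""
    for i, value in enumerate(krate_values):
        if value >= threshold:
            return i
    return -1

# Suppose 8 Csound k-blocks ran during one Processing frame, each
# producing one RMS value (made-up numbers for illustration):
rms_per_kblock = [0.10, 0.12, 0.31, 0.62, 0.55, 0.40, 0.22, 0.15]

hit = scan_for_peak(rms_per_kblock, threshold=0.5)
```

A sketch would then check `hit` once per frame and trigger a visual event whenever it isn’t -1.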

What’s going on with QuteCsound?

A lot, actually. Jim Aikin recently wrote an excellent column for CDM called Deep Synthesis Made Free, Easy: QuteCsound. In it, he makes this observation, “With the release of QuteCsound 0.6.0, developer Andres Cabrera has made Csound about as easy to use as it’s ever likely to be.” Back in the day, it took me six hours to get Csound up and running the first time. With QuteCsound, it’s as simple as loading up an example and clicking Run. Easy it is.

Also, six new screencasts have made it onto the web, covering the latest features for this Csound frontend editor.

[kml_flashembed movie="http://www.youtube.com/v/KKlCTxmzcS0" width="425" height="350" wmode="transparent" /]

Presets Tutorial [Part 1, Part 2]
Live Events [Part 1, Part 2, Part 3]
New editing features in 0.6.0

Announcing Csoundo

I’m happy to announce the first ALPHA release of Csoundo, a Csound library for Processing.

Download Csoundo

Csoundo is in very early development, but like they say, release early and often. Csoundo has only been tested on OS X Leopard and Snow Leopard. It comes with three examples: A mouse theremin, f-table-to-graph and a visualization experiment. Video of the last example can be seen here.

A special thanks to everyone in the Csound community who has been helping me figure out the Csound API, making Csoundo a reality.

update: I’m getting some initial bug reports. Which is expected. I’m on it!

update 2: There are reports of Csoundo working on Windows, while others are working to get it up and running on Linux.

Csound Tables to SVG with Processing

I just discovered that I can easily export tables from Csound to a scalable vector graphic (SVG) format thanks to Processing. I’m really excited about this since I’ll be able to generate nice looking graphs in the future, both for this blog and for The Csound Journal. This all comes from work I’m doing in my spare time; I’m exploring the idea of creating a proper Csound library to use with Processing.

The picture above showcases two tables generated using two different methods. The top wave simply draws the contours of a table storing a band-limited sawtooth wave. The bottom displays discrete points in a table storing randomly generated noise. Nothing too fancy, yet, but the possibilities are there.
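Processing’s SVG export does the heavy lifting in the sketch described above, but the core idea of the top wave, turning a table of samples into a vector contour, is simple enough to show in plain Python. This is a sketch of the concept, not the actual code behind the picture; the sine table stands in for a real Csound f-table.

```python
import math

def table_to_svg(table, width=400, height=100):
    """Render a list of samples in [-1, 1] as an SVG polyline contour."""
    n = len(table)
    points = []
    for i, sample in enumerate(table):
        x = i * width / (n - 1)
        y = (1.0 - sample) * height / 2.0  # flip: SVG y grows downward
        points.append(f"{x:.2f},{y:.2f}")
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<polyline fill="none" stroke="black" '
            f'points="{" ".join(points)}"/></svg>')

# A single sine cycle standing in for a Csound f-table:
sine = [math.sin(2 * math.pi * i / 127) for i in range(128)]
svg = table_to_svg(sine)
```

The discrete-point version of the bottom wave would emit one small `<circle>` per sample instead of a single polyline.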

Csound + Processing Experiment I

What started as a series of small experiments in musical Markov chains has led me back to creating Processing sketches with audio, something I’ve been meaning to do for ages.

You see, one of the issues I’ve been struggling with here at the Csound Blog is making it, well, interesting. Code and synth tricks are all dandy, but ultimately not very accessible, since one has to do homework to really get what’s going on. So one way to reconcile this is to incorporate more visual elements that can be passively enjoyed rather than studied. Processing makes the perfect vessel. This doesn’t mean everything I do here will now get a short movie. The fact is, these things will probably be few and far between.

As for the piece, I wanted to incorporate some real-time visuals showing the current node in a musical Markov chain. Though I got sidetracked a bit and went with this morphing Markov design. The network structure itself is based on the de facto Processing algorithm, where edges are drawn between overlapping circles. Just look at the header of the Processing website to see what I mean. All I had to do was implement that design, animate it, and add audio. The final piece is something between a lava lamp and a wind chime.

I did run into two issues while preparing this video:

1. I have no idea what I’m doing when compressing video for the web. Thus, the low quality. I’ll get better.
2. There is a bug that causes the Csound engine to stop mid-performance for reasons I have not been able to isolate. Thus, I’m limited to showing only two minutes worth. In this case, not a big deal, but I’ll have to overcome this for longer works.

And the code is not yet available, though will be within a couple of weeks.

Lisp Revelation

Algorithms are Thoughts, Chainsaws are Tools from Stephen Ramsay on Vimeo.

CreateDigitalMusic.com recently posted this video. Though the focus is on Live Coding, what jumped out was this comment on the syntax of Lisp:

“Rule number 1. Everything is a list. Rule number 2. Some lists are functions in which the first item of the list is a function name and the remaining items are parameters. You have now learned the entire syntax of Lisp.”
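The two rules in that quote can be made concrete with a toy evaluator, here written in Python with s-expressions rendered as nested lists. This is a sketch of the idea, not real Lisp: the head of each list names a function, the tail holds its arguments.

```python
import operator

# Map Lisp-style function names to Python functions.
FUNCS = {"+": operator.add, "*": operator.mul}

def evaluate(expr):
    """Evaluate a Lisp-style expression given as nested Python lists."""
    if not isinstance(expr, list):
        return expr  # a bare number is its own value
    func = FUNCS[expr[0]]          # rule 2: first item names a function
    args = [evaluate(a) for a in expr[1:]]  # remaining items are parameters
    return func(*args)

# (+ 1 (* 2 3)) in Lisp becomes:
result = evaluate(["+", 1, ["*", 2, 3]])
```

That really is the whole syntax; everything else in Lisp is built from that one shape.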

This is a bit of a revelation to me. To be honest, I’ve sort of avoided Lisp for the most superficial of reasons: there are just too many parentheses.

And now that my mind is open again, I’m going to take a serious look into Lisp and especially Impromptu. I’m not saying that I’m shifting from a Python-based environment to a Lisp-based system, only that I will continue to remain open to other possibilities.

Markov Experiment II

Markov Experiment 2

Now that I have the basic Markov engine happening, it’s play time. Instead of a simple three node chain, today’s example has eight. And each node does not lead to every other node, which adds a layer of melodic phrasing.

Source code: markov_experiment_2.csd

Listen at SoundCloud

Nothing more to say really, other than it’s time for me to start working on my orchestrations to make these things sound more musical than these tech demos.

Three Node Markov Chain

The graph above shows a three node musical Markov chain. And the source code/audio below is this Markov chain realized in Csound.

Download: 3_node_markov_chain.csd
Listen at SoundCloud

What is a Markov chain? According to Wikipedia, “A Markov chain is a discrete random process with the property that the next state depends only on the current state.”

This Markov melody starts with state_0, playing an E-flat for an 8th note duration. A new state is then chosen for the next course of action. There is an equal chance (1 in 3) that state_0, state_1 or state_2 will be chosen. Assuming state_2 is selected, G below middle C is played for a 16th note duration. From state_2, there is a 1 in 10 chance state_0 will be chosen at random, 2 in 10 chance for state_1, and a 7 in 10 chance for state_2. I’m guessing you can figure out state_1. This chain plays until time runs out.
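The walk described above can be simulated in a few lines of Python, with pitches and durations left out. The weights for state_0 and state_2 come straight from the description; the row for state_1 is a made-up placeholder, since the post leaves it as an exercise.

```python
import random

# Transition weights: state_0 picks any state with equal chance; from
# state_2 the odds are 1/10, 2/10, 7/10. The state_1 row is hypothetical.
transitions = {
    "state_0": {"state_0": 1, "state_1": 1, "state_2": 1},
    "state_1": {"state_0": 2, "state_1": 3, "state_2": 5},  # hypothetical
    "state_2": {"state_0": 1, "state_1": 2, "state_2": 7},
}

def next_state(rng, current):
    """Pick the next state using the weighted odds for the current one."""
    weights = transitions[current]
    return rng.choices(list(weights), weights=list(weights.values()))[0]

def walk(rng, start="state_0", steps=16):
    """Generate a Markov melody as the list of visited states."""
    states = [start]
    for _ in range(steps - 1):
        states.append(next_state(rng, states[-1]))
    return states

chain = walk(random.Random(1))
```

In the Csound version each visited state corresponds to one note event; here the walk just returns the sequence of node names.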

Though there are various ways of creating Markov chains in Csound, I landed on this design, which utilizes Csound instruments as Markov nodes. My first draft used a lot of gotos and jumps, which has its place, but I really wanted to encapsulate each state in its own block of code. Instruments do the trick, though I believe user-defined opcodes would work as well. Node connections, along with their weighted probabilities, are defined as f-tables in a centralized block of code in the global space of the orchestra; this allows a user to quickly change the probabilities, and to quickly create or modify new Markov networks.

In order to allow multiple instances of the Markov melody to play simultaneously, I’ve equipped each running Markov thread with a string identifier. You can hear two chains playing in the last 4 bars, with the second chain offset by a 32nd note.

As an alternative Markov chain design, Andrés Cabrera has a Csound example in his Csound Journal article Using Python Inside Csound. Python is actually better suited to the task than pure Csound, so definitely take a look.