There is a simple but highly useful hack I discovered a few years ago: passing the current tempo from the score to an instrument using p3. And since dubstep is all the rage these days, I’ve designed a practical example in which a tempo-synced LFO modulates the cutoff of a low-pass filter to create that “wobbly” bass effect.
Passing the tempo of the score to an instrument is a fairly straightforward process. First, always set p3 to 1; this value of 1 is translated into seconds-per-beat (SPB) when the score is preprocessed. Second, use p4 for the desired duration. Third, in the instrument code, the following two lines obtain the tempo in SPB from p3 and then reset the duration of the instrument to the desired length:
ispb = p3 ; Seconds-per-beat. Must specify "1" in score
p3 = ispb * p4 ; Reset the duration
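The arithmetic behind the trick is tiny, but it's worth seeing laid out. Here's a minimal Python sketch of what happens (the function names are mine; in practice, Csound's score preprocessor performs the tempo-to-seconds conversion, and the two orchestra lines above perform the duration reset):

```python
def spb_from_tempo(bpm):
    """Seconds per beat for a score tempo statement like 't 0 <bpm>'."""
    return 60.0 / bpm

def reset_duration(p3, p4):
    """Because we wrote p3 = 1 beat, p3 arrives in the instrument as SPB
    in seconds; multiplying by p4 beats gives the real duration."""
    ispb = p3          # seconds-per-beat, recovered from p3
    return ispb * p4   # actual duration in seconds

spb = spb_from_tempo(120)      # 0.5 seconds per beat at 120 BPM
dur = reset_duration(spb, 4)   # a 4-beat note lasts 2.0 seconds
```

Change the tempo statement and the same score line stretches or shrinks accordingly, which is exactly why the LFO below stays locked to the beat.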
Now that we have the tempo, we can create a low frequency oscillator that is synced to it. To create that dubstep wobble effect, p7 is designated as the rhythmic division of the note. We need to take this p7 value and translate it into Hz using the SPB value:
idivision = 1 / (p7 * ispb) ; Division of Wobble
Plugging this value into the frequency parameter of an oscillator yields a tempo-synced LFO (some drift may occur over long periods of time):
klfo oscil 1, idivision, itable
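To get a feel for the numbers, here's the division-to-Hz formula as a small Python sketch (illustrative only; the example values assume a division of 1 is one wobble per beat):

```python
def wobble_hz(division, ispb):
    """LFO rate in Hz for a rhythmic division expressed in beats:
    division = 1 is one wobble per beat, 0.5 is two per beat, etc."""
    return 1.0 / (division * ispb)

# At 140 BPM, ispb = 60/140, roughly 0.4286 seconds per beat:
ispb = 60.0 / 140.0
rate_quarter = wobble_hz(1.0, ispb)     # one wobble per beat
rate_sixteenth = wobble_hz(0.25, ispb)  # four wobbles per beat
```

Halving the division doubles the LFO rate, so a single pfield gives you the whole menu of wobble speeds at any tempo.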
Try changing the tempo in the score, and you’ll notice that the wobbles stay consistent relative to the tempo.
“Just a few years before Michael Ladd Orr was born, Alvin Toffler introduced the concept of “future shock”. The concept, and the book of the same name, proposed we were entering an age when the future was arriving “prematurely” – when one could stay in one place and the culture around him/her would change so rapidly that it would have the same disorienting effect as simply moving to a foreign culture – when the rate of technological advancement would increase exponentially until the average person simply wouldn’t be able to keep up. Orr’s work often hints at these notions, whether he intends this or not is up for discussion. His art studio and tools are completely mobile at all times so he can create on the spot and in the moment. He may post a newly created image on the web or drop it in the mail. There is no time to dictate meaning, only to reinterpret the numerous images and impressions as they go whizzing by. Consumerist culture’s infinite supply of marketing images collides with arbitrary hand-drawn patterns. Ordinarily warm and vibrant yellows and oranges become jarring against nonsensical shapes and random household items. There is the sense of an artist trying to inject a touch of restraint and familiarity into a machine that is insatiable and alien. Collage becomes collision, but all the while a childlike kind of wonder and playfulness seeks to burst through the surface. Orr, 35, was born and raised in and around Atlanta, where he continues to live, work, and ply his craft.”
I created a group at Noisepages for anyone interested in The Audio Programming Book. It’ll be a place where we can share code, ask and answer questions, and take the journey together. Hope to see many of you there!
“This is not just a book; it is an encyclopedia of mathematical and programming techniques for audio signal processing. It is an encyclopedia focused on the future, but built upon the massive foundations of past mathematical, signal processing, and programming sciences.” – Max Mathews.
The Audio Programming Book, edited by Richard Boulanger and Victor Lazzarini and published by MIT Press, showed up at my doorstep last Friday. Since receiving my copy, I’ve been thumbing through the pages at random, reading every little excerpt that caught my eye, while taking long, hard looks at the various C programming examples. My initial impression: wow.
I suspect I’ll be covering some C programming here in the near future.
The event opcodes are some of my favorites in all of Csound. They are highly versatile in their application, as they can be used for algorithmically generating notes, transforming score events, creating multiple interfaces to a single instrument, triggering grains in a granular patch, etc. The list goes on and on.
Today’s example uses the event_i opcode to generate multiple score events from a single score event. The basic rundown is this: each instr 1 event generates 5 events for instr 2. Each of these 5 events is staggered in time, with the delay between notes set by the idelay parameter. The pitches also follow a fixed half-step sequence of [0, 3, 7, 5, 10] relative to the base pitch specified in the ipch parameter. Take a listen.
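If you want to see the onset and pitch arithmetic on its own, here's a Python sketch of what those five event_i calls compute. The function and parameter names mirror the prose above (idelay, ipch) but are otherwise mine, not the actual pfield layout of the example CSD:

```python
def generate_events(start, idelay, ipch, seq=(0, 3, 7, 5, 10)):
    """One (onset, frequency) pair per generated event: onsets are
    staggered by idelay, pitches sit a fixed number of half steps
    above the base pitch via equal temperament (2 ** (n / 12))."""
    events = []
    for i, steps in enumerate(seq):
        onset = start + i * idelay           # staggered in time
        freq = ipch * 2 ** (steps / 12.0)    # half steps above base
        events.append((onset, freq))
    return events

# Five events, a quarter second apart, above a 110 Hz base pitch:
for onset, freq in generate_events(0.0, 0.25, 110.0):
    print(f"t={onset:.2f}s  f={freq:.2f}Hz")
```

Swapping in a different seq tuple is exactly the first exercise below: a new note pattern with no other changes to the instrument.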
Here are a couple of exercises to help get you going:
Create a new note pattern sequence. The last parameter of event_i is used to set the relative pitch. Set the value for x in ‘ipch * 2 ^ (x / 12)’.
Alter the rhythm. The third parameter controls how far into the future a particular event is triggered. The example uses integers, though using floats such as 1.5 and 3.25 can completely transform the characteristics of the output.
Write a short etude with your modified instrument.
This is a block diagram of all recent activity of this blog. Though seriously, I’ve been wicked busy lately, and things are just now starting to slow down enough that I can pick up where we left off with Jean-Luc’s NYU Synthesis class. Jean-Luc and I have been in perpetual contact the whole time, and we’ve devised a plan to make up for the missing weeks. Starting tomorrow, I’ll start blogging again about the current topics being covered in class. Once the semester is over, JL and I plan to go back and fill in the blanks. There is still plenty of Csound information in the works.
In the post “Low Frequency Oscillator,” an oscillator is used to modulate the frequency of a second oscillator; this is known as frequency modulation. By replacing the low frequency oscillator with a high frequency one, we get frequency modulation (FM) synthesis, which produces harmonically rich spectra with as few as two sine wave oscillators.
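The whole technique fits in one expression. Here's a minimal Python sketch of a single sample of simple two-oscillator FM, y(t) = amp * sin(2*pi*fc*t + I * sin(2*pi*fm*t)), where fc is the carrier frequency, fm the modulator frequency, and I the index of modulation (names are mine, chosen to match the pfield descriptions below):

```python
import math

def fm_sample(t, amp, fc, fm, index):
    """One sample of two-oscillator FM: the modulator's output,
    scaled by the index, is added to the carrier's phase."""
    return amp * math.sin(2 * math.pi * fc * t
                          + index * math.sin(2 * math.pi * fm * t))

# With index = 0 the modulator drops out entirely, leaving a pure
# sine wave at the carrier frequency:
y = fm_sample(0.001, 1.0, 440.0, 220.0, 0.0)
```

Raising the index leaves the amplitude envelope untouched but spreads energy into sidebands around the carrier, which is where FM's rich spectra come from.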
This technique was first applied to music by American composer John Chowning at Stanford University in 1967. FM was a real game changer. Since FM could produce a wide range of musically interesting sounds with very little computation, it helped pave the way for computer music to transition from an institutional commodity to a viable mainstream technology; primitive digital devices could fiscally and audibly compete with physical and analog instruments. The Yamaha DX7, an FM synthesizer, was released in 1983 and became the “first commercially successful digital synthesizer.” If memory serves me correctly, I once heard that long before I was a student at The Berklee College of Music, their computer music program was centered on FM synthesis.
A Csound port of Chowning’s most famous musical work “Stria” is included with QuteCsound as one of the examples.
FM synthesis is a broad topic that would be impossible to cover in a single blog post. I encourage you to read chapter 9 of the Csound Book, “FM Synthesis and Morphing in Csound: from Percussion to Brass” by Brian Evans.
To get you started with some basic FM programming, I created an example CSD that uses the foscil opcode, a self-contained FM synthesizer. You can immediately start plugging in various values to hear the results. Here’s a quick rundown of the pfields for the instrument:
p4 — Amplitude
p5 — Pitch
p6 — Carrier ratio. Changing this will generally affect the base frequency of the note played, as well as the timbre. This works in tandem with the modulator ratio.
p7 — Modulator ratio. Changing this will affect the timbre.
p8 — Index of modulation. This determines how much modulation is applied. A value of 0 will apply no modulation, resulting in a sine wave. The higher the value, the richer the spectrum, producing a brighter timbre. The index is modulated by an envelope, so each note will start with the index supplied here, then fade to 0 by the time the note reaches the end.
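To make the ratio pfields concrete, here's a small Python sketch of how a simple FM setup like foscil's derives its internal frequencies from a note's pitch. The function name and return layout are mine; the relationships (carrier and modulator as multiples of the pitch, peak deviation scaled by the index) follow the standard simple-FM design:

```python
def fm_frequencies(pitch, carrier_ratio, mod_ratio, index):
    """Carrier and modulator frequencies, plus peak frequency
    deviation, for a note of the given pitch in Hz."""
    fc = pitch * carrier_ratio   # p6: shifts the perceived base pitch
    fm = pitch * mod_ratio       # p7: sets the sideband spacing
    deviation = index * fm       # p8: how far the carrier is pushed
    return fc, fm, deviation

# A 1:2 carrier/modulator ratio on a 220 Hz note with index 3:
fc, fm, dev = fm_frequencies(220.0, 1, 2, 3)
```

Because both oscillators scale with the pitch, the timbre stays consistent as you play different notes, which is exactly why ratios, rather than absolute frequencies, are the natural controls here.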
I personally have a strong association in which every time I hear a certain class of FM sounds I can’t help but think of the Atari arcade classic Marble Madness. The sound chip inside the machine was produced by Yamaha and “is similar to a Yamaha DX7 synthesizer.” It was also the first Atari game to use it, which probably explains why I think of this particular game; I spent much of my childhood in various arcades.
One of the easiest ways to play a sound file in Csound is to use the diskin2 opcode. With it, you can loop samples, modulate the pitch, apply filtering and ring modulation, etc. You can go really far with just this mini-sampler. It does have its limitations; the approach of the Beat Mangler X example overcomes them, but requires a more advanced design, which we’ll cover at some later point. For many situations, though, you’ll find that diskin2 is the perfect solution.
The listening example uses the amen loop used in the Beat Mangler X example, though you can easily use your own mono files by making changes to the score. Diskin2 supports stereo files, though this example only works with mono samples.
There are few resources that approach introductory Csound composition as elegantly as Cascone’s chapter from The Csound Book. In it, Cascone provides a bit of history and personal background, techniques for composing in a computer music environment, and six instrument designs. There is one particular passage that I want to emphasize, as I think it can be of great use:
“I started studying instrument design by taking other composers’ instruments and drawing them out on paper in flowchart form. I took the scores and isolated a particular instrument by commenting out all other instruments except the one I wanted to listen to. I would then start to modify that instrument in various ways so I could hear the effect my code was having.”