FM Synthesis with foscil

In the post “Low Frequency Oscillator,” an oscillator is used to modulate the frequency of a second oscillator; this is known as frequency modulation. By replacing the low frequency oscillator with a high frequency oscillator, we get frequency modulation synthesis, which produces harmonically rich spectra with as few as two sine wave oscillators.

This technique was first applied to music by American composer John Chowning at Stanford University in 1967. FM was a real game changer. Since FM could produce a wide range of musically interesting sounds with very little computation, it helped pave the way for computer music to transition from an institutional commodity to a viable mainstream technology: primitive digital devices could fiscally and audibly compete with physical and analog instruments. The Yamaha DX7, an FM synthesizer, was released in 1983 and became the “first commercially successful digital synthesizer.” If memory serves me correctly, I once heard that long before I was a student at Berklee College of Music, their computer music program was built around FM synthesis.

A Csound port of Chowning’s most famous musical work “Stria” is included with QuteCsound as one of the examples.

FM synthesis is a broad topic that would be impossible to cover in a single blog post. I encourage you to read chapter 9 of The Csound Book, “FM Synthesis and Morphing in Csound: from Percussion to Brass” by Brian Evans.

To get you started with some basic FM programming, I created an example CSD that uses the foscil opcode, a self-contained FM synthesizer. You can immediately start plugging in various values to hear the results. Here’s a quick rundown of the pfields for the instrument:

  • p4 — Amplitude
  • p5 — Pitch
  • p6 — Carrier ratio. Changing this will generally affect the base frequency of the note played, as well as the timbre. This works in tandem with the modulator ratio.
  • p7 — Modulator ratio. Changing this will affect the timbre.
  • p8 — Index of modulation. This determines how much modulation is applied. A value of 0 applies no modulation, resulting in a sine wave. The higher the value, the more spectral content there is in the sound, producing a brighter timbre. The index is shaped by an envelope, so each note starts at the index supplied here, then fades to 0 by the end of the note.
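The arithmetic behind these pfields can be sketched in a few lines of Python. This is an illustrative sketch of simple two-oscillator FM, not foscil’s actual implementation; the function name and parameters are my own, chosen to mirror p4–p8:

```python
import math

def fm_sample(t, amp, pitch, car_ratio, mod_ratio, index):
    """One sample of simple two-oscillator FM (Chowning-style).

    Carrier and modulator frequencies are both derived from a base
    pitch and the two ratios, as foscil does with p6 and p7."""
    fc = pitch * car_ratio  # carrier frequency
    fm = pitch * mod_ratio  # modulator frequency
    modulator = index * math.sin(2 * math.pi * fm * t)
    return amp * math.sin(2 * math.pi * fc * t + modulator)

# With an index of 0 the modulator contributes nothing,
# leaving a pure sine wave at the carrier frequency.
print(fm_sample(0.001, 1.0, 440, 1, 2, 0))
```

With index 0 the modulator term vanishes, which is why p8 = 0 yields a plain sine wave; raising the index widens the spectrum.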

[kml_flashembed movie="http://player.soundcloud.com/player.swf?url=http%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F6219223&show_comments=true&auto_play=false&color=cc0000" width="550" height="81" wmode="transparent" /]Listen @ SoundCloud

Download foscil.csd here.

I have a strong association in which every time I hear a certain class of FM sounds, I can’t help but think of the Atari arcade classic Marble Madness. The sound chip inside the machine was produced by Yamaha and “is similar to a Yamaha DX7 synthesizer.” Marble Madness was also the first Atari game to use this chip, which probably explains why I think of this particular game; I spent much of my childhood in various arcades.

[youtube]http://www.youtube.com/watch?v=N_NMQT1G_S8[/youtube]

Synthesis Fall 2010

Organizing Sounds: Sonata Form

When working with Csound, or in computer music in general, organizing the sounds and grooves we create into finished compositions can be tricky, and perhaps more difficult than when working with more traditional musical material. We don’t always, if at all, deal with melodic, harmonic or rhythmic material in the classic, Western sense of these terms. Therefore we sometimes lack a framework, or context, within which to develop our material from ideas to complete pieces. In this series of articles we will discuss approaches to composition and form specific to the context of electronic and computer music, and explore various ways of organizing musical data.

Whether we think of ourselves as artists, composers, sound designers or researchers, we are all at some point confronted with this issue. But working with new media and cutting edge techniques does not mean we should forget our legacy, and the issue of form is certainly not a new one. A tried and true classic is the sonata form, which, as we shall see, can be of tremendous help.

“I alter some things, eliminate and try again until I am satisfied. Then begins the mental working out of this material in its breadth, height and depth.” — Ludwig van Beethoven

I. The Sonata Form

The word sonata comes from the Italian suonare, ‘to sound’, which implied music to be ‘sounded’ through instruments, as distinct from a cantata, a piece to be sung. While it is difficult to establish with certainty when the sonata form was invented, it became very popular in the 18th century as the predominant form used in instrumental music.

A sonata consists of three sections:

1. Exposition
2. Development
3. Recapitulation

Traditionally, composers introduce two opposing or complementary themes in the exposition: the first theme establishes the home key, and a transitional bridge leads us to the second theme, often in the dominant key.

The development tends to vary in format and length, but is used to build tension or interest by developing the themes from the exposition along with new material. Traditionally the development begins in the key in which the exposition ended, only to go through a number of modulations while building up melodic complexity before arriving at the last part of the development: the retransition, a section often in the dominant key intended to prepare us for the next section and the return to the tonic.

The recapitulation is a modified version of the exposition, which follows a similar structure (theme 1, bridge, theme 2 and coda), but some variations are introduced to differentiate it from the exposition. The main purpose of the recapitulation is to release the tension of the development and bring a sense of closure and continuity.

Or in condensed form:

AB – C – AB’

As it turns out, the sonata form is still very much alive today, even and perhaps especially in electronic music, which tends to put less of a focus on vocals and is therefore ideally suited to the form.

II. Case Study: Digitalism

Let’s take a listen to the track ‘Idealistic’ by the band Digitalism, released on their album Idealism on May 9th, 2007. The German duo has enjoyed worldwide success, and this track is fairly representative of their work.

‘Idealistic’ by Digitalism:

[youtube]http://www.youtube.com/watch?v=lFYVK2IAXiw[/youtube]

It is also a perfect example of the sonata form being used in electronic music.

1. Exposition

The track opens with a strong beat, supported by a steady guitar riff; soon after, an abrasive bass line is introduced (bar 9, or 0:21 into the track), which goes on until 0:59, when a keyboard riff is introduced, underneath the beat at first but alone soon thereafter during the breakdown at 1:06. After it is stated a few times, vocals are brought in at 1:25, at which point all the main elements of the piece have been introduced. By the time the next section begins, at 1:50, the band starts developing them.

2. Development

The development begins with a pared down version of the first riff, adding rhythmic complexity to the drum beat and bass line riff, until the keyboard riff, after being hinted at a few times, comes back in full at 2:22. The two themes are played together until 2:52, when a new variation is introduced: a play between the vocals and a simplified version of the keyboard riff, which goes on until 3:05, when they stop and only the beat and the bass, holding a single note, come in. This is the retransition; tension is built up, partly by introducing shortly after a busy hi-hat pattern in the main beat, until 3:20, which marks the beginning of the recapitulation.

3. Recapitulation

The recapitulation begins much like the exposition, only this time the bass line comes in before the guitars, introduced at 3:35, supported by additional sound design elements. The keyboard riff is hinted at many times as a motif that comes in at the end of the four-bar phrases until the end of the track, at 4:04, where everything drops out except for the guitar riff before stopping altogether at 4:13.

III. In Closing

There is a lot of detailed information to be found online on the sonata form, and while I encourage you to look it up and find out as much as you can, we’re not too concerned with the details here, as a lot of it simply wouldn’t apply to what we’re doing. The basic ideas behind the sonata form, and how themes are introduced and developed, are still as relevant today as they were 300 years ago, and provide us with a solid frame of reference to structure and organize our ideas. Keep in mind that you can apply this form not only to musical themes and ideas, but also to elements of a composition such as dynamics, timbral development, rhythmic elements, etc. The possibilities are virtually endless.

Synthesis Fall 2010

diskin2

One of the easiest ways to play a sound file in Csound is to use the diskin2 opcode. With it, you can loop samples, modulate the pitch, apply filtering and ring modulation, and more. You can go really far with just this mini-sampler. It does have its limitations; the approach of the Beat Mangler X example overcomes them, but requires a more advanced design, which we’ll cover at some later point. For many situations, though, you’ll find that diskin2 is the perfect solution.

The listening example uses the amen loop from the Beat Mangler X example, though you can easily use your own mono files by making changes to the score. diskin2 supports stereo files, but this example only works with mono samples.
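Under the hood, a sampler’s pitch control amounts to stepping a read pointer through the sample table at a variable rate. Here is a rough Python sketch of that idea, not diskin2’s actual code; diskin2 offers several interpolation modes, and plain linear interpolation is assumed here:

```python
def resample(samples, speed):
    """Play back `samples` at a different speed by stepping a read
    pointer through the table; speed 2.0 -> up an octave, 0.5 -> down."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between adjacent samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out

# Half speed doubles the length and lowers the pitch an octave.
print(resample([0.0, 1.0, 0.0, -1.0], 0.5))
```

The same mechanism explains why slowing a sample down also lowers its pitch: position and playback rate are coupled, unlike in more advanced time-stretching designs.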

Download diskin2.csd here.

Synthesis Fall 2010

Recontextualizing Ambient Music in Csound

For the first four weeks, we’ve covered much ground in terms of Csound basics and synthesizer fundamentals. Today, we begin transitioning into elements of computer music composition, starting with Recontextualizing Ambient Music in Csound by Kim Cascone.

There are few resources that approach introductory Csound composition as elegantly as Cascone’s chapter from The Csound Book. In it, Cascone provides a bit of history and personal background, techniques for composing in a computer music environment, and six instrument designs. There is one particular passage that I want to emphasize, as I think it can be of great use:

“I started studying instrument design by taking other composers’ instruments and drawing them out on paper in flowchart form. I took the scores and isolated a particular instrument by commenting out all other instruments except the one I wanted to listen to. I would then start to modify that instrument in various ways so I could hear the effect my code was having.”

Synthesis Fall 2010

Flat Drum


[Flat Drum block diagram] (click image for large version)

After two weeks of synthesizer fundamentals, now is a good time to do a quick review of what has been covered so far by putting these techniques into an actual working instrument. I designed Flat Drum for just this purpose. With the Flat Drum block diagram and code handy, go back and look through the material from the last couple of weeks. Which concepts are used in this instrument? Which are not? Are there any new opcodes? Do you see any familiar synthesizer design patterns? Is there anything in the code that you haven’t seen or don’t understand? etc.

Perhaps the most important thing you can do is to make a copy of the file, make changes to the values in the code, and listen to the results. Theory is useless without practice and hands-on experience.

Download flat_drum.csd here.

Synthesis Fall 2010

Random

The random number generator is one of the most useful and versatile tools in your synthesis toolkit. It can be used for everything from sound design to algorithmic composition. Beat Mangler X makes use of randomness by generating different splice and stutter patterns each time it is rendered. In Markov Experiment II, note nodes are chosen based on weighted odds, producing variations of a melody.

The listening example is fairly straightforward. The frequency of a triangle wave oscillator is modulated by a random unit generator that updates its value 4 times a second.
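That “update 4 times a second” behaviour is a classic sample-and-hold. Here is a hypothetical Python sketch of it; the function and parameter names are mine, not Csound’s:

```python
import random

def sample_and_hold(rate_hz, sr, duration, lo, hi, seed=0):
    """Generate a control signal that jumps to a new random value
    `rate_hz` times per second and holds it in between."""
    rng = random.Random(seed)
    samples_per_hold = int(sr / rate_hz)
    out = []
    value = rng.uniform(lo, hi)
    for n in range(int(sr * duration)):
        if n % samples_per_hold == 0:
            value = rng.uniform(lo, hi)  # pick a new held value
        out.append(value)
    return out

# One second at 4 updates/sec yields 4 distinct held values.
sig = sample_and_hold(4, 44100, 1.0, 220, 440)
print(len(set(sig)))
```

Feeding a signal like `sig` into an oscillator’s frequency input reproduces the stepped, burbling pitch movement heard in the example.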

Download random.csd here.

“Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.” — John von Neumann (source)

Note. Computers don’t really produce random numbers; they produce pseudorandom numbers.

Synthesis Fall 2010

Detuned Oscillators

A quick and easy way to create a thicker sounding synthesizer patch is to use two oscillators whose frequencies are slightly out of tune. With a minute amount of detuning, you can hear a bit of flange in the voice, as various frequencies are phased in and out through phase cancellation and phase reinforcement. A bit more detuning creates a chorus-like effect. Excessive amounts of detuning will cause the listener to hear two separate voices rather than a unified voice. This all depends greatly on the audio source(s), as different waveforms and samples will get different mileage out of this classic technique.
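The swelling and fading comes from the beat frequency between the two oscillators: summing sines at f and f + d Hz produces an amplitude envelope that cycles d times per second. A quick Python check of the underlying trig identity:

```python
import math

def detuned_pair(f, detune, t):
    """Sum of two unit-amplitude sines detuned by `detune` Hz."""
    return math.sin(2 * math.pi * f * t) + math.sin(2 * math.pi * (f + detune) * t)

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): the cosine term is a
# slow amplitude envelope, heard as `detune` beats per second.
f, d, t = 220.0, 1.0, 0.3
lhs = detuned_pair(f, d, t)
rhs = 2 * math.sin(2 * math.pi * (f + d / 2) * t) * math.cos(math.pi * d * t)
print(abs(lhs - rhs) < 1e-9)
```

With complex waveforms rather than bare sines, each partial beats at its own rate, which is where the richer flange and chorus textures come from.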

Download detuned_oscillators.csd here.

Synthesis Fall 2010

Filters

A filter in audio synthesis is “a device or process that removes from a signal some unwanted component or feature”. Filters are used all over the place, and have the ability to fully transform audio in a variety of ways.

Csound comes with several types of filters, though this example focuses on the four most common: low pass, high pass, band pass and band reject. A low pass filter removes or attenuates frequencies above a specified cutoff frequency, while allowing frequencies below the cutoff to “pass through.” A high pass does the opposite, allowing the highs to pass while removing or attenuating frequencies below the cutoff. A band pass filter can be thought of as a combination of the two, as it removes high and low frequencies, leaving a band of frequencies at the cutoff. The band pass has an additional parameter: the bandwidth, which determines how large that band will be. A wide band will allow more frequency information to pass through, while a very narrow band will nearly produce a sine wave. A band reject filter is the opposite of a band pass, as it removes frequencies at the cutoff.
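As a rough illustration of how a low pass works internally, here is a one-pole low pass in Python. This is a simplification for teaching purposes, not the actual implementation of any Csound opcode:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sr):
    """First-order IIR low pass: each output is a weighted blend of
    the new input and the previous output. A lower cutoff means more
    smoothing, which suppresses fast (high frequency) changes."""
    # feedback coefficient from the standard RC-filter analogy
    b = math.exp(-2 * math.pi * cutoff_hz / sr)
    out = []
    y = 0.0
    for x in signal:
        y = (1 - b) * x + b * y
        out.append(y)
    return out

# A constant (0 Hz) input passes through unchanged once the filter settles.
settled = one_pole_lowpass([1.0] * 2000, 1000, 44100)[-1]
print(round(settled, 3))
```

A high pass can be sketched from the same recurrence by subtracting the smoothed output from the input, and cascading the two gives the band behavior described above.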

Download filters.csd here.

The listening example plays bursts of white noise through these filters with various settings for the cutoff and bandwidth parameters. The first group of notes starts with a low pass fully open, with each new burst being filtered lower than the previous one. The second group of bursts is processed with a high pass. The filter starts open, then closes until only the high end is left.

The second two groups of notes maintain a cutoff of 440 Hz, while the width of the bands is changed. The band pass starts open, then narrows until only a dirty sine is left in the signal. Due to the loss of gain from this process, I’ve used the balance opcode to make up for the lost amplitude so you can hear it. Finally, the band reject removes frequencies at the cutoff, creating a hole in the white noise. In the last couple of notes, you can hear a distinction between the high noise and the low noise.

Synthesis Fall 2010

Pulse Wave and PWM

The square wave is one of the previously mentioned fundamental waveforms, though it also belongs to another class of waveforms: the pulse wave. The two defining characteristics of a pulse wave are its shape and its width. The shape refers to the fact that it has a flat top and a flat bottom. The width refers to the proportion of each cycle spent at the high level versus the low level. A width of 50% means the upper and lower portions are equal; this is the square wave. Anything other than 50% is no longer a square wave, but still qualifies as a pulse wave.

Different pulse widths produce different timbres. A square wave produces a big, round sound. As the pulse width moves towards 0% or 100%, it takes on a nasal-like quality. It personally reminds me of picking a string on a guitar: you get a bigger, rounder sound if you pick the middle of the string, and a much lighter, mid-rangy sound near the bridge. In the first part of the listening example, the width starts at 50% and is decremented by 5% for each new note until it reaches a width of 5%.

Furthermore, the width can be modulated in time just as we’ve done with amplitude and frequency. This is known as Pulse Width Modulation. At the end of the listening example, the width is modulated by a triangle LFO, producing a timbre that changes over time.
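A naive (not band-limited) pulse wave is simple to write down: the output is high while the phase sits inside the width fraction of a cycle and low otherwise, and PWM just makes that width a function of time. An illustrative Python sketch, with names of my own choosing:

```python
def pulse(phase, width):
    """Naive pulse wave: +1 for the first `width` fraction of each
    cycle, -1 for the rest. width=0.5 gives a square wave."""
    return 1.0 if (phase % 1.0) < width else -1.0

def triangle(phase):
    """Unipolar triangle LFO in [0, 1], used here to modulate the width."""
    p = phase % 1.0
    return 2 * p if p < 0.5 else 2 * (1 - p)

def pwm_sample(t, freq, lfo_freq):
    """Pulse width modulation: the LFO sweeps the width between 10% and 90%."""
    width = 0.1 + 0.8 * triangle(lfo_freq * t)
    return pulse(freq * t, width)

print(pulse(0.25, 0.5), pulse(0.75, 0.5))
```

Sweeping the width range (here a hypothetical 10% to 90%) changes how dramatic the timbral motion is; clamping it away from 0% and 100% avoids the wave collapsing to silence at the extremes.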

Download pulse_wave.csd here.

We’ve also encountered a special variant of the pulse wave in the 2600 Synthesis example from week two: the polypulse wave.

Note. The vco2 opcode used to generate the pulse wave in this example produces a band-limited version of the waveform. If you were to open the example in an audio editor, the tops and bottoms would not be flat. We’ll get into the concept of band-limiting later.

Synthesis Fall 2010