Introducing “Python Score” for Csound

PYTHON SCORE is a Python script for Csound score generation and processing. With it you will compose music easily and efficiently. No computer should be without one!

This effort has been weeks in the making, if you disregard the 10+ years I’ve spent building various Csound score utilities in Perl, Python, C and Java. Much of the focus of this project is on keeping the score familiar to Csounders, integrating Python with the Csound score using the CsScore bin feature, and removing as much unnecessary clutter as possible. For example, you won’t need to open and close any files in your programs; Python Score handles that for you. And I’m attempting to keep the point of entry as easy as possible: you’ll only need to know a tiny bit of Python to get started.

Python Score is currently part of the Csound CSD Python library, though it will receive its own repository in the future. Here are the highlights. More information will follow.

The score() function

The first priority of Python Score is to allow traditional Csound score composers to transition into this new score environment in a nearly seamless fashion. Csounders can bring the full breadth of traditional-score knowledge they’ve acquired over the years, and the score() function lets them do just that. The following example (test.csd) is a complete CsScore section in a Csound csd file:

<CsScore bin="python pysco.py">

score('''
f 1 0 8192 10 1
t 0 189

i 1 0 0.5 0.707 9.02
i 1 + .   .     8.07
i 1 + .   .     8.09
i 1 + .   .     8.11
i 1 + .   .     9.00
i 1 + .   .     8.09
i 1 + .   .     8.11
i 1 + .   .     8.07
''')

</CsScore>

A Csounder has successfully transitioned into this new environment once they’ve learned how to set up the call in the CsScore tag using the bin feature and to embed their traditional Csound score code in a score() function. Once this has taken place, the composer is only a step away from a plethora of new features to add to their arsenal of computer music tools.

The “cue” Object

Whereas the score() function makes it easy for Csounders to make the leap into Python Score, the “cue” object is what makes the leap worthwhile. That, and all the goodness Python brings to the mix.

The “cue” object enables composers to place events in time using nested programming constructs, such as for-loops and if-then conditional branching. This allows composers to, for example, think in time local to a measure rather than in the absolute time of the global score.

The “cue” object works by translating the start times in p-field 2. Consider the following line of code:

score('i 1 0 4 0.707 8.00')

Looking at p-field 2, the event takes place at beat 0. By using the “cue” object with Python’s “with” statement, we can move the start time of the event without ever touching p-field 2. The following block of code plays the same event at beat 64 in the score.

with cue(64):
    score('i 1 0 4 0.707 8.00')

The “cue” is a bit like a flux capacitor, as it makes time travel possible. At minimum, it saves a composer time, and lots of it, since small or large sections of score can be moved in time without changing each and every value in the p-field 2 column. Notes, licks, phrases, bars, sections, entire compositions… all of these time-based musical concepts benefit from the organizational power of the “cue”.

The following example from test10.csd shows the first three measures of Bach’s Invention 1. The beginning of each measure is designated with a “with cue(t)” statement. Since the native unit of time in the Csound score is the beat and the piece is in 4/4, the values passed to the “cue” object are multiples of 4.

with cue(0):
    score('''
    i 1 0.5 0.5 0.5 8.00
    i 1 +   .   .   8.02
    i 1 +   .   .   8.04
    i 1 +   .   .   8.05
    i 1 +   .   .   8.02
    i 1 +   .   .   8.04
    i 1 +   .   .   8.00
    ''')

with cue(4):
    score('''
    i 1 0 1    0.5 8.07
    i 1 + .    .   9.00
    i 1 + 0.25 .   8.11
    i 1 + 0.25 .   8.09
    i 1 + 0.5  .   8.11
    i 1 + .    .   9.00

    i 1 0.5 0.5 0.5 7.00
    i 1 +   .   .   7.02
    i 1 +   .   .   7.04
    i 1 +   .   .   7.05
    i 1 +   .   .   7.02
    i 1 +   .   .   7.04
    i 1 +   .   .   7.00
    ''')

with cue(8):
    score('''
    i 1 0 0.5 0.5 9.02
    i 1 + .   .   8.07
    i 1 + .   .   8.09
    i 1 + .   .   8.11
    i 1 + .   .   9.00
    i 1 + .   .   8.09
    i 1 + .   .   8.11
    i 1 + .   .   8.07

    i 1 0 1 0.5 7.07
    i 1 + . 0.5 6.07
    ''')

The “cue” object also acts as a stack, meaning you can nest multiple “with cue(t)” statements. The following score event happens at beat 21.05, that is, 16 + 4 + 1 + 0.05.

with cue(16):
    with cue(4):
        with cue(1):
            with cue(0.05):
                score('i 1 0 1 0.707 8.00')
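Because the “cue” object works with ordinary Python control flow, repeating or displacing material becomes a matter of a loop. Here is a minimal sketch (the note itself is arbitrary) that places the same one-beat figure at the start of four consecutive measures:

# Place the same one-beat figure at the start of four consecutive 4/4 measures.
for measure in range(4):
    with cue(measure * 4):
        score('i 1 0 1 0.707 8.00')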

P-field Converters

Csound is loaded with value converters. However, all of these exist on the orchestra side of Csound, and there is currently no Csound mechanism for applying value converters in the score, unless you count macros, which are limiting. Two functions have been created to let composers apply p-field converters, either from an existing library or of their own making: p_callback() and pmap().

The p_callback() function registers a user-selected function that is applied to a chosen p-field of a chosen instrument. The registered function is applied when the score() function is called.

The pmap() function works similarly, except that it applies a user-selected function to everything already written to the score. Think of it as a post-score processor, while p_callback() is a pre-score processor.

The example (test2.csd) demonstrates two converters: conv_to_hz(), which translates conventional pitch names into Hz, and dB(), which translates decibel values into standard amplitude values. Both of these are located in convert.py.

<CsScore bin="./pysco.py">

p_callback('i', 1, 5, conv_to_hz)

score('''
f 1 0 8192 10 1
t 0 120

i 1 0 0.5 -3 D5
i 1 + .   .  G4
i 1 + .   .  A4
i 1 + .   .  B4
i 1 + .   .  C5
i 1 + .   .  A4
i 1 + .   .  B4
i 1 + .   .  G5
''')

pmap('i', 1, 4, dB)

</CsScore>
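Writing your own converter is also an option. The following is a hypothetical sketch, assuming a converter simply receives a single p-field value and returns its replacement, in the same spirit as conv_to_hz() and dB():

def halve(value):
    # Hypothetical converter: scale a numeric p-field value by 0.5.
    return float(value) * 0.5

# Pre-process p-field 4 of instrument 1 as each score() call is made...
p_callback('i', 1, 4, halve)

# ...or, alternatively, post-process p-field 4 of everything already in the score:
# pmap('i', 1, 4, halve)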

What else?

The functions presented so far are just the basic mechanisms included in Python Score to help solve specific score-related issues. Beyond these, there is Python itself. Having a true, mature scripting language at your disposal opens up score creation and processing in ways that Csound could never manage on its own. What Python offers will be the topic of many follow-up posts and examples to come.

Sampler Concrete


Photo by Carbon Arc. Licensed under Creative Commons.

First, I want to welcome aboard Jean-Luc Sinclair. As part of his NYU Software Synthesis class, he has graciously decided to share the articles he is writing for his students. His first contribution, Organizing Sounds: Musique Concrete, Part I, has already proven to be the most popular post here at CodeHop. Last year, before The Csound Blog became CodeHop, Jean-Luc wrote another amazing piece, Organizing Sounds: Sonata Form, which I highly recommend. Thank you, Jean-Luc!

Now on to today’s example. (Get sampler_concrete.csd)

Many tape techniques are simple in nature and easily mimicked in the digital domain. After all, a sampler can be thought of as a high-tech, feature-endowed tape machine. A more apt comparison might be a waveform editor such as Peak, WaveLab or Audacity.

I’ve designed a Csound instrument called “splice” that is about as basic as it gets when it comes to samplers. My hope is that the simplicity of the instrument will bring attention to the fact that many of the tape concrete techniques mentioned in Jean-Luc’s article are themselves simple.

Let’s take a look at the score interface to “splice”:

i "splice" start_time duration amplitude begin_splice end_splice

The start time and duration are both standard parameters of a score instrument event. Three additional parameters set the amplitude, the point (in seconds) where the splice begins within the sample, and the point (in seconds) where it ends.
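For instance, a few hypothetical score lines (the levels and splice points below are made up, and I’m borrowing Python Score’s score() function from the post above purely for presentation) might exercise the instrument like this:

score('''
i "splice" 0 2 0.8 17.0 19.0  ; play a two-second slice of the sample
i "splice" 2 2 0.8 17.0 19.0  ; repeat it back-to-back: a "tape" loop
i "splice" 2 2 0.4 24.0 26.0  ; layer a second, quieter slice on top
''')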

With this short list of instrument parameters, the following techniques are showcased in the Csound example: Splicing, Vari-speed, Reversal, “Tape” Loop, Layering, Delay and Comb Filtering.

Continuing Schaeffer’s tradition of using recordings of trains, I’m using a found sound of the Manhattan subway that I discovered on SoundCloud. The recording is approximately 30 seconds in length. Most of the splicing in the examples takes place between 17 and 26 seconds into the recording. Here are the results.

With this one simple instrument, it is entirely conceivable to compose a complete piece in the style of classic tape music.

Organizing Sounds: Musique Concrete, Part I

 

I. Musique Concrete and the emergence of electronic music

Musique Concrete is nothing new. It was pioneered by Pierre Schaeffer and his team in the 1940s at the Studio d’Essai de la Radiodiffusion Nationale in Paris, an experimental studio initially created to serve as a resistance hub for radio broadcasters in occupied France. While Musique Concrete might not be anything new today, at the time it represented a major departure from traditional musical paradigms. By relying entirely on recorded sounds (hence the name “concrete”, as in ‘real’) as a means of musical creation, Schaeffer opened the door to an entirely new way of not only making, but also thinking about, music. It represented a major push in a number of new directions.

Timbre suddenly became as important a musical dimension as pitch had been up until then, something that composers like Edgard Varese had long been thinking and writing about. It also paved the way for the emergence of new compositional forms and structures. As Pierre Boulez pointed out in “Penser la musique aujourd’hui”, musical structures were traditionally perceived by the listener as a product of the melody. By removing the melody altogether, and working instead with sound objects, Schaeffer became a bit of an iconoclast. As he himself pointed out in his “Traité des objets musicaux”, the composer is never really free: the choice of his or her notes is based upon the musical code that the composer and the audience have in common. When Musique Concrete was invented, the composer moved a step ahead of his audience, and was, to some extent, liberated.

One of the earliest pieces of the genre, and perhaps the most famous to this day, is Schaeffer’s own “Étude aux chemins de fer”, in which the composer mixed a number of sounds recorded from railroads, such as engines and whistles, to create a unique and truly original composition.

You can listen to the piece here: http://www.synthtopia.com/content/2009/11/28/pierre-schaeffer-etude-aux-chemins-de-fer/

By today’s standards, the techniques used by the pioneer of the genre were rudimentary at best, yet they were, and remain, crucial tools of electronic music creation to this day. By taking a look at these techniques, and applying them to a computer music language such as Csound, we can not only gain a better understanding of this pivotal moment in music history, but also deepen our knowledge of sound and composition.

II. Concrete Techniques

The early composers of Musique Concrete mostly worked with records, tape decks, tone generators, mixers, reverbs and delays. Compared to the tools available to the computer musician today, it’s a rather limited palette indeed. This, however, forced composers to be much more careful in the selection of their source materials (mostly recordings, of course) and far more judicious in the use of the processes applied. Using recordings as the main source of sounds confronts composers with decisions early in the compositional process, decisions that have profound consequences for the final piece.

1. Material Selection

While perhaps a bit reductive, the compositional process could be thought of as the selection and combination of various materials. When working with sound objects, the selection process is maybe even more crucial.

It could easily be argued that this process begins at the recording stage. If you happen to be recording your own material, the auditory perspective you choose will have a profound impact on the outcome of the sound. As Jean-Claude Risset pointed out in the analysis of his 1985 piece “Sud”, the placement and choice of the microphone will hugely change the sound itself. For instance, placing the microphone very close to the source will have a magnifying effect on the sound, while moving it back a bit will give a broader view of the context within which the sound is recorded, allowing more ambient sounds and atmospheres to seep in. This is something audio engineers have been very aware of for a long time, but that often gets overlooked by computer musicians. I’m quite fond of small microphones, such as lavalier mics, which allow the engineer to place them very close to the sound source, in places where a traditional microphone will not fit. This makes for some very interesting results. For instance, a lav mic placed right below a rotating fan will make it sound like a giant machine, shaking and rattling as if it were 60 feet tall inside a giant wind tunnel. As always, experimentation and careful listening are key.

If you are working with already recorded material, an interesting approach is to work with sounds that are different but evoke similar emotions. This approach was favored by the American composer Tod Dockstader, who in his 1963 piece “Apocalypse” used a recording of Gregorian chant as a vocalization over the slowed-down sound of a creaky door opening and closing. Dockstader came from a post-production background, and perhaps it is no accident that Schaeffer had a background in broadcasting and engineering as well.

You can listen to an excerpt of Apocalypse here: http://www.youtube.com/watch?v=TYabnQctxpo

This technique, of using very different sounds that evoke similar or complementary emotions, is also often used by film sound designers.

Star Wars’ sound designer Ben Burtt often speaks of this in his process. By working with familiar sounds, combining them in unexpected ways and putting them to picture, he has been able to create some of the most successful and iconic sounds in the history of film.

2. Sound techniques and manipulations

While the technology available to the pioneers of musique concrete was fairly primitive, composers managed to come up with a number of creative methods for sound manipulation and creation. A non-exhaustive but fairly comprehensive list of these would include:

– Vari-speed: changing the speed of the tape to change the pitch of the sound.

– Reversal: playing the tape backwards.

– Comb Filtering: by playing a sound against a slightly delayed version of itself, various resonant frequencies are brought in or out.

– Tape loops: in order to create loops and grooves out of otherwise non-rhythmic material, composers would repeat certain portions of a recording.

– Splicing: to change the order of the material, or insert new sounds within a recording.

– Filtering: to bring different frequencies of a sound in or out and change its quality and texture.

– Layering: either done by recording multiple sources down to a new reel or by mixing them in real time via a mixing board.

– Reverberation, delay: used to create a sense of unity, or fusion, between sound sources coming from different origins, and a great way of superimposing a new sense of space on an existing recording.

– Expanded-compressed time: by slowing down, or speeding up then reversing the direction of, a sound.

– Panning: allowing the composer to place the sound within a stereo or multichannel environment.

– Analog Synthesis: although the genre was mostly based on recorded sounds, composers sometimes inserted tones and sweeps from oscillators in their compositions.

– Amplitude modulation: often done by periodically varying the amplitude of a sound or applying a different amplitude envelope over it.

– Frequency Modulation: although frequency modulation as a synthesis technique was discovered long after the beginnings of tape music, vibrato was a well-known technique long before then.

In the next installment of this article, we will look at practical ways to apply these techniques to Csound and create our own musique concrete etude. In the meantime, I encourage you to listen to more music by the composers mentioned above, and as always, experiment, experiment, experiment.


Csound Group at SoundCloud

The Csound Group at SoundCloud is a collection of user submitted tracks that are created with the Csound computer music language.

At the time of posting this, there are 39 tracks listed. I’d like to see this number grow. So if you have anything of interest, whether it be a composition, improvisation, or patch, do add it to the group. Providing it’s built with Csound, of course.

Tempo-Synced LFO Wobble Bass

There is a simple but highly useful hack I discovered a few years ago that makes it possible to pass the current tempo from the score to an instrument using p3. And since dubstep is all the rage these days, I’ve designed a practical example in which a tempo-synced LFO modulates the cutoff of a low-pass filter to create that “wobbly” bass effect.

Download the code here.

Passing the tempo of the score to an instrument is a fairly straightforward process. First, always set p3 to 1; this value of 1 is translated into seconds-per-beat (SPB) when the score is preprocessed. Second, use p4 for the desired duration. Third, in the instrument code, the following two lines obtain the tempo in SPB from p3 and then reset the duration of the instrument to the desired length:

ispb = p3 ; Seconds-per-beat. Must specify "1" in score
p3 = ispb * p4 ; Reset the duration

Now that we have the tempo, we can create a low frequency oscillator that is synced to it. To create that dubstep wobble effect, p7 is designated as the division of the note. We need to take this p7 value and translate it into Hz using the SPB value:

idivision = 1 / (p7 * ispb) ; Division of Wobble

Plugging this value into the frequency parameter of an oscillator yields a tempo-synced LFO. (Some drift may occur over long periods of time.)

klfo oscil 1, idivision, itable

Try changing the tempo in the score, and you’ll notice that the wobbles stay consistent relative to the tempo.
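As a quick sanity check, here is the same arithmetic in plain Python (the tempo and division values are just examples, and I’m assuming p7 is expressed in beats):

tempo = 140.0            # score tempo, e.g. "t 0 140"
spb = 60.0 / tempo       # seconds per beat -- what a p3 of 1 becomes after preprocessing
division = 0.5           # p7: a wobble every half beat
lfo_hz = 1.0 / (division * spb)
print(lfo_hz)            # roughly 4.67 Hz at 140 BPM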

Csound Inspired Art by Michael Orr

By Michael Orr
Artist’s Website

Here is Michael’s profile from his August Art Opening:

“Just a few years before Michael Ladd Orr was born, Alvin Toffler introduced the concept of “future shock”. The concept, and the book of the same name, proposed we were entering an age when the future was arriving “prematurely” – when one could stay in one place and the culture around him/her would change so rapidly that it would have the same disorienting effect as simply moving to a foreign culture – when the rate of technological advancement would increase exponentially until the average person simply wouldn’t be able to keep up. Orr’s work often hints at these notions, whether he intends this or not is up for discussion. His art studio and tools are completely mobile at all times so he can create on the spot and in the moment. He may post a newly created image on the web or drop it in the mail. There is no time to dictate meaning, only to reinterpret the numerous images and impressions as they go whizzing by. Consumerist culture’s infinite supply of marketing images collides with arbitrary hand-drawn patterns. Ordinarily warm and vibrant yellows and oranges become jarring against nonsensical shapes and random household items. There is the sense of an artist trying to inject a touch of restraint and familiarity into a machine that is insatiable and alien. Collage becomes collision, but all the while a childlike kind of wonder and playfulness seeks to burst through the surface. Orr, 35, was born and raised in and around Atlanta, where he continues to live, work, and ply his craft.”

Event_i

The event opcodes are some of my favorites in all of Csound. They are highly versatile in their application, as they can be used for algorithmically generating notes, transforming score events, creating multiple interfaces to a single instrument, triggering grains in a granular patch, etc. The list goes on and on.

Today’s example uses the event_i opcode to generate multiple score events from a single score event. The basic rundown is this: each instr 1 event generates 5 events for instr 2. Each of these 5 events is staggered in time, with the delay between notes set by the idelay parameter. The pitches also follow a fixed half-step sequence of [0, 3, 7, 5, 10] relative to the base pitch specified in the ipch parameter. Take a listen.

Download event_i.csd
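For comparison, here is a rough Python Score sketch of the same generative idea, using the half-step sequence described above and the 'ipch * 2 ^ (x / 12)' pitch formula from Exercise 1 below. The p-fields of instr 2 are my own assumption (p4 amplitude, p5 frequency in Hz):

ipch = 440.0    # hypothetical base pitch in Hz
idelay = 0.25   # hypothetical delay between the staggered events
for n, x in enumerate([0, 3, 7, 5, 10]):
    with cue(n * idelay):
        score('i 2 0 1 0.5 %f' % (ipch * 2 ** (x / 12.0)))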

Here are a couple of exercises to help get you going:

Exercise: 1

Create a new note pattern sequence. The last parameter of event_i is used to set the relative pitch. Set the value for x in ‘ipch * 2 ^ (x / 12)’.

Exercise: 2

Alter the rhythm. The third parameter controls how far into the future a particular event is triggered. The example uses integers, though using floats such as 1.5 and 3.25 can completely transform the character of the output.

Exercise: 3

Write a short etude with your modified instrument.

Silence

This is a block diagram of all recent activity on this blog. But seriously, I’ve been wicked busy lately, and things are just now starting to slow down enough that I can pick up where we left off with Jean-Luc’s NYU Synthesis class. Jean-Luc and I have been in perpetual contact the whole time, and we’ve devised a plan to make up for the missing weeks. Starting tomorrow, I’ll start blogging again about the current topics being covered in class. Once the semester is over, JL and I plan to go back and fill in the blanks. There is still plenty of Csound information in the works.

FM Synthesis with foscil

In the post “Low Frequency Oscillator,” an oscillator is used to modulate the frequency of a second oscillator; this is known as frequency modulation. By substituting the low frequency oscillator with a high frequency oscillator, we get frequency modulation (FM) synthesis, which produces harmonically rich spectra with as few as two sine wave oscillators.

This technique was first applied to music by American composer John Chowning at Stanford University in 1967. FM was a real game changer. Since FM could produce a wide range of musically interesting sounds with very little computation, it helped pave the way for computer music to transition from an institutional commodity to a viable mainstream technology; primitive digital devices could fiscally and audibly compete with physical and analog instruments. The Yamaha DX7, an FM synthesizer, was released in 1983 and became the “first commercially successful digital synthesizer.” If memory serves me correctly, I once heard that long before I was a student at Berklee College of Music, their computer music program was built around FM synthesis.

A Csound port of Chowning’s most famous musical work “Stria” is included with QuteCsound as one of the examples.

FM synthesis is a broad topic that would be impossible to cover in a single blog post. I encourage you to read chapter 9 of the Csound Book, “FM Synthesis and Morphing in Csound: from Percussion to Brass” by Brian Evens.

To get you started with some basic FM programming, I created an example CSD that uses the foscil opcode, a self-contained FM synthesizer. You can immediately start plugging in various values to hear the results. Here’s a quick rundown of the p-fields for the instrument:

  • p4 — Amplitude
  • p5 — Pitch
  • p6 — Carrier ratio. Changing this will generally affect the base frequency of the note played, as well as the timbre. This works in tandem with the modulator ratio.
  • p7 — Modulator ratio. Changing this will affect the timbre.
  • p8 — Index of modulation. This determines how much modulation is applied. A value of 0 applies no modulation, resulting in a sine wave. The higher the value, the richer the spectrum and the brighter the timbre. The index is shaped by an envelope, so each note starts at the index supplied here, then fades to 0 by the time the note ends.
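To get a feel for how the carrier and modulator ratios shape the spectrum, here is a small, illustrative calculation in Python. The numbers are arbitrary; with foscil, both ratios are multiplied by the note frequency, and partials appear at the carrier frequency plus and minus integer multiples of the modulator frequency:

pitch_hz = 220.0              # hypothetical note frequency
icar, imod, index = 1, 2, 3   # hypothetical values for p6, p7 and p8

carrier = pitch_hz * icar     # carrier frequency: 220 Hz
modulator = pitch_hz * imod   # modulator frequency: 440 Hz

# Significant sidebands extend out to roughly (index + 1) pairs.
for k in range(index + 2):
    print(carrier + k * modulator, abs(carrier - k * modulator))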

[kml_flashembed movie="http://player.soundcloud.com/player.swf?url=http%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F6219223&show_comments=true&auto_play=false&color=cc0000" width="550" height="81" wmode="transparent" /]Listen @ SoundCloud

Download foscil.csd here.

I personally have a strong association in which, every time I hear certain classes of FM sounds, I can’t help but think of the Atari arcade classic Marble Madness. The sound chip inside the machine was produced by Yamaha and “is similar to a Yamaha DX7 synthesizer.” It was also the first Atari game to use that chip, which probably explains why I think of this particular game; I spent much of my childhood in various arcades.

[youtube]http://www.youtube.com/watch?v=N_NMQT1G_S8[/youtube]


Organizing Sounds: Sonata Form

When working with Csound, or in the computer music genre in general, organizing the sounds and grooves we create into finished compositions can sometimes be a bit tricky, or perhaps more difficult than when working with more traditional musical material. We don’t always, if at all, deal with melodic, harmonic or rhythmic material in the classic, Western, sense of these terms. Therefore we sometimes lack a framework, or context, within which to develop our material from ideas to complete pieces. In this series of articles we will discuss various approaches to composition and form specific to the context of electronic and computer music and explore various approaches to the organization of musical data.

Whether we think of ourselves as artists, composers, sound designers or researchers, we are all confronted with this issue at some point. But working with new media and cutting-edge techniques does not mean we should forget our legacy, and the issue of form is certainly not a new one. A tried and true classic is the Sonata form, which, as we shall see, can be of tremendous help.

“I alter some things, eliminate and try again until I am satisfied. Then begins the mental working out of this material in its breadth, height and depth.” — Ludwig van Beethoven

I. The Sonata Form

The word Sonata comes from the Italian suonare, ‘to sound’, implying music to be ‘sounded’ through instruments, as distinct from the cantata, a piece to be sung. While it is difficult to establish with certainty when the Sonata form was invented, it became very popular in the 18th century as the predominant form used in instrumental music.

A sonata consists of three sections:

1. Exposition
2. Development
3. Recapitulation

Traditionally, composers introduce two opposing or complementary themes in the exposition; the first theme establishes the home key, and a transitional bridge leads us to the second theme, often in the dominant key.

The development tends to vary in format and length, but is used to build tension or interest by developing the themes from the exposition along with new material. Traditionally, the development begins in the key in which the exposition ended, only to go through a number of modulations while building up melodic complexity before arriving at the last part of the development: the retransition, a section often in the dominant key intended to prepare us for the next section and the return to the tonic.

The recapitulation is a modified version of the exposition, following a similar structure (theme 1, bridge, theme 2 and coda), but with some variations introduced to differentiate it from the exposition. The main purpose of the recapitulation is to release the tension of the development and bring a sense of closure and continuity.

Or in condensed form:

AB – C – AB’

As it turns out, the sonata form is still very much alive today, even and perhaps especially in electronic music, which tends to put less of a focus on vocals and is therefore ideally suited to the form.

II. Case Study: Digitalism

Let’s take a listen to the track ‘Idealistic’ by the band Digitalism, released on their album Idealism on May 9th, 2007. The German duo has enjoyed worldwide success, and this track is fairly representative of their work.

‘Idealistic’ by Digitalism:

[youtube]http://www.youtube.com/watch?v=lFYVK2IAXiw[/youtube]

It is also a perfect example of the sonata form being used in electronic music.

1. Exposition

The track opens with a strong beat, supported by a steady guitar riff, and soon after an abrasive bass line is introduced (bar 9, or 0:21 into the track), which goes on until 0:59, when a keyboard riff is introduced, underneath the beat at first but alone soon thereafter during the breakdown at 1:06. After being stated a few times, vocals are brought in at 1:25, at which point all the main elements of the piece have been introduced. By the time the next section begins, at 1:50, the band starts developing them.

2. Development

The development begins with a pared-down version of the first riff, adding rhythmic complexity to the drum beat and bass line riff until, after being hinted at a few times before that point, the keyboard riff comes back in full at 2:22. The two themes are played together until 2:52, when a new variation is introduced: this time it is a play between the vocals and a simplified version of the keyboard riff, which goes on until 3:05, when they stop and only the beat and the bass, holding a single note, come in. This is the retransition; tension is also built by introducing, shortly after, a busy hi-hat pattern in the main beat until 3:20, which marks the beginning of the recapitulation.

3. Recapitulation

The recapitulation begins much like the exposition, only this time the bass line comes in before the guitars, which are introduced at 3:35, supported by additional sound design elements. The keyboard riff is hinted at many times as a motif that comes in at the end of the four-bar phrases until the end of the track at 4:04, where everything drops out except for the guitar riff before stopping altogether at 4:13.

III. In Closing

There is a lot of information to be found online covering the Sonata form in great detail, and while I encourage you to look it up and find out as much as you can, we’re not too concerned with the details here, as a lot of it simply wouldn’t apply to what we’re doing. The basic ideas behind the Sonata form and how themes are introduced and developed, however, are still as relevant today as they were 300 years ago, and provide us with a solid frame of reference to structure and organize our ideas. Keep in mind that you can apply this form not only to musical themes and ideas, but also to elements of a composition such as dynamics, timbral development, rhythmic elements, etc. The possibilities are endless.
