I’ll call this piece “Stereo Spectacular.” It uses a bunch of clips from stereo demonstration records from the 1950s and 1960s, playing off a Disting Mk3 and going into a Dervish DSP, with Beatstep Pro and Prizma voices.
My name is Henry Birdseye, and I love sound. I have found that young minds are receptive to seeing, hearing and learning about electronic music, or “synthesis” for short. Using my rack of contemporary gear, an oscilloscope and a pair of speakers, I’ll show off the relationship between the waveforms we create and the sounds that result, and harness the power of the various parts to make music from scratch. Students are invited and encouraged to join me in twiddling synthesizer patches to make new sounds and melodies.
Oh, and this is free.
I love to teach, speak publicly, and share knowledge about something interesting with open minds. Adults and students have enjoyed my demos at the Henry Ford Museum and at Summer Camp.
It is my love of sound that I seek to share.
Of course, you and I hear sounds all the time, every second of every day. Unless, that is, you are deaf.
Those sounds we hear are made from sine waves. Here’s one now:
The spacing between the peaks determines the sine wave’s frequency: the closer together the peaks, the more cycles per second, and the higher the frequency.
Here is a low-frequency sine wave:
And here is one with a higher frequency, or pitch:
In the real world, you would rarely hear a single sine wave by itself, unless you were in a lab. What you do hear all the time are dozens, maybe hundreds, of sine waves all blended together into a much more complex waveform: some sines lasting for a fraction of a second, others growing louder or fading slowly.
When we hear a waveform made of a few sine waves whose frequencies are in certain mathematical ratios, such as 3:2 or 5:4, we humans find them pleasing. When we arrange those waveforms into different pitches, we have made music.
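That blending of sine waves in simple integer ratios can be sketched in a few lines of Python. This is a toy illustration, not any particular synthesizer: the 4:5:6 ratios of a major triad, the 220 Hz root, and the 8 kHz sample rate are all arbitrary choices for the example.

```python
import math

SAMPLE_RATE = 8000  # samples per second; a modest rate keeps the example small

def sine_wave(freq_hz, n_samples, amplitude=1.0):
    """Generate n_samples of a sine wave at freq_hz."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

def mix(*waves):
    """Blend several waves by summing them sample by sample."""
    return [sum(samples) for samples in zip(*waves)]

# A major triad: three sines whose frequencies are in the ratio 4:5:6.
root = 220.0
chord = mix(sine_wave(root,       SAMPLE_RATE),   # 220 Hz
            sine_wave(root * 5/4, SAMPLE_RATE),   # 275 Hz
            sine_wave(root * 3/2, SAMPLE_RATE))   # 330 Hz

print(len(chord))  # one second of audio: 8000 samples
```

Writing `chord` out to a sound file and listening to it gives a plain, organ-like major chord; detuning one of the three sines away from its simple ratio makes the blend noticeably rougher.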
So much do I love making those sounds with those specific mathematical relationships, that I have equipment to do it from scratch. This is no big deal. If I banged a stick on a bucket, that would count as “making it from scratch.” So, to narrow it down a little bit, I have electronic devices that make waveforms that translate to sound. I can make musical sequences that people find pleasing, or I can synthesize the sound of someone banging on a bucket with a stick. I can also take the sound of a banged bucket, and transform it further into something that sounds completely different.
That pile of gear over there is my synthesizer. Specifically a voltage-controlled, analog modular synthesizer.
Modular, because there are quite a few modules that I can connect to other modules with those little colored patch cords. Analog, because the signals that route between the modules are simple voltages, unlike in a computer. Voltage-controlled, because practically every knob in it can be remote-controlled, in a sense, by voltages from other modules.
I love modular synthesizers. They have been my interest and my passion ever since I heard the classical record “Switched-On Bach.” One of the best-selling classical recordings of all time, “S-O-B” demonstrated that synthesizers could be played to make “real” music, and not just a bunch of bloopy, beepy sounds.
My own system has been growing for a couple of years. With a variety of modules that generate, modify, control and otherwise mangle sound, I am looking for audiences for my class and demonstration, which fuses mathematics, physics and music. I also bring printed handouts, with notes similar to those at the bottom of this page, and show students how they can enjoy this technology at home, on their own computers, for free. I can do all of this in a single class period.
I am fairly prolific, making pieces of original music and covers of classical works. My SoundCloud page is where to find the mother lode of my synthesizer music: https://soundcloud.com/dibcadbu.
The podcast “Electronic Fusion” recently featured my music.
You can see several examples of my performances on my YouTube page. Here’s one of my favorite works:
I use oscilloscopes liberally in my demonstrations. Students have enjoyed seeing a waveform in its raw, unfiltered state undergo changes to its shape, and thus to the sound it generates. Students are invited to come up and make changes to sounds using a Theremin-like controller.
The classes begin with examples of pivotal recordings: Wendy Carlos’ “Switched-On Bach,” Morton Subotnick’s “Until Spring,” and Stockhausen’s “Hymnen,” which is composed of national anthems recorded off shortwave radio. Then I’ll play a quick piece on the modular. I’ll explain what the individual components, called “modules,” do, how they interact when they are patched together using cables, and how each contributes to the song.
If you are an educator in math, science or music, I think you will find this an offbeat and unexpected departure from the usual rhythm of a “normal” class.
When machines of this kind were first invented, there were two very different attitudes about their use. One school of thought held that we had a new instrument that would open up new frontiers for music as we had always known it. This “real” approach to electronic music was called “East Coast,” because the machine on the cover above was invented by Robert Moog in New York state. Across the country at about the same time, 1965 or so, Don Buchla wanted to make a synthesizer that redefined music altogether… bloopy beepy sounds allowed. This “West Coast” music was championed by composers such as Morton Subotnick on his “Silver Apples of the Moon.” The Moog and Buchla synthesizers were “modular,” meaning that users could buy modules according to their needs. Some modules generated sounds, others modified sounds, and still others generated the voltages used to control pitches and durations. They were “voltage controlled.”
But, the Moogs and the Buchlas were not compatible. They used differing standards for their control voltages. The Buchla synthesizers didn’t even have traditional keyboard controllers.
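A quick sketch shows why those differing standards mattered. Moog scaled pitch at 1 volt per octave, while Buchla used 1.2 volts per octave; the 0-volt base frequency below is an arbitrary assumption for illustration.

```python
BASE_HZ = 261.63  # middle C at 0 volts -- an arbitrary reference for this sketch

def moog_freq(cv_volts):
    """Moog convention, 1 V/octave: each added volt doubles the frequency."""
    return BASE_HZ * 2 ** cv_volts

def buchla_freq(cv_volts):
    """Buchla convention, 1.2 V/octave: it takes 1.2 V to double the frequency."""
    return BASE_HZ * 2 ** (cv_volts / 1.2)

# The same 1-volt control signal lands on two different pitches:
print(round(moog_freq(1.0), 2))    # 523.26 -- exactly one octave up
print(round(buchla_freq(1.0), 2))  # about 466 Hz -- a minor seventh up instead
```

So a sequencer calibrated for one system would play the wrong intervals on the other, which is part of why the two families of instruments stayed in separate worlds.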
An original Buchla “Electric Music Box.”
And in 1960s money, these things were expensive! Due to the high cost, $10,000 to $50,000 for a complete system, very few private individuals owned one. The main customers at the time were universities, recording studios and a few musical groups.
In time, more inventors created new synthesizer designs and new types of modules, and an industry was born.
So, what were these monsters actually doing, you ask? Modules were interconnected with cables; audio signals and control voltages flowed between them, changing the sound of other modules, and the resulting configuration was called a “patch.” Patches could be extremely simple or devilishly complex.
All of this relates to the nature of sound. Everything you hear can be broken into a spectrum of sine waves. A single sine wave sounds as a single pitch; it has no harmonics, or overtones.
Modules called “oscillators” generate waveforms, and control voltages set their pitch.
The more complex the waveform from these oscillators, the more harmonics it contains. These VCOs (voltage-controlled oscillators) usually output a variety of waveforms, each with different harmonic content.
In this graph, we’re looking at the sound spectrum of a few waveforms. The large spike on the left is the fundamental frequency, and the harmonics fall off in amplitude as they rise in frequency. The top waveform, for example, is a triangle wave. Above the fundamental (f), there are harmonics at three times the fundamental’s frequency (3×f), then 5×f, 7×f, 9×f and so on to infinity, though human ears cannot detect frequencies above about 20,000 cycles per second. The second waveform is a square wave, very easy to make electronically. Third is a sawtooth wave, with more harmonics than a square wave: a sawtooth contains both even and odd harmonics, while a square wave has only the odd ones. The bottom one is a pulse wave, with a harmonic profile similar to a sawtooth’s but a different sound.
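This works in reverse, too: you can build a complex waveform by adding sine waves together. Here is a small Python sketch that approximates a square wave from its odd harmonics, whose amplitudes fall off as 1/k per the standard Fourier series; the 100 Hz frequency and harmonic count are arbitrary choices for the example.

```python
import math

def square_from_sines(freq_hz, t, n_harmonics):
    """Approximate a square wave at time t (seconds) by summing odd harmonics.
    Odd harmonic k has amplitude 1/k, per the Fourier series of a square wave."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * freq_hz * t) / k
    return (4 / math.pi) * total  # scale so the ideal wave swings between -1 and +1

# A quarter of the way into one cycle of a 100 Hz wave, an ideal square
# wave sits at +1; with 200 harmonics the sum gets very close.
value = square_from_sines(100.0, t=0.0025, n_harmonics=200)
print(round(value, 2))  # 1.0
```

With only a handful of harmonics, the result sounds (and looks, on a scope) like a rounded, ringing square; adding more harmonics sharpens the corners, which is exactly the “more harmonics, brighter sound” relationship in the graph above.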
A simple patch
The simplest patch would be a keyboard sending its pitch control voltage to a VCO and the VCO connected to a speaker. This would be a very boring sound. For one thing, oscillators are always on. Playing the keyboard would certainly change the pitch of our sound, but the sound would always be there; we wouldn’t know when a note begins or ends.
Adding Complexity to the Patch
Keyboards don’t just send out pitch control voltages. They can also send out a signal called a gate, which is on (5 to 10 volts) while we press a key and zero volts when we let go. Here’s a patch using a VCA (voltage-controlled amplifier) to shape the sound by turning it on and off, a couple of envelope generators that output voltages to the VCA and to a VCF (voltage-controlled filter), whose job is to remove some of those harmonics from the oscillators.
In this patch, touching a key on our keyboard makes a few things happen all at once.
The pitch CV changes the pitch of two VCOs (for a rich, full sound). The two VCOs are mixed and sent to a Voltage Controlled Filter.
The keyboard sends a gate for as long as we hold down a key. That gate is fed to two envelope generators, which reshape it into a slowly changing voltage that controls the cutoff of the filter, and another voltage that opens and closes the Voltage Controlled Amplifier.
The output from the VCF goes into the VCA.
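The gate-to-envelope-to-VCA chain can be sketched in code. This is a toy digital simulation, not how the analog modules actually work: the envelope here is a simple linear ADSR, the VCA is modeled as a bare multiply, and the sample rate, times and levels are all arbitrary choices for illustration.

```python
import math

SAMPLE_RATE = 1000  # samples per second; deliberately coarse for this sketch

def adsr(gate, attack_s, decay_s, sustain, release_s):
    """Reshape a gate (a list of 0/1 values, one per sample)
    into a linear attack-decay-sustain-release envelope."""
    env, level, phase = [], 0.0, "attack"
    for g in gate:
        if g:  # key held down
            if phase == "release":            # key pressed again: restart attack
                phase = "attack"
            if phase == "attack":             # ramp up to full level...
                level += 1.0 / (attack_s * SAMPLE_RATE)
                if level >= 1.0:
                    level, phase = 1.0, "decay"
            elif phase == "decay":            # ...then settle to the sustain level
                level -= (1.0 - sustain) / (decay_s * SAMPLE_RATE)
                if level <= sustain:
                    level, phase = sustain, "sustain"
        else:  # key released: fade to silence
            phase = "release"
            level = max(0.0, level - 1.0 / (release_s * SAMPLE_RATE))
        env.append(level)
    return env

# Hold a key for 0.3 seconds, then let go for 0.2 seconds.
gate = [1] * 300 + [0] * 200
env = adsr(gate, attack_s=0.05, decay_s=0.1, sustain=0.6, release_s=0.2)

# The VCA is just a multiply: the envelope scales the oscillator's output,
# so the note now has a clear beginning and end instead of droning forever.
note = [e * math.sin(2 * math.pi * 110 * n / SAMPLE_RATE)
        for n, e in enumerate(env)]
```

The same envelope idea drives the filter cutoff in the patch above; only the destination of the control signal changes, which is the whole point of voltage control.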
Over the past 50+ years, circuit designers have come up with many hundreds of different modules, and integrating digital technology has allowed prices to fall and features to grow.
The modular synthesizer community has grown to thousands, if not tens of thousands, of users across the globe.
The class demonstration will go into detail about individual tools and how they connect to make something new. To end the class, I’ll pass out information about how to get synthesizing at home, inexpensively or for no money at all.
I’m excited to do more “synthesizer outreach,” and I will try to work around your openings. I need only a small classroom with good acoustics and electricity. I’ll bring a whiteboard for illustrations.
When I do these, I’ll rehearse a while and then record and improvise. The Beatstep Pro is the center of this universe; as you can see in the video, I bounce between sequences, making changes when it feels right.
I started “january” the same way I start all of my pieces: I got a sequence stuck in my head, so I turned on the synthesizer and programmed the notes into a sequencer, then added accompaniment on two more sequencers and supplemented those melody lines with some keyboard.
I record EVERYTHING into my digital audio workstation, Reaper, and these recordings can go on for the better part of an hour, grabbing individual lines from each of the synth voices separately and recording the MIDI info that comes out of the sequencers and the keyboard.
The first draft of “january” is here: https://www.youtube.com/watch?v=DkfCn… With all the info I needed to make this song, I started fiddling with the tons of MIDI tracks I had: duplicating parts, redoing others, and tweaking the keyboard lines. I also sped it up from 120 to 140 bpm.
When I finished rewriting the melodies in the DAW, I recorded them separately. The cool thing about my method is that I can preview how the synth voices will sound and can experiment easily.
The modular voices are as follows:
Sequence 1 (lowest on screen) is a Braids Macro Oscillator going into a VCA, with a second ADSR sweeping the Timbre parameter.
Sequence 2 (the lines in the middle) is an E350 Morphing Terrarium and a Z3000 going into an E440 filter and then a VCA, with a second ADSR changing the filter cutoff.
Sequence 3 (top line of the video) is a vintage ARP 2600 (1973) playing the main bass line while a Rings simultaneously gets the same control voltage. Because of the way Rings resonates, its pitch only changes when there’s a NEW note, so the same note hit over and over only makes a sound the first time it gets the signal.
Sequence 4 (also appearing on the bottom) is a Braids and an Erica Varishape VCO going into an SVVCF filter, with cutoff controlled both by velocity coming from Reaper and by my hand moving over a Koma Kommander.
So, the whole piece is four lines of music at a time. I figured this thing would have more appeal with a video. So, with a very handy piece of ExtendScript from http://omino.com/pixelblog/2011/12/26…, I was able to import the MIDI tracks into After Effects and derive pitch and velocity information on a note-by-note basis. I also imported the individual audio tracks and used the amplitude information to drive things like particle size. Trapcode Particular did most of the heavy lifting, followed by more keyframe tweaking to compensate for the delays in particle generation.