Thursday, September 20, 2012

The Contingent Workstyle

There is a trend in business I’ve been watching closely. Personally, I don’t like this trend and consider it another example of the deterioration of the workplace. When I started with IBM, they had a full-employment practice … which meant they didn’t lay off workers. Instead they would retrain, re-mission, and relocate workers to keep them employed. I was personally involved with that practice when I ran the Programmer Retraining Program.

In addition, in those days, all the cafeteria workers, maintenance workers, security guards, warehouse personnel, secretaries, etc. were full-time IBM employees. These were entry-level jobs, and they launched many careers and produced successful, well-paid employees.

Then things started to change. I don’t blame IBM. They held out the longest with good pay and marvelous benefits. But slowly things changed. IBM hired vendors to run the cafeteria. Contractors did security. Day labor filled the warehouses. Slowly, technical work was outsourced, and even in-house, IBM depended more and more on contractors and temporary workers. Secretaries came from Manpower, and instructors were retired IBMers coming back as consultants and contractors. IBM had to compete in a global marketplace, and competitors such as HP or Microsoft did not have the great benefit packages that IBM provided for so long. As the saying went, IBM had to become "lean and mean." Often "lean and mean" meant fewer benefits for employees and even "fewer" employees.

My own son worked several jobs at IBM as a contractor. It was a way to increase IBM’s flexibility and lower costs. Contractors typically didn’t get benefits, and their salaries tended to decrease from year to year as new contracts were written with providers. Not only did these contractors not partake in the excellent IBM education program, but IBM had to be careful not to provide training for fear that the government would declare them IBM employees. Microsoft got into expensive trouble that way: the government declared many of Microsoft’s contractors actual employees, and Microsoft was both fined and required to pay these workers.

There is a positive side to this trend. Some workers like the freedom of part-time work and working on individual projects with an end date. They can then take an extended vacation before looking for the next work project. That is one big difference between my generation and today’s Gen X and millennial generations. They value their time off, while my generation sought the stability of a lifelong job with one company.

In fact, among my generation, it was not unusual for workers to build up unused vacation, saving months for future use, and they were often forced to take time off to prevent losing these saved days. I recall a few years ago when we were required to take a week off without pay (to save on the budget and prevent layoffs). We had a strict rule that we could not log on and do any work. Apparently, most IBM employees log on and work — even while on vacation.

It has actually become an issue in business to force employees to take vacation and not bank large numbers of days, since that creates a tax obligation for the employer. Virtually every large corporation in America now requires employees to take a certain number of their vacation days or limits the banking of unused days. I have to admit that was not an issue for me. I not only took all my vacation days, but I would extend my vacation time and travels by working several days remotely while on vacation. Maybe that was my own personal version of this new trend of work. Maybe I was ahead of the trend.

According to a recent article in Harvard Business Review, more and more people are choosing a contingent work style — that is, temporary work that may be project-based or time-based — over full-time or part-time work. Temporary placement service providers predict that the rate of growth in contingent workers will be three to four times the growth rate among traditional workforces, and that they eventually will make up about 25% of the global workforce.

One reason for the increasing popularity of contingent work is involuntary: not everyone can find full-time employment. But, intriguingly, more and more people are choosing a contingent work style.

Some contingent workers say they are seeking better work / life balance; others want to create or design their own careers by choosing the kind of work or projects that create a unique set of skills, making them more desirable prospective employees. Contingent employment can expose individuals to a broad variety of challenges, demanding constant learning and new skills, which make work more interesting for them.

Often, contingent workers say that it was their full-time employment experience that convinced them to strike out on their own. Research published in 2010 indicates that, among workers who voluntarily chose to become independent, 74% of respondents cited a lack of employer engagement as their principal reason for leaving.

New technologies and services for contingent workers make it easier and less painful to make the choice to go independent. New types of talent brokers, such as online networks of retired and veteran scientists and engineers, or organizations that offer crowdsourcing services to companies with innovation challenges, connect free agents with project-based work in virtual marketplaces. The lack of benefits such as health and life insurance and disability coverage has been an ongoing major deterrent to contingent work, but even that situation is changing. Insurance and other benefits can be obtained from organizations such as the National Association for the Self-Employed (NASE) at highly competitive rates. Some professional temporary agencies offer their members continuity of benefits when they are between assignments.

Contingent workers can add to an organization's intellectual capacity and provide instant expertise as needed. Going forward, employers are likely to incorporate contingent workers into their talent strategies. The use of contingent workers by corporations offers several advantages:

Cost flexibility: Not only can organizations derive a cost savings from adjusting staff sizes up and down based on business requirements, but they are also able to control the wages paid for particular tasks by using contingent talent on a project basis.

Speed and agility: Talent needs can change on a dime. New technology or new competitors can expose talent gaps in any organization. Employing a contingent talent strategy enables a company to access the right talent to meet specific skill or competitive challenges quickly, without incurring longer-term costs or disrupting the organization. "Virtual talent" is much easier to find than it was even a few years ago, and can be brought on board rapidly.

A boost to innovation: Contingent talent brings in new knowledge and fresh ideas based on experiences outside of the company or even the industry. Companies that have programs or processes in place to facilitate knowledge and expertise transfer from contingent workers to full-time workers capture that knowledge on a permanent basis. If contingent workers' roles involve moving across the organization, they can also share best practices across organizational boundaries more easily than do internal employees.

To take full advantage of this emerging cadre of workers, employers will need to change the common perception of contingent workers as somehow less important, less skilled, or less committed than "permanent" employees, and must abandon the idea that contingent workers are simply an economic play. Contingent workers bring unique experiences, fresh thinking, and new approaches to problem-solving. Today, the growing contingent workforce provides opportunities for talent-hungry corporations.

Is this progress, or is this a decline? Empowered by technology, working from home or elsewhere, adjusting work schedules to match your personal needs, mixing periods of work with periods of extended vacation, focusing on maintaining your own career goals rather than depending on your boss to manage your career: these are all positive aspects of this new way of work-life.

Of course, many participating in this new trend are from my generation and are using the contingent work-style as a way to ease into retirement. Rather than quit a 40-hour-a-week job "cold turkey," many are keeping some income cash flow, as well as a foot in the business world, while enjoying "partial" retirement.

To me the question is whether this is really what today’s workers want, or is it what they have to settle for? My personal experience with many younger workers is that this is what they want. Maybe when the kids start growing and it’s time to pay for college or retirement looms, they will change their minds. But work / life balance with plenty of emphasis on “life” seems to be what the younger generations are seeking.

Monday, September 17, 2012

In the Business ... Not Famous ... But Important

Plenty of people would like to be rich and famous. They want to be a movie star, or a model, or a top athlete, or a rock star, or something like that. It’s fun to think about, and there’s nothing wrong with aiming high and having dreams.

I missed most of that attitude growing up. Maybe it was because I always wanted to be a scientist of some kind. But I’ve known plenty of people who hoped to make it big. That would be nice. But, you know, in the entertainment business, there are plenty of people working hard and successfully who don’t have the fame. They are in supporting roles. Not everyone can be a star. For every Jimi Hendrix, there is a Noel Redding or a Mitch Mitchell or a Billy Cox. (Got you on the last one — didn’t I? He was the bassist and backup singer with Jimi in both the Experience and the Band of Gypsys.)

Who played along with Elton John? Even the Beatles had studio musicians. They didn’t play the violins on Eleanor Rigby. And what about the producers? You’ve heard of George Martin, but who produced the Rolling Stones? The Who?

Even the famous may not be well known. I’m sure you’ve heard of Styx, but what about Dennis DeYoung? … ok, you’re a Styx expert. And what about the engineers and the studio interns and the roadies and all the others that make it all work? Maybe it’s because I’m a studio engineer and sometimes producer that I think about those people.

Many are better known than is first apparent. I was always envious of the careers of Al Kooper and Alan Parsons. Songwriters … musicians … producers … integral to the process, but maybe not so famous.

Or are they? They had albums.

Remember, even Bob Dylan started out being covered by the more famous: Byrds; Peter, Paul and Mary; and Sonny and Cher before getting his own place in front of the mike.

So, who has heard of Joe South? Raise your hands. I suspect some of you guitarists out there know Joe.

Joe South was a singer / songwriter and guitarist, best known for his songwriting. South won the Grammy for “Song of the Year” in 1970 for “Games People Play” and was nominated again in 1972 for “Rose Garden.”

Besides those two hits, he wrote “The Purple People Eater Meets the Witchdoctor” (everyone has to start somewhere), “Birds of a Feather,” “Down in the Boondocks,” and “Hush” (love the Deep Purple version). In total he penned eight hits.

He was a studio musician and played with Aretha Franklin, Tommy Roe, and Bob Dylan. More important to my personal view is that he was a session guitarist in Muscle Shoals at both FAME and the Muscle Shoals Recording Studio.

Sadly, he halted his career in 1971 after the suicide of his brother. So we hadn’t heard from him musically for a lot of years. He passed away this month at the age of 72. Not famous, but influential, well-loved, well-known, and a part of musical history. May the laborers in the field not be forgotten.

Just ask those that have recorded his songs: Lynn Anderson, Elvis Presley, Johnny Cash, Glen Campbell, Loretta Lynn, Carol Burnett, Andy Williams, Kitty Wells, Dottie West, Jim Nabors, k.d. lang, the Raiders, Dixie Flyers, Coldplay, Gene Vincent, Billy Joe Royal, the Osmonds … and don’t forget Aretha’s "Chain of Fools" with Joe as a sideman.

Now that’s what I call a successful career. He’s famous to me.

The Science of Music -- Part Seven

Noise! Who wants to retire to the country? No traffic, no neighbors, no noise!! I always wanted a place in the country. It was so I could make some noise. I remember a time, over thirty years ago. We had a garage band -- a real garage band -- we were playing in the garage. We had the garage door open to let in fresh air. We were hitting our highs and hitting our lows, and the drummer was providing the transients. I looked up and saw my neighbor’s porch light -- the neighbor across the street. The light was going on and off and on and off. It was about 4:00 in the afternoon on a Saturday. I don’t think anyone was sleeping. But I think my neighbor was signaling that he (or she) would like us to turn it down ... or just quit altogether.

I don’t suppose we sounded that good. Like a lot of bands with more equipment than talent, we were much louder than we were melodious -- at least yours truly. We had the amps turned up to be louder than the drums, and the drummer was bangin’ hard to be louder than the guitars. No one could hear the singers -- P.A.s are usually less capable than guitar amps.

I guess our music was just noise to the neighbor. That’s when I decided I needed to live in the country ... way out in the country. Never happened. We just closed the garage door, cut down the volume, drank more beer, and that was the end of that ... and the band, as far as that is concerned.

Noise -- one man’s treasure is another man’s garbage. One man’s meat is another’s poison. Well, in music and amplifiers and recording, there is noise. It can be the sound of a big truck driving by during the capture of a great vocal, or it can be added by the equipment. That’s what this is all about: Noise.


Noise, in general, refers to unwanted sound, often loud, like that noisy person at the restaurant table next to yours talking on their cell phone. In audio systems, though, the usual problem is the low-level hiss or buzz that intrudes on quiet passages. All recordings contain some background noise picked up by the microphones, such as the rumble of air conditioning or the shuffling of an audience. In addition, every piece of equipment the recorded signal subsequently passes through adds a certain amount of electronic noise, which ideally should be so low as to contribute insignificantly to what is heard. These internal noises generated by the equipment include hum, buzz, and hiss.

In addition, there can be “interference”: signals from other sources being amplified. These could come from radio or TV or other sources of electronic radiation. Hum could be classified as interference since its source is an outside agency: the power lines, which could include the fluorescent lamps in the room or the power supply in the amplifier itself.

Hopefully, the noise in the audio system is at a relatively low level; for that reason, it tends to cause the most problems when listening to quiet or “soft” passages of music.

Many classifications of noise are based on the frequency of the noise. For example, hum and buzz refer to low-frequency noise, while hiss is mostly high frequencies.

Measurements of noise are made by comparing the power of the desired signal to the power of the noise. Power is proportional to the square of the voltage, or signal amplitude. The most common and useful measurement of noise is a ratio between the signal and the noise called, quite naturally, the “signal-to-noise ratio” or S/N. It is calculated as Power(signal) / Power(noise), which is the same as [Amplitude(signal) / Amplitude(noise)] squared, where amplitude is in volts.

The S/N is usually given in decibels, or dB, and the formula is 20 log10(A signal / A noise), but enough of that math stuff. The higher the signal-to-noise ratio, the better. If the amplitude of the signal is small (quiet), then the S/N will be worse -- assuming the noise is constant, which is typically true.
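If you'd rather let a computer do that math, here is a minimal sketch in Python (the voltage figures are just made-up examples):

    import math

    def snr_db(signal_rms, noise_rms):
        # Signal-to-noise ratio in decibels from r.m.s. amplitudes in volts
        return 20 * math.log10(signal_rms / noise_rms)

    print(snr_db(1.0, 0.001))   # 60.0 dB for a loud passage
    print(snr_db(0.1, 0.001))   # 40.0 dB -- same noise, quieter signal, worse S/N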

Electronic noise exists in all circuits and devices as a result of thermal noise, referred to as Johnson Noise. This noise is caused by random variations in current or voltage created by the random movement of charge carriers (usually electrons) carrying the current as they are jolted around by thermal energy. Thermal noise can be reduced by reducing the temperature of the circuit. Noise limits the minimum signal level that any radio receiver can usefully respond to, because there will always be a small but significant amount of thermal noise arising in its input circuits. This is why radio telescopes, which search for very low levels of signal from space, use low noise "front-end" amplifier circuits cooled with liquid nitrogen.

Home satellite television systems place a special, low-noise amplifier right in the dish antenna -- but no liquid nitrogen! As satellite signals increased in amplitude, the dishes got smaller and the low-noise preamp became less critical. Still, Dish Network and DirecTV locate a preamp in the antenna before sending the signal down the long cable to the set-top box, thereby improving the signal-to-noise ratio.

Since noise generated in the first stage of any amplifier system is amplified by all the following stages, reducing noise requires that the first amplifier circuit contribute the minimum amount of noise. This first stage is often called the “preamp.” Long cables from a weak signal source such as a microphone or phono cartridge can pick up noise and interference. A good solution is to install a preamp very near the signal source. Then the noise added by the long cable to the amplifier won’t affect the "S/N" ratio as much, because the preamp boosted the "S." Some modern microphones have built-in preamps exactly for this reason. I've mentioned before that some turntables have a low-noise preamp with RIAA equalization. That way the cables from the turntable to the amplifier can be longer without degrading the total S/N. Otherwise, keep the cable from turntable to amplifier three feet long or even shorter.
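To see why preamp placement matters, here is a back-of-the-envelope comparison in Python; the microphone level, cable noise, and gain figures are assumptions for illustration:

    import math

    def snr_db(signal, noise):
        return 20 * math.log10(signal / noise)

    mic_level   = 0.01    # volts r.m.s. from the microphone (assumed)
    cable_noise = 0.001   # volts r.m.s. of hum a long cable picks up (assumed)
    gain        = 100     # preamp voltage gain (assumed)

    # Preamp at the far end: the cable noise rides on the weak signal.
    print(snr_db(mic_level, cable_noise))          # 20 dB
    # Preamp at the microphone: boost the "S" before the cable adds its "N".
    print(snr_db(gain * mic_level, cable_noise))   # 60 dB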

There are several other sources of noise in electronic circuits such as shot noise, seen in very low-level signals where the finite number of energy-carrying particles becomes significant, or flicker noise (1/f noise) in semiconductor devices.

There is a special type of noise used to measure and adjust audio systems. It is called “white noise.” White noise is a random signal with a flat power spectral density. In other words, the signal contains equal power within a fixed bandwidth at any center frequency.

White noise draws its name from white light in which the power spectral density of the light is distributed over the visible band in a similar manner. White noise is a statistical phenomenon, since it is random. But it follows statistical expectations. White noise has the quality of being independent and identically distributed. If the white noise follows certain characteristics and values for mean and standard deviation, including being normally distributed, it is called a Gaussian white noise.
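Generating Gaussian white noise is nearly a one-liner with a numerical library. A sketch in Python using numpy; the sample rate and level are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 48_000                           # samples per second (assumed)
    white = rng.normal(0.0, 0.1, fs)      # one second of Gaussian white noise
    print(white.mean(), white.std())      # roughly 0.0 and 0.1, as specified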

Oh wait, I said no more math. OK, white noise is just noise with “equal power” in all audio frequency bands. There are white noise generators which produce this special signal. It is quite useful for testing and adjusting audio equipment, since it can be considered a signal with, basically, all the audio frequencies at once. Recall that graphic or multi-band equalizers work with individual frequency bands, so white noise is good for adjusting the equalizer settings -- either full octave or one-third octave. To see how that is done, we’re going to enter the “frequency domain.”

Up until now we’ve viewed waveforms in what is called the “time domain.” Our charts and graphs (or oscilloscope screens) showed time along the x-axis and amplitude on the y-axis. We saw waveforms as cycles of crests and troughs. We’ve spoken about converting these periods (since the measure is time) or equivalent wavelengths into component frequencies using Fourier analysis. But it is also possible to display the frequencies on a chart with an x-axis of frequencies, say from 20 Hz to 20 kHz (or higher). The y-axis remains amplitude.

There are instruments, called spectrum analyzers, which are similar to oscilloscopes, only they show the frequency domain instead of the time domain. A pure sine wave of 1 kHz would show up on a frequency domain chart or a spectrum analyzer as a single spike at the 1 kHz point. A square wave would show 1 kHz and 3 kHz and 5 kHz and 7 kHz, etc. Each harmonic spike would be smaller.

Square Wave Spectrum
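If you don't have a spectrum analyzer handy, an FFT can sketch the same picture. A minimal Python example (numpy only) that should show the odd harmonics of a 1 kHz square wave:

    import numpy as np

    fs, f0 = 48_000, 1_000
    t = np.arange(fs) / fs                             # one second of samples
    square = np.sign(np.sin(2 * np.pi * f0 * t))       # 1 kHz square wave

    spectrum = np.abs(np.fft.rfft(square)) / (fs / 2)  # normalize to amplitude
    # With 1 Hz resolution, bin k is k Hz; expect spikes at 1, 3, 5, 7 kHz:
    for f in (1_000, 2_000, 3_000, 5_000, 7_000):
        print(f, "Hz:", round(spectrum[f], 3))         # 2 kHz should be ~0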

White noise on a spectrum analyzer shows a constant set of frequencies all across the band.

White Noise Spectrum

You can use a white noise source, a graphic equalizer, a calibrated microphone, and an audio frequency spectrum analyzer to adjust a sound system to compensate for auditorium acoustics. I’ve done this often. The spectrum analyzer I use has a built-in white noise generator. I connect the generator output to the mixer board and P.A. system at the church. I put a microphone in the center of the auditorium and connect it to the input of the analyzer. With the white noise playing through the speakers and captured by the special, very flat-frequency-response microphone, the white noise shows on the spectrum analyzer. Peaks in the analyzer display represent room resonances increasing the amplitude, and valleys are areas of frequency absorption or frequency distortion.

I then adjust the graphic equalizer to attenuate the peaks and boost the valleys. After that tune-up has been performed, you can turn the P.A. volume up much higher without any feedback or squeal, and the sound is clearer and less muddy. Our church also has delay lines to synchronize the phase of the speakers along the wall. I set them by feeding pulses to the speakers and viewing them through the microphone on an oscilloscope. A tune-up of a sound system in a large auditorium, or in your music room, can do wonders for sound clarity and reducing feedback. It can also compensate for the furnishings in the listening room.

In addition to white noise, there is also something called “pink noise.” Pink noise or 1/f noise (sometimes also called flicker noise) is a signal with a frequency spectrum such that the power spectral density (energy or power per Hz) is inversely proportional to the frequency -- it decreases as frequency increases. In pink noise, each octave carries an equal amount of noise power. The name arises from being intermediate between white noise (1/f⁰) and red noise (1/f²), which is commonly known as Brownian noise.

Most hiss is actually Johnson noise from the thermal activity in the electronics, but, as explained in a previous article, it can also come from irregularities and dust in record grooves or tape hiss from the magnetic particles on a tape.

Noise problems can be made worse if the audio system has impedance mismatches. Certain cables, called low-impedance cables, contribute less noise and interference than high-impedance cables. But the cable impedance must match the source impedance. Impedance-matching transformers can be used to match the connection and improve noise performance. Also, cable shielding and even the grounding of the equipment can be selected in such a way as to reduce noise. The best cables have two balanced conductors (a twisted pair) surrounded by a third, electrical shield. This is sometimes called twin-ax, as opposed to co-ax. The characteristic impedance of this microphone cable is 300 ohms.

Since noise causes the greatest listening problems with low-level signals, many systems of noise reduction compensate by increasing the amplitude of soft signals. The trick is to increase this amplitude only for weak or low signals during recording, and not increase the loud signals. Then, in playback, when the system recognizes low-level signals, it attenuates them, thereby attenuating the noise. With loud signals, no change is made. This is also called "compression / expansion," and the expansion actually lowers the noise level. It is similar to the pre-emphasis in RIAA equalization, but much more sophisticated and complicated.
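Here is a toy compander in Python to show the principle. The square-root law is just for illustration -- real systems use carefully designed curves -- and the hiss level is an assumption:

    import numpy as np

    def compress(x):   # boost quiet signals before recording (2:1, assumed law)
        return np.sign(x) * np.sqrt(np.abs(x))

    def expand(x):     # mirror-image expansion on playback
        return np.sign(x) * x**2

    quiet = 0.01                                 # a soft passage, in volts
    tape_hiss = 0.001                            # noise the medium adds (assumed)
    print(expand(compress(quiet) + tape_hiss))   # ~0.0102 -- hiss nearly gone
    print(quiet + tape_hiss)                     # 0.011 -- 10% hiss without it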

This is really a nonlinear amplifier design, and special care must be taken to avoid introducing distortion. The Dolby and dbx brands are most recognized for this dynamic noise control, and they were in very common use in high-quality tape recording systems. This is because magnetic recording tape introduces hiss due to magnetic issues on the tape surface. These noise reduction techniques can also be used with FM radio broadcasts.

Ray Dolby originally developed what is called “Dolby A” for professional use in studios and professional recording. The “Dolby B” system was later developed in conjunction with Henry Kloss for use with home equipment. Dolby B, in my opinion, made cassette tapes a viable "reasonably high" fidelity recording medium.

There are techniques to reduce noise from open microphones, called noise gates. They basically shut off the microphone if someone isn't actually singing into it. Muting unused inputs and properly adjusting all the gain controls in an amplification chain are also important techniques, both to reduce noise and to reduce distortion from overdriven inputs. These are just some miscellaneous examples of sophisticated solutions to the noise problem. Best advice: treat noise at its source; don't try to mask it -- unless that's the last option available. Also remember, you can improve the S/N by either raising the "S" or lowering the "N" or both.
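A bare-bones noise gate is easy to sketch in Python; real gates add attack and release ramps so the muting itself isn't audible, and the threshold here is an arbitrary choice:

    import numpy as np

    def noise_gate(samples, threshold=0.02):
        # Zero out anything quieter than the threshold -- if no one is
        # singing into the mic, nothing (including the hiss) gets through.
        gated = samples.copy()
        gated[np.abs(gated) < threshold] = 0.0
        return gated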

Modern digital systems allow very powerful methods to reduce noise. With the recording software I use (Pro Tools and Adobe Audition), I can sample some output without any signal to create a noise profile. That recording is "pure noise." I then feed it as a noise sample to my software, which analyzes the noise digitally and uses the results to reduce the noise in the signal portion of the recording. It is an amazing process. I once used it to recover an interview where, after doing the recording, I learned that the audio had very bad artifacts and problems that could only be called noise. After feeding the software a sample of the sound with no speech, the algorithms produced a marvelous result, and I was able to use the badly recorded interview for a web program. I consider the result nothing less than a miracle!
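I don't know exactly what algorithms Pro Tools and Audition use, but the textbook version of this trick is called "spectral subtraction": learn the noise's average spectrum from the noise-only sample, then subtract it from the program material. A simplified Python sketch (real tools use overlapping windows and smarter smoothing):

    import numpy as np

    def spectral_subtract(signal, noise_sample, frame=1024):
        # Learn the average magnitude spectrum of the "pure noise" profile.
        usable = len(noise_sample) // frame * frame
        noise_mag = np.abs(np.fft.rfft(noise_sample[:usable].reshape(-1, frame),
                                       axis=1)).mean(axis=0)
        out = np.zeros_like(signal)
        for start in range(0, len(signal) - frame + 1, frame):
            spec = np.fft.rfft(signal[start:start + frame])
            mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract profile
            out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
        return out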

Noise can also be reduced by filtering. For example, there can be interference from the 19 kHz subcarrier used in FM stereo broadcasts. It can mix with the AC bias used in tape recorders and produce audible tones. High-quality tape recorders often contained a low-pass filter to remove the FM subcarrier before recording. Since FM only contains audio frequencies up to 15 kHz, filtering out all frequencies above 15 kHz does not reduce the signal, but it will reduce noise -- or, in this case, a frequency that can interact and produce noise.

A signal with a lot of high-frequency noise or hiss might sound better with a low-pass filter set at 10 kHz or even 5 kHz. The music would have less fidelity, but it might still sound better without the high-frequency noise. There are many computer programs designed specifically to remove the clicks and pops from records. The repetitive noise from a bad scratch on a record can be removed; a momentary gap in the sound may be less of a problem than a loud pop or click from a scratch. I've done well using a DAW (Digital Audio Workstation) to find a sudden noisy peak in the music and just cancel it out with a short drop in the gain for the few milliseconds the noise spike lasts.
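That gain-drop trick automates nicely. A crude Python sketch; the threshold and window are guesses you would tune by ear:

    import numpy as np

    def zap_clicks(samples, threshold=0.9, pad=48):
        # Hard-mute roughly a millisecond (at 48 kHz) around any sample
        # that jumps above the threshold -- the "short drop in the gain."
        cleaned = samples.copy()
        for i in np.flatnonzero(np.abs(samples) > threshold):
            cleaned[max(0, i - pad):i + pad] = 0.0
        return cleaned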

Other types of noise and interference can be removed with smart use of equipment. I used to live by an Air Force base in Great Falls, Montana. The nearby, very high-power radar would put a buzz in all the electronic gear each time the antenna turned in our direction. Better grounding of the equipment chassis and shielded cables would have fixed this problem, but the simple (and cheap) equipment I used back then (I was only 19) did not provide adequate shielding. My audio amplifier was very inexpensive and used high-impedance microphone cables, which were more sensitive to the noise. Sometimes it just costs more to do it right. Or, as Judge Milian says, "Sometimes the cheap comes out expensive."

Troubleshooting noise problems can be quite a detective tale. In the case of the Air Force base, it didn’t take Sherlock Holmes to figure out the source of the noise, but I couldn’t afford the fix. Today, I’m better equipped to eliminate the noise.

One audio engineer I knew back in the ’70s told me there was no noise problem he couldn’t solve with the proper use of the D-filter. I asked what the D-filter was. He said "Dinero." Anyone want to fund the construction of my new studio? It will have a sand-floated floor to reduce ground vibration, a Faraday shield to prevent radio interference, and soundproofing of all the walls and ceiling with acoustic treatments. I won’t have to move to the country; I’ll just need the Dinero.

Originally written on Feb. 21, 2012 during a visit to my Dad's home in Hillsboro, Oregon and posted on Facebook. During my two week visit with my dad, I wrote an article a day. I started with a long series on the Science of Photography which had thirteen individual articles. I then started this series on the Science of Music. It isn't finished and I have a lot more to say. I hope to add to this series in the future.

The Science of Music -- Part Six

With a title like “The Science of Music,” there are a lot of directions the narrative could take. We started off pretty general, describing sinusoidal waves and explaining why they are essential to understanding sound and music. From there we learned about complex waves and harmonics, still pretty general. The next discussion focused on the frequency range of human hearing. All very logical foundation topics for almost any scientific discussion of music.

Then along came the last installment, and it focused on filters and even touched on RIAA equalization. Where do you go from there? We could jump track and talk about the physics of how the violin bow excites the string vibrations when drawn or how woodwinds change resonant frequency depending on which holes are covered with which fingers. But you know that is not the direction I’m headed. I’ve mentioned a couple of times that my long term goal is to study various music recording mediums and ultimately compare phonograph records to CDs. Perhaps this series should be called “The Technology of Music.” But we’re stuck with the title now, and there’s no going back!

With that as our final destination, then it should be no surprise that we will start focusing on various electronic devices, amplifiers and what-not, that are involved in the music recording and music reproduction systems. We might call these systems by the good old name of “stereo” as in, “Can we play some music on your stereo?” even if “stereo” is a little quaint in this age of Dolby surround sound and iPods.

And that leads us to this installment, where we will talk about distortion. That is, something the “stereo” does wrong that changes the music. For a formal definition of “distortion,” I offer “an alteration of the original shape (or other characteristic) of a sound or music waveform.” Distortion is usually unwanted, and many methods are employed to minimize it in practice. In some cases, however, distortion may be desirable, such as with electric guitar distortion. I will discuss both the unwanted and the wanted.

The addition of noise or other extraneous signals (such as hiss, hum, and interference) is also not desired, but it is not considered to be distortion. We will save noise for another day.


There are several different types of distortion that you can get in a “stereo” system, that is, when recording, amplifying, and playing music. They include (1) amplitude distortion, (2) harmonic distortion, (3) frequency distortion, and (4) phase distortion. There are a few more, but they are not a problem with audio-frequency amplifiers and recording reproduction systems, although intermodulation mixing, or modulation distortion, is considered by some to be a fifth kind. There are also specific distortions which fit into one of these four (or five) categories but have special names for certain reasons, such as “wow” and “flutter.” Finally, there are special distortions related to quantization, which is what you do to convert analog to digital -- those you may encounter in CDs and DVDs.

The most important characteristic of a quality audio amplifier is something called linearity. That is, the signal is amplified in a linear manner. Small signals need to be amplified the same amount as large signals. Remember, all signals, whether they are loud or soft, are made up of sine waves that go from zero to a positive, maximum value, back to zero, and then to a maximum negative value, and then returns to zero again. If the amplifier increased parts of this wave more or less than other parts, then the wave would be distorted.

A much more serious form of amplitude distortion is a problem called “clipping.” Suppose the largest size waveform the amplifier can handle is 1 volt peak, and the sine wave input to the amplifier goes from zero to 1.2 volts peak at its maximum. The amplifier will probably amplify the wave fine up to where the voltage is 1 volt, and then can’t go higher. The result is that the amplified wave will have a “flat top.” It looks like the tops (and, as far as that goes, the bottoms) of the sine wave are clipped off like a pair of scissors cut them.

Clipped Sine Wave

Notice that the sine wave now looks a bit like a square wave. Also, recall that a square wave contains a lot of odd-numbered harmonics: the 3rd, the 5th, the 7th, etc. So, if you clip a pure sine wave, you will actually create new sine waves that are harmonics.
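You can verify this numerically. Clip a sine wave in Python and the FFT sprouts odd harmonics that weren't in the input:

    import numpy as np

    fs, f0 = 48_000, 1_000
    t = np.arange(fs) / fs
    # A 1.2 V peak sine through an "amplifier" that runs out at 1 V:
    clipped = np.clip(1.2 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)

    spectrum = np.abs(np.fft.rfft(clipped)) / (fs / 2)
    for f in (1_000, 3_000, 5_000):       # bin k is k Hz at 1 Hz resolution
        print(f, "Hz:", round(spectrum[f], 4))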

Although I stated that musical tones are complex waves, this distortion effect still occurs and is a problem. With the music of a symphony, clipping is going to sound very, very bad. Another name for this extreme form of amplitude distortion is harmonic distortion, because the new frequencies are harmonics of the distorted sine wave.

But, if this is a guitar amplifier playing one or two notes, then this clipping will create a very heavy sound called “fuzz tone.” And fuzz tones are devices designed to clip the waves creating this distinctive sound. They are usually found in little boxes that are at the feet of the lead guitar player. A little fuzz helps the heavy metal groove.

Typically, if you play a chord through a fuzz tone, it won’t sound too good. All the new frequencies sort of clash. But a single note or two notes strongly related, like musical fifths, can really sound “heavy” with the addition of a little fuzz -- or a lot of fuzz: “No time left for you ...” Guess Who?  Recall I said distortion is not good with music reproduction, but isn’t bad -- sometimes -- with music production.

(I know you usually play "sevenths" with a fuzz tone, but you musicians know that the seventh half-tone up is the same as the fifth half-tone down. So they're all "fifths." Musical chord theory, meet science. How do you do?)

You can get this fuzz-tone effect by overdriving an amplifier. Most musicians prefer the sound of overdriven tube amplifiers to transistor amplifiers. I think that is because tube amps go into clipping a little more gradually than solid-state guitar amplifiers. When you overdrive transistor circuits, the result usually sounds pretty bad, while vacuum tube amplifiers tend to handle overdriving in a more gradual, muddy manner that many prefer and some call “warm.”

Of course, some people prefer tube amplifiers just because they heard they are cool -- well, they’re actually hot. Either temperature applies.

Measuring Amplitude and Harmonic Distortion

Since the effect of nonlinearity in an amplifier is to create new sine waves at harmonics of the fundamental frequencies being amplified, amplitude and harmonic distortion are essentially the same thing. Amplitude distortion describes the "cause" and harmonic distortion describes the "effect." We can use that fact to measure harmonic distortion, and, thereby, indirectly, measure amplifier linearity.

What we can do is apply a pure sine wave tone to the input of an amplifier. That is where something like the HP 200CD Audio Signal Generator comes in very handy. Recall I said it produces a very pure sine wave. So if we feed this pure sine wave into the amp, and then measure the harmonics coming out of the amplifier, that gives us a good measurement of the harmonic distortion.

To do that, we could take the output of the amplifier and use a very deep notch filter tuned to the fundamental frequency of the signal generator to attenuate or reject that input frequency. What is left is just the harmonic distortion.

Now is a good time to discuss how you measure a sine wave or complex wave signal. We’ve been looking at these waves on charts and on oscilloscopes, and we’ve been discussing things like the crest and the trough. I’ve also explained that a sine wave as an electronic signal goes from zero to a positive peak, back to zero, then to a negative peak, and back to zero again. Using an oscilloscope we can measure the value of the peak voltage and that is called “peak voltage” -- naturally.

We can also measure the voltage between the two peaks, that is, from the crest to the trough. Sine waves are symmetrical. That is, the positive peak is identical to the negative peak. With complex waves, that may not be true. So, with a sine wave, if you measure the positive peak and add the negative peak (ignoring the negative sign, otherwise it would be subtraction), you get the peak-to-peak value. (More Colorado talk.) So a wave with a +1.0 volt positive peak and a -1.0 volt negative peak is 2.0 v p-p.

These are all measurements you can take with an oscilloscope. But if you are comparing different waveforms, the most useful measurement is the “effective” voltage. That is the voltage that is comparable to a DC voltage. DC is what you get from a battery. It doesn’t change (except as the battery dies out). Since a sine wave, or any wave for that matter, is always changing, with zero volts at one instant and maximum or peak volts at another, this raises the question: "What is the best average voltage?" "Effective" means it has the same heating effect (or power) as a given DC voltage.

Using trigonometry and a little calculus, we can calculate the effective voltage by taking the “root mean square” value. “Mean” is an average, and the squaring has to do with calculating power, which is proportional to the square of the voltage -- and it is power that produces the heat (or, with a guitar amp, the volume or LOUDness). This root mean square is usually abbreviated r.m.s.

With a pure sine wave, the r.m.s. value is equal to the peak value multiplied by 0.707. It is not exactly 0.707; it is actually one half the square root of two (which is also 1/sqrt(2) -- isn't math odd?). For complex waveforms, you would have to do the Fourier analysis, separate each individual sine wave frequency and amplitude, and then multiply and add ... and that is what computers are for. There are also voltmeters that can read the r.m.s. values of unusual waveforms. For example, the r.m.s. value of a perfect square wave is 1.0 times the peak, while a pulse wave can have a very low effective voltage even though the peak is high. That's what a radar wave looks like. OK, back to audio.
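Numerically, r.m.s. is exactly what the name says: the square root of the mean of the squares. A quick Python check of the two rules of thumb above:

    import numpy as np

    t = np.arange(48_000) / 48_000
    sine   = np.sin(2 * np.pi * 1_000 * t)              # 1 V peak sine
    square = np.sign(np.sin(2 * np.pi * 1_000 * t))     # 1 V peak square

    rms = lambda x: np.sqrt(np.mean(x**2))              # root, mean, square
    print(rms(sine))     # ~0.707, i.e. peak / sqrt(2)
    print(rms(square))   # ~1.0, a square wave's r.m.s. equals its peak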

HP (and others) make nice voltmeters that measure r.m.s. So now we can perform the experiment that measures amplifier distortion. Feed in a pure sine wave, say at 1,000 Hz. Then run the amplifier output through a filter that attenuates the 1 kHz fundamental by over 60 dB. That’s a factor of one thousand in voltage (a million in power). For all practical purposes, assume the 1 kHz fundamental is now gone. Measure the output of the filter, and what is left is harmonic distortion. Feed in a 1 v r.m.s. sine wave and, if you get .01 v out of the filter, that is called 1% total harmonic distortion, since the added harmonics are 1/100 or one percent of the input sine wave. Good audio amplifiers have between .01 and .001 percent total harmonic distortion (measured as r.m.s.). If you want to test distortion that low, get a 120 dB notch filter!
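In the digital world you can skip the notch filter and let an FFT separate the fundamental from its harmonics. A Python sketch with a deliberately nonlinear pretend amplifier; the 0.02 cubic term is an arbitrary amount of curvature:

    import numpy as np

    fs, f0 = 48_000, 1_000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * f0 * t)
    output = tone - 0.02 * tone**3        # a slightly nonlinear "amplifier"

    spec = np.abs(np.fft.rfft(output))
    fundamental = spec[f0]                # bin k is k Hz at 1 Hz resolution
    harmonics = np.sqrt(sum(spec[k * f0]**2 for k in range(2, 10)))
    print(f"THD: {100 * harmonics / fundamental:.2f} %")   # about 0.51 %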

So that is how you measure and specify harmonic distortion, and thereby, indirectly, amplitude distortion and amplifier linearity. You may repeat the test at other frequencies including 500 Hz and 100 Hz or even higher than 1 kHz, but remember, the amplifier has its own low-pass filter, so it is the lower fundamental frequency tests that best measure the THD or Total Harmonic Distortion of the amplifier itself.

Nonlinear amplifiers can also have a problem called modulation, or intermodulation. Modulation is a good thing in a radio transmitter -- you know, amplitude modulation (AM) or frequency modulation (FM) -- but it typically isn’t something you want your music to do. Modulation distortion occurs when two or more sine waves "mix." This mixing in a nonlinear circuit creates "beats." These are new frequencies that are the sum and the difference of the two original frequencies. For example, if 1,000 Hz and 1,100 Hz intermodulate, you get 100 Hz and 2,100 Hz. These are not harmonics, and note that one is a low frequency, not an overtone or higher tone. I won’t get into any more details, but simply state that amplifier nonlinearity can cause the creation of new frequencies, and you are not going to like the sound. Harmonic overtones from harmonic distortion can be pleasing to the ear, but these modulation-produced tones are real bad. That is why two notes next to each other on the piano, like C and C#, sound so bad when played together. More deep music theory happening there. However, like all art, breaking the rules can be good. Piano discord on "I Want to Tell You" -- good. Discord among the Beatles themselves -- not good.

Frequency Distortion

Frequency distortion is something we’ve already discussed in the last installment. It simply means that not all the original frequencies are amplified equally. Is this really distortion? If I like my Beethoven symphonies with extra bass and less treble, and I adjust the tone controls accordingly, then I’ve distorted the signal, but I LIKE IT!!

It is a true distortion, however. Certain parts of a “stereo” system are particularly limited in frequency response. That includes the microphone that recorded the music or speech, the loudspeaker in the system, and even the room that either the microphone or loudspeaker is in. In addition, long cables to the speakers can cause a loss of high frequencies. This is frequency distortion.

Again you can compensate with tone controls. Graphic equalizers are particularly useful in adjusting the frequency response to match the room. Most rooms have resonances that will make certain bands of frequencies much louder, and that is not an effect you typically want in your music.

Phase Distortion

The final class of distortion to be discussed is phase distortion. Remember that amplitude and frequency are the key attributes we try to preserve. But there are also phase relationships between the different component sine waves that need to be preserved. This is especially an issue in maintaining stereo “separation.” That is, the ability to reproduce a good sound image that is interpreted by the human ear and mind to give a three-dimensional perspective to the sound.

There are several theories that explain the ability of humans and animals to locate the source of a sound. In general it is due to having two ears. A sound from the left will reach the left ear before the right ear. It will likely be louder in the left ear too. There are a lot of other issues involved such as the addition of sight and an awareness of surroundings. The ears will hear different sounds in terms of both amplitude and also delay effects. These delays mean the left ear and the right ear may hear a complex wave with a different phase. So timing and phase are involved. Good amplifiers will preserve the relative amplitude of a sound between the two stereo speakers, but what is the impact of phase?

Recall that phase was how much ahead or behind a second wave is compared to the first wave. Also recall that audible frequencies run from 20 Hz to 20 kHz and that wavelength is a reciprocal function of frequency. That is, the 20 Hz wave has the longest wavelength and 20 kHz the shortest. Compare the wavelengths of these frequencies in the open air: 20 Hz = 56.5 ft.; 100 Hz = 11.3 ft.; 1 kHz = 1.13 ft.; and 10 kHz = 0.11 ft. With human ears separated by less than one foot, even taking Big Head Todd into account, the phase difference with low or bass tones will be very small. BUT, the phase difference with high-frequency tones is quite significant at ear-separation distances.
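The arithmetic is just wavelength = speed of sound / frequency. In Python, using roughly 1,130 ft/s for the speed of sound in air:

    v = 1_130.0                     # speed of sound in air, ft/s (roughly)
    for f in (20, 100, 1_000, 10_000):
        print(f, "Hz ->", round(v / f, 2), "ft")
    # 56.5, 11.3, 1.13, and 0.11 ft -- only the short waves vary much
    # across the spacing of a pair of ears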

For that reason, the human hearing system, ears and brain, only have the ability to locate high frequency tones. That’s why bats and submarines use extremely high pitched sounds to navigate by “sonar.” And stereo systems, those with two or even more speakers, really only produce a stereo image with the high frequency tones.

Sub-Woofers

Further, most of the “power” in the musical spectrum is at the low frequencies. Remember, the ear doesn’t hear low tones as well as 1 kHz waves, so bass notes must be amplified much more than treble. So the power requirement for “bass” is much higher than for “treble.” Further, bass reproduction requires bigger, and therefore more expensive, speakers. Put all these scientific facts together and they lead to the “sub-woofer.” Since your ear can’t separate the bass on the left from the bass on the right, just combine the two channels into a single channel, equip that channel with a powerful amplifier and big speakers, and you’ve got it. Use a low-pass filter set around 100 Hz to separate the bass signal from the rest of the music, and feed that to the sub-woofer system. Then the “stereo” speakers, which are now only producing the mid-range and high-frequency signals, can be smaller. That is the basis of modern “surround sound” systems.
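Here is a sketch of that bass-management idea in Python, using scipy's filter-design routines. The 100 Hz corner and 4th-order filters are typical choices, not any particular product's standard:

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48_000
    lows  = butter(4, 100, btype="lowpass",  fs=fs, output="sos")
    highs = butter(4, 100, btype="highpass", fs=fs, output="sos")

    def bass_manage(left, right):
        # Mono bass for the sub (the ear can't localize it anyway),
        # high-passed left and right for the smaller stereo speakers.
        sub = sosfilt(lows, left + right)
        return sub, sosfilt(highs, left), sosfilt(highs, right)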

The Dolby corporation coined some terms: “2.1” means two speakers (stereo) with one sub-woofer; “5.1” is two stereo speakers, one center speaker located between them, two speakers behind the listener, and one sub-woofer; and “7.1” adds two more speakers opposite the ears. The goal is to reproduce the sound in a movie theater, where movement on the screen is simulated with sound moving around the auditorium using multiple speakers. (Some theater surround systems have 10 or 20 pairs of speakers along the walls to produce this effect.) George Lucas founded THX sound systems to allow home theaters to match this movie sound with far fewer speakers. The THX name is from one of his early movies, "THX 1138" with Robert Duvall -- a big name in Hollywood. THX and Dolby are also big names, and big standards, in modern surround sound systems.

So, to get back to distortion, phase distortion would shift the relative phase of the component sine waves, and both change wave shape as well as modify the stereo imaging. We may do this on purpose if we’re trying to create surround sound effects out of normal stereo signals. Shifting phase is done with reactive components such as capacitors and inductors. Recall that those passive components were also used to make filters. It is important in filter design to control any phase shift and very effective filters, such as the one I described for measuring total harmonic distortion, must be designed very carefully to minimize phase shift. You can translate “designed very carefully” to “expensive.”

Other Distortion

I mentioned earlier that vacuum tubes go into overload and clip in a little “rounder” or “softer” manner than solid-state devices. In general, vacuum tubes are more linear than transistors. Modern transistor audio amplifiers are very linear, but they obtain that linearity using something called feedback -- negative feedback. One problem with feedback is that it can create certain types of amplitude distortion, especially when the signal is changing rapidly, as with high frequencies and transients, which is the name for quickly changing wave shapes -- for example, when you strike a drum or cymbal. One of these transient-induced distortions is called SID, or slew-induced distortion, and the other is called TIM, or transient intermodulation distortion.

Slew-induced distortion (SID, sometimes slew-rate induced distortion) is caused when an amplifier or transducer is required to change output (or displacement) -- that is, to "slew" -- faster than it is able to do without error. A transducer is a device that converts the mechanical energy of sound or music to or from electrical signals. Some example transducers are guitar pickups and microphones, as well as loudspeakers. Being physical, and having mass, they can have problems with transients, which require very rapid movements. Special design of both microphones and speakers is required to capture and reproduce these very high-frequency phenomena.

During these displacement signals or transients, other signals may suffer considerable gain distortion, leading to intermodulation distortion. A loud snare drum wave may actually distort the guitar sound in a recording when amplified in the "stereo." Transient Intermodulation Distortion (TIM) is a form of modulation of the signal during these rapid changes, and it may also cause compression or reduction of the peaks, which would add harmonics. Basically, the suddenly changing input causes nonlinearity in the amplifier response that can momentarily create modulation effects. It has to do with the impact of the sudden change on the feedback network, and delays in that feedback, and phase changes in that feedback, and ... Modern amplifier design is very complex. Limiting high-frequency response to 20 kHz helps, since the higher frequencies contribute a lot to this complexity, but even 20 kHz response requires some good designs by some smart engineers. That is why the best stereo equipment can be very expensive.

Modern solid-state stereo amplifiers minimize these effects, but they can still exist. A lot of purists point to the fact that vacuum tube audio amplifiers have much less negative feedback in their design, and so they are less affected by these transient distortion effects. However, solid-state devices, due to their very small size, can handle the high frequencies in a transient or rapidly changing wave better. So, as they say, the jury is still out, and you should sample both with your own ears to decide which is better -- vacuum tube or solid state. Finally, remember, the weakest parts of the chain are things like microphones, phono cartridges, and especially loudspeakers, not to mention the furniture and window coverings and carpet in the listening room. Are you ready to rebuild your home entertainment room?

Wow and Flutter, Tremolo and Vibrato

Finally, there are some effects where the frequencies of fixed tones vary. This can be on purpose, as with vibrato added to an organ tone or tremolo effects. If they are intended, that is fine, but the amplifier should not add them on its own. Tape players such as reel-to-reel, cartridge (remember the eight-track), and cassette can have problems where the tape speed varies, and that is called “wow” and “flutter.”

"Flutter" is a rapid variation of signal amplitude, frequency, or phase. In recording and reproducing equipment, the deviation of frequency caused by irregular mechanical motion, such as that of capstan rotation speed in a tape transport mechanism, during operation can be the cause. "Wow" is a relatively slow form of pitch variation which can affect both phonograph records and tape recorders.

We also purposely add these "flutters" to the music. Tremolo and vibrato are terms you hear (no pun intended). Technically there is a difference between “vibrato” and “tremolo”: tremolo is a wavering of the amplitude of the signal, and vibrato is an actual change of frequency. Violinists introduce vibrato by rapidly rocking the finger holding down the string back and forth. This changes the pitch slightly. There are several reasons that violinists do that. First, since the violin has no frets, it is difficult to finger at exactly the right point on the string to produce a given note. A little vibrato increases the chance you will hear the correct pitch. In addition, violin sounds are very nearly pure sine waves, and we've mentioned that pure sine waves are not very pleasing to the ear. A little vibrato adds variation, and that makes the violin note much more musical to the ear. Some singers add a lot of vibrato to their voice; another trick to hit the right pitch.

Similarly, with electronic organs, the notes can be rather pure. So organs often have vibrato and tremolo added to enrich the sound. Tremolo, the amplitude effect, is pretty easy to produce in an electronic organ. Vibrato, the pitch effect, is a bit harder to do, especially in the old days with only tube circuits, since each circuit cost a lot to build. Current solid-state organs have a lot more circuits, since transistors and integrated circuits are dirt cheap.

With the old tube organs, this vibrato issue was more of a big deal, so Leslie invented a speaker cabinet that had rotating horns over the speakers. As the horns spin, they add a Doppler effect to produce a real vibrato in the musical tones.

Doppler effect is what you hear when a train blows its whistle as it passes by. If you listen to a train whistle as the train approaches, it seems higher pitched than when the train is moving away from you. That is because, when the train was approaching, the wavelength of the sound was shortened by the train's motion toward you. Shorter wavelength is higher frequency. As the train moves away, the opposite effect occurs and the wavelengths are lengthened by the moving of the train, which lowers the pitch.

The rotating horns in the Leslie speaker cabinet have a similar effect, and the result is a true vibrato. The best part of the Leslie design is that it has two speeds, and the belt that drives the rotors has some slippage. So, when you change speeds, the effect is a gradual and noticeable transition from slow to fast or from fast to slow. Modern electronic organs can duplicate the Leslie sound (although not perfectly), and can even mimic the gradual change from slow speed to fast. But give me a real Leslie and a Hammond B-3 any day!!

Finally, quantization noise is part of the analog-to-digital conversion process. In analog-to-digital conversion, the difference between the actual analog value and the quantized digital value is called quantization error or quantization distortion. This error is due either to rounding or to truncation of the binary number produced. Digital systems can also suffer "jitter," caused by variations in the clocking circuits of the analog-to-digital converter. I'll get into that more when we get to digital music. Some consider this quantization error noise, and others refer to it as a distortion. And that is all the noise and distortion we need for now. So, until next time, when I’ll “Bring in ‘da Noise” again, good-bye.
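P.S. For the digitally curious, quantization error is easy to see in code. Round a sine wave to 8-bit steps in Python and look at what's left over (a toy sketch, not how a real converter is built):

    import numpy as np

    t = np.arange(48_000) / 48_000
    analog = np.sin(2 * np.pi * 1_000 * t)

    bits = 8
    steps = 2 ** bits                          # 256 levels across -1..+1
    digital = np.round(analog * steps / 2) / (steps / 2)

    error = analog - digital                   # the quantization "noise"
    print(np.abs(error).max())                 # at most half a step, ~0.0039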


Originally written on Feb. 21, 2012 during a visit to my Dad's home in Hillsboro, Oregon and posted on Facebook. During my two week visit with my dad, I wrote an article a day. I started with a long series on the Science of Photography which had thirteen individual articles. I then started this series on the Science of Music. It isn't finished and I have a lot more to say. I hope to add to this series in the future.

The Science of Music -- Part Five

Previously we learned two very important concepts which we will now combine into a useful overall idea that is key to all music reproduction and recording. Regardless of whether we use vacuum tubes or transistors or integrated circuits to amplify, this idea is important for everything from stereo amplifiers to public address systems to guitar amplifiers (plus a whole lot more, including radio and television receivers, radar, and even digital photography). It is also important to all kinds of recording, from phonograph records to wire to tape to digital recording -- of both audio and video signals. Why, it is even important in astronomy and physics.

We will combine the idea of electronic "filtering," the selective attenuation or amplification of discrete frequencies, which allows us to separate an electronic signal into frequency bands of low, middle, and high, and even to select in or out very specific bands of frequencies as narrow as 1/3 of an octave and even smaller.

This idea of filtering will be applied to our knowledge that all complex tones, and therefore all music, are made up of combinations of individual sine waves, each with a specific amplitude and frequency (and phase). These ideas, combined with how the ear and the mind "hear" speech and music (something called psycho-acoustic effects), will be used to mold and shape the tones to match our musical preferences and to compensate for the limits and characteristics of every device in the amplifying and recording chain, from microphone to amplifier to speaker, including the recording media and electronics in between.

This is an exciting journey, and now that we're past the flatlands of Kansas and can see the mountains of Colorado ahead, it will be a most interesting one. So lean back and enjoy the scenery. Your friendly tour guide (that's me) will discuss some of the points of interest along the way as we dive deep into filters. Here comes the low-pass, the high-pass (high passes are what Colorado is famous for), as well as band-pass, band-elimination, and multi-band equalizers. Look over there! That's the RIAA record equalization. Isn't it beautiful?


I’m sure you all recall that the range of human hearing is 20 Hz to 20 kHz. You also remember that audio amplifiers usually match that frequency response range since there is no reason to amplify frequencies above the range of human hearing, and little reason to amplify the frequencies below that range, although, just to be on the safe side, some very high quality audio amplifiers have a relatively flat frequency response from 10 Hz or a little lower up to 30 or 40 kHz. That’s just to make sure no good frequency is “attenuated.”

Oh, "attenuated," that's a new word, and it's one we'll need to talk about filters. Just as a coffee filter "filters out" the coarse coffee grounds by trapping them in the paper, electronic filters "filter out" frequencies we don't want to pass by attenuating them. Wordnetweb at Princeton University defines attenuation as a weakening in force or intensity; for example, "attenuation in the volume of the sound."

Filters can also amplify, and some fancy electronic filters do include transistors and other amplification devices, but most filters just reduce the intensity or amplitude of the frequencies you wish to reject from the overall signal -- and that reduction is called "attenuation."

Old-fashioned tone controls, which we've all used our entire lives, are just filters. The bass control can turn down (or up) the lower tones by filtering. The treble tone control is the same except it affects the higher frequencies. A multi-band equalizer is just a very fancy tone control that lets you adjust the frequency response of your amplifier by octave or even one-third of an octave. We used these kinds of equalizing filters to fine-tune the frequency response of the sound system at church to eliminate certain resonances and lift up some flat spots caused by the combination of the room design and speaker irregularities. Careful equalization of a sound system's frequency response will eliminate feedback and squeals and make the sound easy to hear and understand.

Most filters are made up of Capacitors and Inductors (coils), which are called passive components since they don't amplify like transistors or tubes (which are called active components). The most basic passive component is the Resistor. It adds resistance to the flow of an electrical current. Resistors, within limits, are not affected by frequency; they respond the same to all frequencies. Capacitors, on the other hand, also resist current flow, but their resistance decreases as frequency goes up. Capacitors completely block DC, which is considered 0 Hz. Inductors are the opposite: their resistance goes up with frequency. So you can combine Capacitors (called "C") with Inductors (called "L" because "I" was already taken by current -- well, that's because "C" was already taken by ...) and Resistors (thankfully called "R") to make a filter that responds differently to different frequencies.

(Technically, the resistance that Capacitors and Inductors have to various frequencies is called Reactance, with the symbol "X" since "R" was taken. Further, if you add "X" + "R" you get Impedance which is "Z." Now you know why you have to study this double "E" stuff for so many years. By the way, transistors are often denoted by "Q" since tubes typically were called "U" since "T" was taken for Transformers and V = Volts and ... never mind!)
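
If you'd like to see the arithmetic behind all that alphabet soup, here's a minimal Python sketch. The formulas are the standard ones (X_C = 1/(2*pi*f*C), X_L = 2*pi*f*L, Z = sqrt(R^2 + X^2)); the part values are hypothetical.

    import math

    def capacitive_reactance(f_hz, c_farads):
        """X_C = 1 / (2*pi*f*C): goes DOWN as frequency goes up."""
        return 1.0 / (2 * math.pi * f_hz * c_farads)

    def inductive_reactance(f_hz, l_henries):
        """X_L = 2*pi*f*L: goes UP with frequency."""
        return 2 * math.pi * f_hz * l_henries

    def impedance(r_ohms, x_ohms):
        """Z combines R and X: Z = sqrt(R^2 + X^2)."""
        return math.hypot(r_ohms, x_ohms)

    # A 1 uF capacitor and a 10 mH inductor across the audio band:
    for f in (20, 1000, 20000):
        print(f, "Hz:",
              round(capacitive_reactance(f, 1e-6)), "ohms (C),",
              round(inductive_reactance(f, 10e-3)), "ohms (L)")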

Simple filters have just a couple of components, while complicated filters can have dozens. Some electronics engineers spend their entire careers doing nothing but designing filters. It is a rich area of engineering. By the way, once a signal is digitized, it can be filtered by software without the use of any L or C or R. That is how modern digital editors such as Pro Tools work. But, as usual, I digress.

We describe filters by the effect they have on frequency. A low-pass filter lets low frequencies "pass through" but blocks or attenuates high frequencies. Here's the frequency response curve.

Frequency response of a low-pass filter with a 6 dB per octave (20 dB per decade) slope or attenuation
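
For those who like to check such a chart with a calculator, here's a minimal sketch in Python of a one-pole RC low-pass filter. The resistor and capacitor values are hypothetical, picked to put the cut-off near 10 kHz.

    import math

    def rc_lowpass_gain_db(f_hz, r_ohms, c_farads):
        """Gain of a one-pole RC low-pass filter, in dB."""
        fc = 1.0 / (2 * math.pi * r_ohms * c_farads)  # the -3 dB cut-off
        return 20 * math.log10(1.0 / math.sqrt(1.0 + (f_hz / fc) ** 2))

    R, C = 1600.0, 0.01e-6   # hypothetical values, cut-off near 10 kHz
    for f in (1000, 10000, 20000, 40000):
        print(f, "Hz:", round(rc_lowpass_gain_db(f, R, C), 1), "dB")
    # Roughly 0, -3, -7, -12.3 dB: about 6 dB more cut per octave above cut-off.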

High-pass filters block the lows and let the high frequencies pass.

High-pass Filter

You can combine these two filters and get a band-pass filter, which only passes the frequencies within a certain range -- the frequencies between the filter cut-off points. There are also band-stop or band-elimination or band-cut filters (they all mean the same thing), which block a small band of frequencies. Most audio amplifiers contain band-pass filters with cut-off frequencies of 20 Hz at the low end and 20 kHz at the high end.

Low-pass, high-pass, band-pass (combined low- and high-pass), and band-cut or "notch" filters.

All of these filters have their use in audio equipment. You can control (or attenuate) the bass tones with a high-pass filter. Conversely, you can control treble with a low-pass filter. Fancy multi-band equalizers are just a bunch of band-pass / band-cut filters. We sometimes call these multi-band equalizers "graphic equalizers." Note how the controls are designed to represent the frequency response curve controlled by the individual settings.

Graphic equalizers come in octave-band models, which typically have 10 or 12 individual controls, duplicated for the left and right channels; one-third-octave graphic equalizers most often have 30 controls per channel (see the little sketch below). I don't know if you need a graphic equalizer on your stereo system, but they are essential in a good sound system to balance the amplification against the room acoustics. They're also useful during recording sessions, though in a recording studio it is mostly a software implementation that is used. Equalization is part of most DAWs, or Digital Audio Workstations.

Multi-band Equalizer
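
Those octave-band center frequencies come straight from the doubling rule. Here's a tiny Python sketch; the 1 kHz reference point and the ten-band count are the usual convention, and front panels round the results to nominal labels like 31.5 and 63 Hz.

    # Octave-band centers, built by doubling around a 1 kHz reference.
    centers = [1000 * 2 ** n for n in range(-5, 5)]
    print(centers)
    # [31.25, 62.5, 125.0, 250.0, 500.0, 1000, 2000, 4000, 8000, 16000]
    # Equalizer panels label these with rounded values such as 31.5 and 63 Hz.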

There are also special band-cut filters called "notch" filters that attenuate or "knock out" a very narrow band. For example, one source of noise in music is the 60-cycle hum that can come from the 110-volt power source. One way to reduce the hum is to set a very narrow bandwidth filter -- a notch -- to 60 Hz. (In Europe it would be set to 50 Hz. Do you know why?) Although you may lose a bit of the musical content at 60 Hz, it may be worth it to eliminate the annoying hum.
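
Here's a sketch of that idea in Python using the SciPy signal library. The sample rate, the Q of 30, and the test signal are all just illustrative choices.

    import numpy as np
    from scipy.signal import iirnotch, lfilter

    fs = 44100                    # sample rate, Hz (illustrative)
    f_hum = 60.0                  # set to 50.0 in Europe
    Q = 30.0                      # higher Q means a narrower notch

    b, a = iirnotch(f_hum / (fs / 2), Q)   # design the 60 Hz notch

    # Test signal: a 440 Hz "note" plus some 60 Hz hum.
    t = np.arange(fs) / fs
    noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * f_hum * t)
    clean = lfilter(b, a, noisy)  # the hum is knocked down, the note passes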

Very low frequencies may also be filtered out by a "rumble filter," which is designed to remove the sounds of the turntable's rotation and motor that are coupled through the needle. High-quality turntables take great care to remove these mechanical noises, using belt drives to reduce the coupling of motor noise and balanced platters so they turn smoothly.

I said that most filters cut or attenuate, yet equalizers seem to both boost and cut frequency ranges. How is that done? Well, you could amplify all frequencies a certain amount and then set all the band filters to mid-range attenuation. If you adjust for more attenuation, the level drops, and if you adjust for less attenuation, that seems like a boost. Or, of course, you can combine a filter with an amplifier. It doesn't make any difference.

By the way, the frequency at which a filter starts to work is called the "cut-off" frequency, and it is defined as the frequency at which the signal is cut by 3 dB (which is half power). In a low-pass filter, the 3 dB point marks the frequency where the cut starts, and all frequencies above that point are attenuated. The curve has a "slope," and the slope is often measured in decibels of reduction per octave (doubling) or per decade (times ten).

In addition, some filters are not adjustable and are used to provide some condition required for engineering reasons. A good example is the RIAA equalization used for phonograph records, also called the "Gramophone." I want to get into more detail about recording, but that will have to wait. For now I'll just make the simple (and true) statement that the groove in a record is just a representation -- a "picture," if you will -- of the music's total waveform. The phonograph needle tracks the "wiggles" in the groove and produces an electrical signal that duplicates the original sound wave.

But there are problems. First, as we said, the human ear is not very sensitive to bass, so -- frankly -- the bass is recorded LOUD. But that can be a problem. As manufacturers developed the modern LP (for Long Play) 33-1/3 and 45 rpm records, they moved the grooves closer together than they were on 78 rpm records to obtain the longer playing time. But then the wide excursion or "wiggling" caused by the bass notes made one groove bump into the next.

The solution was really quite simple. Reduce the amplitude of the bass frequencies before they are "cut" onto the record. Then have the record player amplifier boost the bass tones back to their original level. That is called equalization. The trick was that all the record companies and all the record player manufacturers had to make their filters work the same way. As I mentioned earlier, that is specified by the cut-off frequency and the slope of the filter curve in dB/octave.

So, the Recording Industry Association of America, or RIAA, defined the required bass reduction and labeled it "RIAA Equalization." All the record producers and record player manufacturers follow that equalization specification, and the net result is that bass is reduced before recording on the disk to allow the grooves to be close together, and then the bass frequency levels are restored in the amplifier by equalization.

While the RIAA was at it, they were also able to address another problem called "noise." Recall that noise is any signal we don't want. One very common noise in electronic amplifiers is "hiss," which is high frequency noise. One source of "hiss" is the tiny irregularities in the record groove left over from manufacturing, plus the dust that settles into the grooves. No matter how smooth the cutting is, there will be tiny imperfections, and they will produce noise in the record player's output. In addition, these imperfections, dust, and even scratches can cause all kinds of unwanted noise, including pops and ticks. All these problems are made up of mostly high frequencies, so a pre-emphasis of the highs before recording allows a de-emphasis in playback, thereby reducing most of these sonic artifacts.

The idea is to boost the high frequencies before recording them on the record. This is called "pre-emphasis." Then the record player can reduce these high frequencies, which restores the music to its normal level but also reduces the noise caused by the record groove irregularities. It is a simple thing to do, and so the RIAA also added treble boost to the RIAA equalization spec. Here is an example of the frequency response of a modern record player, compensating for the bass reduction and treble boost made during recording. The equipment that made the record had exactly the opposite equalization curve: it reduced the bass and boosted the treble.

RIAA Equalization Curve
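
If you'd like to see the shape of that curve numerically, here is a minimal Python sketch built from the three standard RIAA time constants (3180, 318, and 75 microseconds), normalized to 0 dB at 1 kHz, which is how the spec is usually quoted.

    import math

    # The three standard RIAA time constants, in seconds:
    T1, T2, T3 = 3180e-6, 318e-6, 75e-6

    def riaa_playback_db(f_hz):
        """Playback (de-emphasis) gain in dB, set to 0 dB at 1 kHz."""
        def mag(f):
            w = 2 * math.pi * f
            return math.hypot(1, w * T2) / (
                math.hypot(1, w * T1) * math.hypot(1, w * T3))
        return 20 * math.log10(mag(f_hz) / mag(1000.0))

    for f in (20, 100, 1000, 10000, 20000):
        print(f, "Hz:", round(riaa_playback_db(f), 1), "dB")
    # About +19.3, +13.1, 0, -13.7, -19.6: bass restored, treble hiss cut.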

A lot of very modern audio amplifiers do not have this required RIAA equalization circuit because not many people play records any more in this age of tape and CDs. The equalization was applied to an amplifier input usually labeled "phono," and these modern amps often don't have a "phono" input.

If you have a turntable, you will need an amplifier with a phono input that includes the RIAA equalization. Or you can purchase a preamp that has the required equalization. In addition, the output from the needle or phono cartridge on the turntable is very low level, so you need additional amplification, and it needs to be high quality, low noise amplification. Some modern turntables have the required amplification and equalization built into the turntable base and provide a high level, properly equalized output that can be connected to your amplifier's "tape in" or other inputs. I have an old Dynaco preamp I built back in the 60's that I use with my turntables, and it is still working pretty well even after 50 years of use. I wish I could say the same about my knees!

The RIAA equalization curve has been in use since 1954. Before that there were many standards that did basically the same thing. It wasn't a new idea. Even 78 rpm records had equalization similar to the RIAA standard, just with a slightly different cut-off frequency or filter slope. It doesn't matter much, but serious high-fidelity enthusiasts who play those old records from the 30's and 40's use amplifiers with adjustable equalization so they can compensate, and that bunch knows what values to use for Columbia or Decca or Victor recordings, since each recording company used slightly different equalization. So RIAA wasn't really new; it was just a standardized specification to make the process simpler.

Other organizations, such as the National Association of Broadcasters, or NAB, were also involved in setting record standards. Remember, most radio broadcasts in the 40's and 50's were put on records for distribution to individual radio stations and overseas to the military. If you're a fan of "old time radio," you've been listening to "transcriptions," or recordings that were made on large records. Tape recorders didn't become popular until well after the end of World War Two.

All record companies and audio equipment manufacturers now follow the 1954 standard. Foreign companies had some variation, but it was small, and since the 70's everyone, world-wide, has used the RIAA standard. You might still want to adjust your system for older English or Japanese records -- I know some of you out there are collecting those foreign disks. If you want to learn more, check out this website:

http://midimagic.sgc-hosting.com/mixphono.htm

By the by, similar equalization to control amplitude and minimize noise is used in everything from FM broadcasting to video cameras and even in Photoshop (Unsharp Mask, anyone?).

So, there you have it. A brief course in electronics from R to L to C and beyond from Richard Lee Cheatham -- that's me. We've cut off dBs and octaves and boosted our trebles through pre-emphasis. If you are interested, dig deeper. A few Google searches and a few hours spent on Wikipedia and you'll know all about Reactance and Impedance, or Zobel, Chebyshev, Butterworth, and Elliptic filters, and pi and tee designs. Why, you'll even learn about acoustic filters using crystals. Next thing you know you'll be reading about Hammond spring reverb units, and you'll be writing your own "Music of Science" or something like that. I look forward to reading it.


Originally written on Feb. 19, 2012 during a visit to my Dad's home in Hillsboro, Oregon and posted on Facebook. During my two week visit with my dad, I wrote an article a day. I started with a long series on the Science of Photography which had thirteen individual articles. I then started this series on the Science of Music. It isn't finished and I have a lot more to say. I hope to add to this series in the future.

The Science of Music -- Part Four

In the last three articles I've discussed waves to the nth degree. I spoke often of the two essential characteristics (or parameters) of waves: frequency and amplitude. There is also a third attribute called phase, and, in addition, frequency can also be represented by its reciprocal, called the period (or, measured as a distance, the wavelength). Amplitude is easily changed, either through amplification or through some interesting acoustic designs such as the shape of an auditorium or the shape of a guitar body.

Now it is time to focus (no, we're not going back to photography) on frequency. That is what I frequently do in my spare time. Actually, the two main parameters, amplitude AND frequency, will be the point of this discussion. We're about to dive deep into frequency response charts and equalization and filter curves. That will require we add another item to our warehouse of knowledge -- a mathematical item straight out of high school math -- called "logarithms."

"Logs," as they are called by people who find them as easy as falling off of same, will not be explored in depth. There's a lot about logs that was very interesting before the invention of the modern calculator. Before calculators there were log tables and slide rules (which were created with "built-in" logs). Logs are still very important to advanced math, but I'll bet they don't get the focus in high school that they got back in the last century, when they were invaluable aids to calculation.

We just want to understand logarithmic scales, and we won't even have to get into the math to do that. For reasons I'll soon explain, logarithmic scales are often used to compare both sound (and music) amplitude and frequency, and logs are at the core of decibels. That's a tenth of a bel (deci = tenth), and the "bel" was named after Alexander Graham Bell -- famous for both the telephone and the cracker. (Is there corn in a graham cracker? Because there sure is a lot of corn in this writing!) Interesting that the "bel" is spelled with only one "l." I might have to write an article about that ... but not today.


Today we’re going to talk about the human ear and its sensitivity to both amplitude and frequency. I assume most have heard (no pun intended) about the frequency range of human hearing from 20 Hz to 20 kHz (“kHz” is 1,000 Hz.) That is a wide range of frequencies. In addition, we’ve learned that the harmonics of fundamental tones are multiples of the fundamental frequency of the note. This is also true of the “keys” on a piano, that is, the frequency of the musical notes.

Consider the low note "A" that is three octaves below the A above middle C. The frequency of that low note is 55 Hz. The next "A" up the musical scale (that is, up one octave) is twice that frequency, or 110 Hz. The next "A" higher is 220 Hz. After that comes the A just above middle C, at 440 Hz. It keeps going like that, with each A up the scale twice the frequency of the one below.
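
Here's the doubling in a few lines of Python, using the common "scientific pitch" labels (A1 for the low 55 Hz A, A4 for the 440 Hz A) just for readability:

    # Doubling the low A at 55 Hz, octave by octave:
    low_a = 55.0
    for octave in range(5):
        print(f"A{octave + 1}: {low_a * 2 ** octave:.0f} Hz")
    # A1: 55 Hz ... A4: 440 Hz (the A just above middle C) ... A5: 880 Hz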

As I've said, that is called an "octave." "Octave" actually means "eight," and that is because it takes eight notes, counting both ends, to step from one note to the same-named note above it. For example, in the scale of C major, there is C - D - E - F - G - A - B - C. And these "same named" notes, obviously, are very closely related. That was one of the earliest discoveries in music, and the Greeks, who founded a lot of the math we've been chopping through, noticed this double / half relationship. The precision and order implied by this simple doubling influenced their thinking, and this is apparent in both their math and their philosophy of science ... and that topic is definitely due for a long series of discussions ... but, again, not today.

I won't get any more into the "music" since this is a series on "science," but I'm sure most musicians out there are very aware of this octave relationship. It is no surprise that all the tones called "A" on the keyboard (or guitar or trumpet) are multiples or harmonics of each other. When you think about what we've learned about complex waves containing multiple frequencies, it makes good sense. We will learn more about that later, too. Often, when playing music or "jamming," the correct note to play will be an A, and the decision to play a low A or a high A depends on what you want to do with the melody: make it go up or make it go down. Oh, wait, I said I wasn't going to get into music today, just science. Sorry!

So it is natural, if you are looking at the frequencies of notes on the scale or the range of notes the human ear can hear, to use a measurement that has more "resolution" at low frequencies than at higher frequencies. Such a measurement is called a "logarithmic" scale.

(Warning: My first use of the word "scale" means the notes on the keyboard or staff in music. My second use of the word "scale" means the numbers on the bottom or side of a graph or chart. Sorry about the confusion.)

One example would be a chart with 20 Hz at the left end of the bottom scale (or "x-axis") and 40 Hz a given distance to the right. Assume it is one inch along the bottom of the chart from 20 Hz to 40 Hz. Then 80 Hz sits at the same distance, one inch past the 40, then 160 Hz another inch along, etc. Equal distance for each "doubling."

That would be a logarithmic scale with powers of two. That would work fine for a frequency chart, but most log scales use powers of ten. That is, they go from 10 Hz to 100 Hz to 1,000 Hz to 10,000 Hz in equal distances on the graph or scale. This logarithmic method of charting matches the character of frequency, in that it is the doubling of frequency that matters most.
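
A tiny Python sketch makes the idea concrete. Assume, hypothetically, one inch of chart paper per power of ten:

    import math

    # Distance along a log-scaled axis: one inch per decade (power of ten).
    def inches_from_left(f_hz, f_left=10.0, inches_per_decade=1.0):
        return math.log10(f_hz / f_left) * inches_per_decade

    for f in (10, 100, 1000, 10000, 100000):
        print(f, "Hz at", inches_from_left(f), "inches")
    # 0, 1, 2, 3, 4 -- each power of ten takes the same space on the chart.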

In addition to our hearing and discernment of frequency being "logarithmic," it turns out the ear's sensitivity to amplitude or loudness is also logarithmic. We notice changes in volume that are multiples. Basically, in terms of measured amplitude, the ear responds to a percentage of change, not an absolute amount of change. For most people, the smallest change they can detect in sound volume or amplitude is about a 26% increase in power. (And that 26% change is the value of 1 decibel -- how convenient.)

So, if we are going to make a chart with loudness or amplitude on the "y" axis (up and down) and frequency on the "x" axis (left and right), both axes should be logarithmic. That is called a log-log scale, and almost all frequency charts have this double logarithmic scale. Here is an example of a frequency chart for a stereo amplifier.

Frequency Response Graph

Notice that the frequency scale, from 10 Hz to 100,000 Hz, is logarithmic: the scale changes as you move to the right, going from 10 to 100 to 1,000 to 10,000 to 100,000. It may not be obvious that the amplitude scale on the left is also logarithmic. That is because, instead of showing signal strength as volts (or milli-volts: thousandths of a volt), it shows amplitude in decibels or "dB."

As I said, decibels are themselves a logarithmic measurement. A ten dB increase is times ten. A twenty dB increase is times 100. And a thirty dB increase is times 1,000. By the way, on a power scale, 3 dB is half / double. We will see (or is it "hear") sound (and music) amplitude often given in decibels or dB because dBs are a logarithmic measure and the ear's response to volume or loudness or amplitude is also logarithmic. A small change in the amplitude of a soft sound is detected, but it takes a large change in a loud sound to be noticed -- but I repeat myself.
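
Here is that decibel arithmetic in a minimal Python sketch, using the standard power-ratio formula:

    import math

    def db_from_power_ratio(ratio):
        """dB = 10 * log10(P2 / P1) for a power ratio."""
        return 10 * math.log10(ratio)

    print(db_from_power_ratio(2))     # ~3 dB: double the power
    print(db_from_power_ratio(10))    # 10 dB: ten times the power
    print(db_from_power_ratio(100))   # 20 dB: one hundred times
    print(db_from_power_ratio(1.26))  # ~1 dB: the ~26% smallest audible step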

By the way, I’m using “volume,” “loudness,” and “amplitude” as the same thing. They are subtly different, and I’ll explain that too -- when it is time.

The frequency response chart I included shows variations in the amount of amplification across the range of human hearing. This is due to the design of the amplifier. The goal is for an amplifier to have a very "flat" response curve. That means the output at different frequencies is about the same, so the curve is a straight line, or "flat." Such perfection may not be possible, but high-quality amplifiers come very close to attaining this flat response. (It is usually the speakers that cause the most variation with frequency.) But all audio amplifiers drop off at the lowest and highest frequencies.

There is no point amplifying frequencies above the range of human hearing since most dogs, or dolphins, or bats don't have credit cards and don't buy a lot of stereo equipment. Plus, amplifying high frequencies that can’t be heard would waste power and even cause interference. We don’t want your audio amplifier picking up the local radio or TV station.

On the low end, most amplifiers only go down to about 20 Hz. For one thing, the design of vacuum tube amplifiers required something called "capacitive coupling," which made it difficult to amplify very low audio frequencies that you couldn't really hear anyway. However, modern movie soundtracks have some really low tones fed to "sub-woofers" that you "feel" more than hear. Some solid state amplifiers, called "direct coupled," can go down to 5 Hz or even lower, possibly all the way down to 0 Hz, or DC. And that can really shake the seats.

(Think for a moment about what "all the way down" means. An octave lower than 20 Hz is 10 Hz. Next down is 5 Hz. Then 2.5 Hz. Then 1.25 Hz. It's a LONG WAY DOWN.)

There can be problems with these direct coupled amplifiers because they can put out a DC voltage, and that is not what you want in your loudspeaker. It can cause both excessive heating in the speaker and distortion. The DC can be adjusted to zero, but that is a hassle, so there are very few true DC output amplifiers, except in laboratory equipment.

Also, any of these very low frequency response amplifiers can have problems with record turntables and something called “rumble,” which is the very low frequency component added by the rotating turntable. So, if you have a direct coupled amplifier in a modern sound system used for movies and all that jazz, then buy a good quality turntable or you’ll think there’s a Harley in the background when you play records.

Now we've seen what an amplifier frequency response curve looks like. What does the frequency response of the human ear look like? You'll be surprised.

Equal Loudness Contour

Several things to note. First, ears are not manufactured by RCA or Pioneer, so they aren't all the same. As we know from experience, human ears vary. Some are big. Some are small. Some have a lot of piercings. This chart is an average, and it is used for certain studies, calibrations, and tests. Second, note that the frequency response varies with loudness. Electronic amplifiers usually respond the same to all levels of signals. That is called a "linear" design. The human ear is decidedly non-linear. If you compare the two charts above carefully, you'll see the amplifier has about the same graph at different sound levels, but the curves for the human ear are quite different at different amplitudes. Hmmm.

Note that when the line on the chart goes up, it takes a louder signal to hear -- the ear is less sensitive to that frequency. Our hearing is best where the graph is lowest, nearest the bottom. With very soft sounds, our best hearing is at about 3 - 4 kHz. At loud levels it is fairly flat from 500 - 3,000 Hz. It is no coincidence that that is the primary frequency range of speech. (Also note they didn't test high frequencies at 100 dB for fear of damaging the ears of the test subjects, so that part of the curve is just estimated.)

The hearing response of the ear is also affected by age, and old people, like yours truly, can't hear so well in the upper ranges. What's that? ... Never mind!

Also note the relatively low response to bass sounds below 100 Hz, and how the ear hears less and less of them as the volume drops. I said that there were some differences between some of the terms being used. "Amplitude" is an engineering measurement, either in volts or in power units, typically expressed in dB. "Volume" is what we call the "gain" control on your amplifier, and volume is roughly the same as amplitude.

"Loudness," on the other hand, is about how the ear hears it, and you can see that bass tones must be much louder to be heard at the same level as 1 kHz -- and even more so at low volume levels. That's right, our response to bass goes up with volume. That's why some old stereos had a "loudness" button. You pressed that button, and the bass was boosted. In theory, that was for listening to music at a soft level: you pressed the "loudness" button to increase the bass to make up for the ear's non-linear response. Of course, I always turned up the volume AND pressed the "loudness" button because I LOVE BASS. What did you say? I said ... oh, never mind.

Some amplifiers label the gain control "volume" and some label it "loudness," although the second label tends to imply the amplifier changes its frequency response automatically: as you turn up the volume, it turns down the bass boost. Or your amplifier may have buttons or knobs to adjust the frequency response, or even a row of little controls to adjust the gain for each octave. That will be the next topic: how all these tone controls and equalizers work, as well as the equalization of phonograph records (RIAA) and other topics relating to tape recorders and CD players. At least we'll get started putting these "wave" and frequency ideas to work.

Well, gentle readers, that’s enough for today. I’ll come back to this topic next time as we talk about equalization, tone controls, and filters. We will learn what they are, how they work, and why we use them. You’ll see some more of these log-log frequency charts. Until then, turn up the volume, press the loudness button, turn on the sub-woofer, and FEEL THE NOISE ...

Come on feel the noise
Girls rock your boys
We'll get wild, wild, wild
Wild, wild, wild.

So you think I got an evil mind
I tell you honey
I don't know why
I don't know why

So you think my singin's out of time
It makes me money
I don't know why
I don't know why
Anymore
Oh no

Come on feel the noise
Girls rock your boys
We'll get wild, wild, wild
Wild, wild, wild.

Now you’ve got it. Get your guitar. Go to the closet. Put on your sister’s dress, maybe a little eye shadow. I’ll make a rock star out of you yet.


Originally written on Feb. 18, 2012 during a visit to my Dad's home in Hillsboro, Oregon and posted on Facebook. During my two week visit with my dad, I wrote an article a day. I started with a long series on the Science of Photography which had thirteen individual articles. I then started this series on the Science of Music. It isn't finished and I have a lot more to say. I hope to add to this series in the future.