The Science Behind Recording Music
If we take the time to study a few principles behind how sound works and the science behind recording music, we better understand what happens when we record. We will be better equipped to understand why the gear and software we use work the way they do, and to understand the terminology associated with that gear and software in the studio. This insight will help us in our pursuit to record and make music of a higher quality. So let’s look at the science. I will try to keep this as painless as possible.
Sound travels as waves. It is not necessary to read physics and mathematics books in order to understand how sound waves work. A simple example can demonstrate a sound wave’s behavior in motion. Imagine a puddle that you’ve just dropped a small pebble into. The way the waves spread out from the impact point is a good two-dimensional representation of how sound waves travel in air. However, when sound is generated in the studio or at a live venue, it travels in three dimensions. The wavelength of a sound wave is the distance between successive compressions or rarefactions.
Compressions: Areas in the wave where the air molecules are pushed close together and so are at a slightly higher pressure.
Rarefactions: Areas in the wave where the air molecules are spread further apart and so are at a slightly lower pressure.
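If you like, the relationship between wavelength and frequency can be sketched in a few lines of Python. The function name and the 343 m/s figure (a common approximation for the speed of sound in room-temperature air) are my own assumptions for the example, not values from the text:

```python
# Wavelength is the speed of sound divided by frequency: lambda = v / f.
SPEED_OF_SOUND = 343.0  # meters per second, approximate for room-temperature air


def wavelength(frequency_hz):
    """Distance between successive compressions, in meters."""
    return SPEED_OF_SOUND / frequency_hz


print(wavelength(20))      # ~17 m: a 20 Hz wave is longer than most rooms
print(wavelength(20000))   # ~0.017 m: a 20 kHz wave is under 2 cm
```

This is one reason low and high frequencies behave so differently in a room: their wavelengths differ by a factor of a thousand.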
Sound travels as distinct waves with two physical characteristics: amplitude, which is known to us as volume or loudness, and frequency, which is known as pitch. Amplitude is the height of a sound wave. Frequency represents how often a sound wave goes through a full cycle, and is measured in hertz (Hz). As a sound wave travels across distance, its amplitude decreases. This is why we can’t hear a quiet whisper across a room. Sounds capable of being heard by the human ear are called sonic. The normal hearing range of the human ear extends from about 20 hertz to 20,000 hertz. This measurement of 20,000 hertz can be abbreviated as 20 kHz.
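To make amplitude and frequency concrete, here is a minimal Python sketch of a pure tone as a mathematical sine wave. The function name is hypothetical; a real recorded tone would be a stream of many such values over time:

```python
import math


def sine_sample(amplitude, frequency_hz, t_seconds):
    """Instantaneous value of a pure sine tone at time t.

    Amplitude controls loudness; frequency controls pitch.
    """
    return amplitude * math.sin(2 * math.pi * frequency_hz * t_seconds)


# One full cycle of a 440 Hz tone (concert A) lasts 1/440 of a second,
# so after exactly one period the wave is back at zero:
period = 1 / 440
assert abs(sine_sample(1.0, 440, period)) < 1e-9
```

Doubling the amplitude argument makes the wave taller (louder); doubling the frequency argument makes it cycle twice as often (an octave higher).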
The range of frequencies that the human ear can hear is known as the frequency spectrum. All instruments that we record will sit somewhere in this spectrum. In the recording studio, equalizers, also known as EQs, shape the frequency content of a signal. Noise gates, limiters, and compressors handle the amplitude, or volume; these tools are known as dynamic effects.
Just as the viscosity and surface tension of a liquid affect how ripples spread across our puddle, the density and temperature of air affect how sound waves travel through it. In air at around 70 degrees Fahrenheit, sound travels at roughly 1,130 feet per second (about 344 meters per second). This finite speed is why we experience phenomena like the Doppler Effect, delay, and the sonic boom. When a sound wave is born, the particles in the air are forced into compressions and rarefactions caused by a change in air pressure. When the temperature changes, the speed of sound changes slightly as well: sound travels a little faster through warmer air. This is evidence of how physics can dictate a sound, which is important in the study of sound.
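The temperature dependence can be illustrated with the common linear approximation c ≈ 331.3 + 0.606 × T, with T in degrees Celsius. This is a rough engineering formula for dry air, and the function names below are mine:

```python
def speed_of_sound_mps(temp_celsius):
    """Approximate speed of sound in dry air, in meters per second.

    Linear approximation, reasonable for ordinary studio temperatures.
    """
    return 331.3 + 0.606 * temp_celsius


def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9


# At 70 degrees Fahrenheit (about 21 C), roughly 344 m/s (~1,130 ft/s):
c = speed_of_sound_mps(fahrenheit_to_celsius(70))
print(round(c, 1))
```

On a hot outdoor stage versus a cold venue, the speed of sound can differ by several meters per second, which is one reason large sound systems are sometimes retuned on site.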
The Doppler Effect
The Doppler Effect is the apparent change in frequency or wavelength of a wave as perceived by an observer moving relative to the source of the waves. For waves that propagate in a medium, such as sound waves, the velocities of the observer and the source are reckoned relative to the medium in which the waves are transmitted. The total Doppler Effect may therefore result from both motion of the source and motion of the observer, and each of these effects is analyzed separately.
As an object moves through air it must push some of the air out of the way, and in doing so it creates sound waves. As a moving source of sound, the object causes the Doppler Effect. When the object reaches the speed of sound, the air cannot readily move out of the way and a shock wave is formed. When the object is moving faster than sound, the resulting sound waves travel behind the object, creating a sonic boom.
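For motion along the line between source and observer, the classic Doppler formula is f' = f × (v + v_observer) / (v − v_source). A small sketch of it follows; the 343 m/s speed of sound and the function name are assumptions made for the example:

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate for room-temperature air


def doppler_frequency(f_source, v_source=0.0, v_observer=0.0):
    """Observed frequency for motion along the source-observer line.

    Positive velocities mean movement toward the other party.
    """
    return f_source * (SPEED_OF_SOUND + v_observer) / (SPEED_OF_SOUND - v_source)


# A 440 Hz source approaching a stationary listener at 30 m/s is heard
# noticeably sharp, at about 482 Hz:
print(round(doppler_frequency(440, v_source=30.0)))
```

Once the source passes the listener, the sign of its velocity flips and the perceived pitch drops below 440 Hz, which is the familiar falling pitch of a passing vehicle.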
When studying the technicalities of sound waves, you will be introduced to something abbreviated as SPL, which stands for Sound Pressure Level. SPL refers to the deviation from ambient air pressure caused by a sound wave. Your eardrums or a microphone will experience different sound pressure levels when you are hearing different amplitudes of sound. When purchasing microphones you will notice that some brands of microphone are rated by a maximum SPL. This refers to what the transducer inside the microphone is capable of handling before it is overloaded or damaged. The amplitude or loudness of a sound wave is measured in decibels (dB). The louder the sound, the greater the SPL.
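SPL in decibels is conventionally computed relative to a reference pressure of 20 micropascals, roughly the threshold of human hearing. A small Python sketch of that standard formula (the function name is mine):

```python
import math

P_REF = 20e-6  # reference pressure in pascals (approximate hearing threshold)


def spl_db(pressure_pa):
    """Sound pressure level in decibels relative to 20 micropascals."""
    return 20 * math.log10(pressure_pa / P_REF)


print(round(spl_db(20e-6)))  # 0 dB: the threshold of hearing
print(round(spl_db(1.0)))    # 94 dB: 1 pascal, a common mic calibration level
```

Because the scale is logarithmic, every tenfold increase in pressure adds 20 dB, which is why SPL numbers stay manageable across the enormous range of pressures our ears handle.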
In the studio it is important to understand how the sound waves of each signal source will behave given the geometry of the live room. In general, it is good to keep the directional behavior of sounds in mind as well. Audio sound waves are three-dimensional, which is why it is important to understand directional behavior when working with certain microphones.
Humans can hear anything between 20 Hz and 20,000 Hz. A kick drum’s fundamental can sit anywhere between 50 and 100 Hz. The airiness in someone’s voice can be in the upper fifth of the spectrum. Low frequencies are not very directional: their long wavelengths bend around obstacles and spread out in all directions, which makes them harder for the ear to localize than high-frequency sounds. High-frequency sounds are highly directional. The reason is their short wavelengths: because they cycle more frequently than their low-frequency counterparts, they travel in much tighter, more focused paths.
So how is this measured? The unit of measurement hertz is named after the physicist Heinrich Hertz. The idea is to count how many times per second the air’s compressions and rarefactions repeat: rapidly repeating compressions and rarefactions give us higher-pitched sounds, and vice versa.
There is more science in audio engineering than the average musician realizes. When you are confronted with acoustic treatment, mixing, and mastering, knowing the science and math becomes invaluable.
Any sound you hear from any source occurs because mechanical energy produced by that source was transferred to your ear through the movement of the particles of the medium. Sound is a pressure disturbance that moves through a medium in the form of mechanical waves. When a force is exerted on a particle, it moves from its rest or equilibrium position and exerts a force on the adjacent particles. These adjacent particles are moved from their rest positions, and this continues throughout the medium. This transfer of energy from one particle to the next is how sound travels through a medium. The term “mechanical wave” describes this distribution of energy through a medium by the transfer of energy from one particle to the next.
Waves of sound energy move outward in all directions from the source. Your vocal cords and the strings on a guitar are both sources which vibrate to produce sound waves. Without energy, there would be no sound.
There needs to be a source to produce the sound. This could be a speaker or an instrument: anything that is capable of vibrating. The vibrations produce the wave. There needs to be a medium for the sound to travel through, such as air or water. Once the sound is produced and begins to travel, it needs to be detected. To detect it there needs to be a listening device like a microphone or an ear. The microphone has a diaphragm that begins to vibrate once the sound waves reach it. The ear has hair cells in the cochlea that vibrate once the sound waves reach them. Thus the waves are transferred back into vibrations, and those vibrations are what we perceive as sound.
Now that we have talked about sound waves and the way sound travels, recall that the number of waves that reach our ears per second is referred to as frequency. Frequency is measured in hertz. Our ears can detect anywhere from 20 vibrations per second up to 20,000 vibrations per second, or 20 kHz.
Musical notes are made up of different sound waves and frequencies blended together. When we play a note on a guitar we don’t get a pure sound wave. We get a vibration from the full length of the string, and we also get different vibrations and frequencies from different areas of the string. These blend together to make the sound we hear.
So the string will have a fundamental frequency (which determines the pitch of the note) and it will also contain harmonics. Harmonics are overtones generated at whole-number multiples of the fundamental frequency; they are the other frequencies generated from different areas of the string. This note, which is made up of different frequencies, will sit in a certain area of the frequency spectrum.
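Since harmonics fall at whole-number multiples of the fundamental, a few lines of Python can list them. The function is illustrative, and the 82.4 Hz example (roughly a guitar’s open low E string) is an assumption for the demonstration:

```python
def harmonics(fundamental_hz, count=5):
    """First `count` harmonics: integer multiples of the fundamental."""
    return [fundamental_hz * n for n in range(1, count + 1)]


# A guitar's open low E string, fundamental around 82.4 Hz:
# the note also contains energy at 2x, 3x, 4x, 5x that frequency.
print([round(f, 1) for f in harmonics(82.4)])
```

The relative strengths of these harmonics, not their frequencies, are what make a guitar and a piano playing the same note sound different.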
So how is digital audio measured? The first part of the measurement is the sample rate. Think of a sample like a snapshot: the more snapshots there are per second, the better the audio is represented, just as more pixels in a photograph give a better-quality picture. The higher the sample rate, the better the audio recording. The standard CD-quality sample rate is 44.1 kHz, or 44,100 samples per second.
The other measurement is the bit depth. Bit depth is basically the word size of each sample. In other words, 16 zeros and ones fit into one sample at a 16-bit depth setting. To record two channels of audio at a 44.1 kHz sample rate with a 16-bit word size, the recording software has to handle 1,411,200 bits per second. While 16-bit, 44.1 kHz is the standard quality for CD, 24-bit, 48 kHz is the standard quality for DVD. The higher the bit depth and sample rate, the higher quality the recording will be. More storage space is needed for higher bit depths and sample rates because the files are larger. Raising the sample rate will also have an impact on system resources and latency.
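The 1,411,200 bits-per-second figure above falls straight out of a simple multiplication, which can be sketched like this (the function name is mine):

```python
def bits_per_second(sample_rate_hz, bit_depth, channels):
    """Raw data rate of uncompressed PCM audio."""
    return sample_rate_hz * bit_depth * channels


# Stereo CD-quality audio: 44,100 samples/s x 16 bits x 2 channels.
print(bits_per_second(44_100, 16, 2))  # 1411200 bits per second

# Stereo DVD-quality audio at 24-bit / 48 kHz is a bigger stream:
print(bits_per_second(48_000, 24, 2))  # 2304000 bits per second
```

Dividing by 8 gives bytes per second, which is why a minute of stereo CD audio occupies about 10 MB on disk before any compression.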