How to Learn Music Mastering: A Complete Guide

 

Mastering music is the process of taking a finished, mixed audio recording and preparing it for distribution or playback. The goal of mastering is to ensure that the music sounds its best on as many different playback systems as possible, and to create a cohesive listening experience across an album or collection of songs.

During the mastering process, an engineer will typically make adjustments to the audio signal using a variety of techniques, including equalisation, compression, limiting, and stereo enhancement. These techniques can be used to balance the levels of different instruments or vocals, create a sense of space or depth in the mix, and add overall clarity and punch to the sound.
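As a rough illustration of what one of these tools, a compressor, does to a signal, here is a minimal downward compressor in Python. The threshold and ratio values are arbitrary examples, and real compressors add attack and release envelopes that this sketch omits entirely:

```python
import numpy as np

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Very simplified downward compressor: any sample whose level
    exceeds the threshold is scaled back by the given ratio.
    Illustration only; not a production-grade design."""
    eps = 1e-12  # avoid log10(0)
    level_db = 20.0 * np.log10(np.abs(samples) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)            # gain reduction in dB
    return samples * (10.0 ** (gain_db / 20.0))

# A full-scale peak gets pulled down; a quiet sample passes unchanged,
# so the gap between loud and quiet parts shrinks.
out = compress(np.array([1.0, 0.05]))
```

The effect is exactly the "balancing" the paragraph above describes: loud peaks are reduced while quieter material is left alone, narrowing the dynamic range.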

Mastering can be a complex and time-consuming process, and requires a good ear, an understanding of audio processing techniques, and experience working with different types of music and audio formats. It is an important step in the music production process, as it can significantly impact the final sound of a recording.

The concept of mastering music has been around since the early days of recorded sound. However, the importance of mastering has varied over time, and the techniques and tools used in the mastering process have evolved significantly.

In the early 20th century, mastering was primarily focused on preparing recordings for mechanical reproduction, such as on vinyl records or phonograph cylinders. In the 1950s and 1960s, with the advent of magnetic tape recording and the LP format, mastering began to focus more on creating a cohesive listening experience across an album and ensuring consistent sound quality from track to track.

As the music industry has evolved, so has the importance of mastering. With the proliferation of digital audio formats and the increasing use of streaming platforms, mastering has become increasingly essential in ensuring that music sounds its best on a wide range of playback systems and devices. In the modern music industry, mastering is a critical step in the production process, and is often handled by specialised mastering engineers or studios.

From the 1960s through the 1970s, the mastering process was primarily focused on preparing recordings for vinyl release. This typically involved creating a master tape from the mixed audio recording, and then using that tape to create a set of "lacquer" discs, which were used to create the final vinyl records.

During the mastering process, engineers would use a variety of techniques to shape the sound of the recording and optimise it for playback on vinyl. These techniques included equalisation, compression, and limiting, which were used to balance the levels of different instruments and vocals, add punch and clarity to the sound, and prevent the audio signal from overloading the recording medium.

In addition to these technical considerations, mastering in the 1960s also involved creating a cohesive listening experience across an album. Engineers would carefully balance the levels and EQ of the tracks, and choose the order in which they appeared on the album, to take the listener on a unified musical journey.

Overall, the mastering process in the 1960s was an important step in the music production process, and played a significant role in shaping the sound of many classic recordings from that era.

As cassette tapes became more popular in the 1970s and 1980s, the mastering process had to adapt to the specific characteristics of this format. Cassette tapes have a lower dynamic range and frequency response compared to vinyl records, and are more prone to distortion and noise, so the mastering process had to take these limitations into account.


During the mastering process for cassette tapes, engineers would typically use equalisation and compression to shape the sound of the recording and optimise it for the cassette format. They might also use noise reduction techniques to minimise hiss and other types of tape noise.

In addition to these technical considerations, mastering for cassette tapes also involved creating a cohesive listening experience across an album. Engineers would carefully balance the levels and EQ of the tracks, and choose the order in which they appeared on the cassette, to take the listener on a unified musical journey.

Overall, the mastering process for cassette tapes required a different set of techniques and considerations compared to mastering for vinyl records, and played a significant role in shaping the sound of many recordings from this era.

As CD technology became more widespread in the 1980s and 1990s, the mastering process had to adapt to the specific characteristics of this format. CDs have a much wider dynamic range and frequency response compared to vinyl records or cassette tapes, and are less prone to distortion and noise, so the mastering process could be more expansive in terms of the techniques and processing used.

During the mastering process for CDs, engineers could use a wider range of techniques to shape the sound of the recording and optimise it for the CD format. These techniques might include equalisation, compression, limiting, and stereo enhancement, as well as more advanced techniques such as multi-band processing and dynamic EQ.

The Red Book audio standard is a specification for the format of audio CDs, which was developed by Sony and Philips in the 1980s. The standard, which is officially known as "IEC 60908:1999", defines the physical format of audio CDs, as well as the encoding of audio data on the disc.

The Red Book standard specifies that audio CDs should contain 16-bit audio data, sampled at a rate of 44.1 kHz, with a maximum playing time of 74 minutes. It also specifies the layout of the audio data on the disc, as well as the information that should be included in the CD's table of contents and other metadata.
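Those figures are enough to work out the data rate and audio capacity of a Red Book disc with a little arithmetic:

```python
# Uncompressed CD ("Red Book") audio: 16-bit samples, 44.1 kHz, stereo.
sample_rate = 44_100   # samples per second, per channel
bit_depth = 16         # bits per sample
channels = 2           # stereo

bits_per_second = sample_rate * bit_depth * channels
bytes_per_second = bits_per_second // 8
print(bits_per_second)  # 1411200 bits/s, i.e. the familiar 1,411 kbps

# Audio payload of a full 74-minute disc (ignoring error-correction
# and subcode overhead on the physical disc):
seconds = 74 * 60
total_bytes = bytes_per_second * seconds
print(round(total_bytes / 1_000_000))  # 783 (megabytes)
```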

The Red Book standard has been widely adopted as the standard for audio CDs, and is still in use today. It has played a significant role in the development of the CD format, and has helped to make CD audio a popular and widely used format for music and other audio content.

In addition to these technical considerations, mastering for CDs also involved creating a cohesive listening experience across an album. Engineers would carefully balance the levels and EQ of the tracks, and choose the order in which they appeared on the CD, to take the listener on a unified musical journey.

Overall, the mastering process for CDs required a different set of techniques and considerations compared to mastering for vinyl records or cassette tapes, and played a significant role in shaping the sound of many recordings from this era.

Cassette tapes were widely popular in the 1970s, 1980s, and early 1990s, but their popularity began to decline in the late 1990s and early 2000s, as CDs and other digital formats became more popular. As a result, the sale of cassette tapes has declined significantly in recent years.

According to the Recording Industry Association of America (RIAA), cassette tape sales in the United States peaked in 1992, at over 600 million units sold. However, by 2003, cassette tape sales had declined to just over 10 million units. In recent years, cassette tape sales have continued to decline, and are now a small fraction of overall music sales.

While cassette tapes are no longer as popular as they once were, they are still produced and sold by some manufacturers, and there is a small but dedicated group of collectors and enthusiasts who continue to use and appreciate the format.

CDs offer superior sound quality to cassette tapes due to a number of factors, including a wider dynamic range, wider frequency response, lower noise levels, and lower distortion. These factors result in a more transparent, natural, balanced, and accurate sound, which is generally superior to the sound of cassette tapes. CDs are also more durable and more convenient to use than cassette tapes, qualities that contributed to their widespread adoption as the primary medium for recorded music.


And So Began the Loudness War


The "loudness war" refers to the trend in modern music production of increasing the perceived loudness of a recording, often to the point where the audio is heavily compressed and distorted. This trend is often associated with the rise of CD technology in the 1980s and 1990s, as the wide dynamic range and high quality of CD audio made it possible to create loud, punchy recordings that stood out on the radio or in a live setting.

During the mastering process for CDs, engineers often used a variety of techniques to increase the loudness of a recording, including compression, limiting, and brickwall limiting. These techniques can be used to reduce the dynamic range of a recording, making the quiet parts louder and the loud parts quieter, and creating a more consistent overall level.
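A toy example of how limiting trades dynamic range for loudness. This is a deliberately crude hard-clip limiter, not how a real mastering limiter is implemented, but it makes the trade-off visible:

```python
import numpy as np

def brickwall_limit(samples, ceiling=0.5, makeup_to=1.0):
    """Toy brickwall limiter: hard-clip everything above the ceiling,
    then apply makeup gain so the new peak sits at full scale.
    Louder overall, but with less dynamic contrast."""
    clipped = np.clip(samples, -ceiling, ceiling)
    return clipped * (makeup_to / ceiling)

quiet_and_loud = np.array([0.1, -0.2, 0.9, -1.0])
limited = brickwall_limit(quiet_and_loud)
# Peaks end up back at full scale, but the quiet samples came up with
# them: the loud-to-quiet ratio halved, so the track sounds louder but
# "flatter" — the loudness-war effect in miniature.
```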

However, as the loudness war has progressed, many recordings have been heavily compressed and distorted, leading to a lack of dynamics and a fatiguing listening experience. This trend has led to criticism from some quarters, and there has been a movement in recent years to return to more dynamic, natural-sounding mastering techniques.

Mastering for streaming services is similar to mastering for CD in that it involves preparing a finished, mixed audio recording for distribution and playback. However, there are some key differences in the techniques and considerations involved in mastering for streaming platforms compared to mastering for CD.

One of the main differences is the audio format. Streaming platforms typically use compressed audio formats, such as MP3 or AAC, which have a lower bit rate and less audio information compared to CD-quality audio. This means that the mastering process for streaming platforms needs to take into account the specific characteristics of these formats, and may involve using different techniques to optimise the sound for this type of delivery.

Another key difference is the wide range of playback devices and systems that music on streaming platforms may be played back on. This includes everything from phone speakers and laptop speakers to home theatre systems and high-end audiophile setups. As a result, mastering for streaming platforms needs to consider how the music will sound on a wide range of different systems, and may involve using techniques such as multi-band processing and dynamic EQ to optimise the sound for different types of speakers and headphones.

Overall, the mastering process for streaming platforms involves many of the same techniques and considerations as mastering for CD, but requires a slightly different approach due to the specific characteristics of the delivery format and the wide range of playback systems that the music may be played back on.

Setting Levels

Having consistent listening levels in your studio is important for several reasons. First, it helps you maintain balance in your mixes as you work on projects. When you have a constant monitoring level to refer to, you can make better decisions about how loud or quiet different elements of your mix should be in relation to one another. This is because you are consistently working with the same monitoring loudness, rather than constantly adjusting the volume up and down.

Another reason to maintain consistent listening levels is to avoid ear fatigue. Listening to music or audio at very high volumes for extended periods of time can be harmful to your hearing. To help prevent this, it's a good idea to aim for a sustained listening loudness of no more than 85 dB SPL (decibels of sound pressure level). You can use a sound pressure level (SPL) meter or app on your mobile device to measure the loudness of your monitors and ensure that you are not exceeding this level.
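For reference, dB SPL is defined as 20 times the base-10 logarithm of the ratio between the measured sound pressure and a 20 µPa reference (the nominal threshold of hearing). A small Python sketch of the formula, using an example pressure value chosen to land near the 85 dB guideline:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, in pascals

def spl_db(pressure_pa):
    """Sound pressure level in dB SPL for an RMS pressure in pascals."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# The ~85 dB SPL guideline corresponds to an RMS sound pressure of
# roughly 0.36 Pa (0.3557 Pa here is just an illustrative value):
print(round(spl_db(0.3557), 1))  # 85.0
```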

In conclusion, maintaining consistent listening levels in your studio can help you achieve better balance in your mixes and prevent ear fatigue. It's worth investing in an SPL meter or app to help you monitor your listening levels as you work.

If you don't have an SPL meter to hand, there are a few ways to approximate your listening level:

  1. Use a decibel (dB) scale on your audio equipment: Many audio devices, such as mixing consoles and speakers, have a dB scale built in. You can use this scale to approximate the SPL of a sound by comparing it to known reference levels. For example, if the dB scale on your audio equipment reads 0 dB at its maximum volume, you can use this as a reference point to estimate the SPL of other sounds.

  2. Use your own hearing as a reference: Your ears are sensitive to changes in volume, and you can use your own hearing to estimate the SPL of a sound. For example, you can compare the volume of a sound to a known reference, such as the volume of a conversation or the sound of a car driving by on the street. You can also use your own physical sensations, such as the feeling of the sound vibrating in your chest, to gauge the loudness of a sound.

  3. Use an online dB SPL calculator: There are several online calculators that can estimate the SPL of a sound based on the distance from the source and the intensity of the sound. These calculators can be useful if you don't have access to an SPL meter or audio equipment with a dB scale. However, keep in mind that the accuracy of these calculators may vary.

It's important to note that these methods are approximate and may not provide an exact measurement of SPL. If you need to measure SPL accurately, it is best to use a specialised device such as an SPL meter.
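One common basis for the online calculators mentioned above is the inverse square law: each doubling of distance from a point source drops the level by about 6 dB. A rough free-field estimate follows (real rooms, with their reflections, will measure somewhat higher, so treat this strictly as a guide):

```python
import math

def spl_at_distance(spl_ref_db, ref_distance_m, target_distance_m):
    """Free-field SPL estimate from a point source: level falls by
    20*log10 of the distance ratio (~6 dB per doubling of distance)."""
    return spl_ref_db - 20.0 * math.log10(target_distance_m / ref_distance_m)

# If a monitor measures 91 dB SPL at 0.5 m, the level at a 1 m
# listening position is roughly 85 dB SPL:
print(round(spl_at_distance(91.0, 0.5, 1.0)))  # 85
```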

All of this matters because, while mastering and working critically with audio, we're prone to what's called ear fatigue.

Ear fatigue, also known as listening fatigue or auditory fatigue, is a condition that can occur when you listen to loud sounds for an extended period of time. It is characterised by a temporary reduction in your ability to hear clearly and accurately, and can cause symptoms such as ringing in the ears (tinnitus), discomfort, and a feeling of pressure or fullness in the ears.

Ear fatigue is caused by the mechanical and physiological effects of loud sound on the ear. When you listen to loud sounds, the tiny hair cells in your ear that are responsible for converting sound waves into electrical signals can become damaged or fatigued. This can lead to a temporary reduction in your ability to hear accurately, as well as physical discomfort.

To prevent ear fatigue, it is important to protect your hearing by avoiding exposure to loud sounds for extended periods of time. This includes listening to music or other audio at moderate volumes, using earplugs or noise-cancelling headphones in loud environments, and taking breaks from loud sounds if you are working in a noisy environment. If you are experiencing ear fatigue, you should rest your ears by taking a break from listening to loud sounds, and seek medical attention if the symptoms persist or worsen.

Monitoring Position and Your Awareness

There are several things you can do to place monitors in a good listening position:

  1. Set up the median plane: The median plane is the axis running from the midpoint of the monitors to the listening position, and ideally, it should be the median axis of the studio space. Placing the monitors and listening position on the median plane helps to ensure that the mix will be balanced and not skewed towards one side or the other.

  2. Angle the monitors correctly relative to the median plane: It is generally recommended to place each monitor at approximately 30 to 45 degrees from the median plane, forming a triangle with the mix position. This helps to create an optimal listening environment and prevent phase cancellation and other issues.

  3. Make sure the monitors are level with your head and at the same angle with respect to the listening position: Ensuring that the monitors are level with your head and at the same angle with respect to the listening position helps to ensure that the audio from each monitor is reaching your ears at the same time and at the same volume. This is important for accurately assessing the balance and levels of the mix.

Overall, to place monitors in a good listening position, it is important to set up the median plane, place the monitors at a good distance from the median plane, and ensure that the monitors are level with your head and at the same angle with respect to the listening position.

Once you have positioned your monitors, consider where you will sit in relation to them. Your mix position should sit on the median plane, the axis running from the midpoint between the monitors to the listening position, and ideally this should also be the median axis of the studio space, so that the mix is not skewed towards one side or the other.

Measure the distance from the listening position to each monitor and make sure they are equidistant, with each monitor angled approximately 30 to 45 degrees off the median plane to form a triangle with the mix position. The monitors should also be level with your head and at matching angles with respect to the listening position, so that the audio from each monitor reaches your ears at the same time and at the same volume, which is essential for accurately assessing the balance and levels of the mix.
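The geometry described above is easy to sketch. Assuming a symmetric stereo pair and a chosen listening distance, the monitor positions follow from basic trigonometry (this ignores room acoustics entirely; the 30-degree default gives the classic equilateral triangle):

```python
import math

def monitor_positions(listening_distance_m, angle_deg=30.0):
    """Coordinates of a symmetric stereo pair, with the listener at
    the origin facing the +y axis. angle_deg is each monitor's angle
    from the median plane. Purely geometric sketch."""
    a = math.radians(angle_deg)
    x = listening_distance_m * math.sin(a)
    y = listening_distance_m * math.cos(a)
    return (-x, y), (x, y)  # (left monitor, right monitor)

left, right = monitor_positions(1.5)
# Both monitors end up equidistant from the listener and mirrored
# about the median plane (x = 0), as the text recommends.
```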

If the monitors are not properly positioned, it is possible that your panning decisions will create mixes that are heavy on one side or the other. For example, if you are sitting closer to the left monitor, you might set panning and levels too heavy to the right channel of the stereo image. To avoid this, it is helpful to mark the positions of the monitors with tape or a similar method, so that they can be easily returned to the correct position if they are moved.

Properly positioning the monitors and mix position during the mastering process is important for several reasons:

  1. Accurate assessment of the mix: Properly positioning the monitors and mix position helps to ensure that you are hearing the mix accurately and consistently. This is important when mastering music, as you need to be able to accurately evaluate the balance and levels of the mix, and make informed decisions about how to process and finalise the master.

  2. Consistency across playback systems: Properly positioning the monitors and mix position can help to ensure that the master will sound consistent across a variety of playback systems and environments. If the monitors are not properly positioned, the master may sound different on different systems, which can compromise the consistency and quality of the final product.

  3. Avoiding panning and balance issues: Properly positioning the monitors and mix position can help to avoid panning and balance issues, such as having the mix feel "right-heavy" or "left-heavy" on different playback systems. This can help to ensure that the master sounds balanced and cohesive, regardless of the playback system or environment.

Overall, properly positioning the monitors and mix position during the mastering process is important for accurate assessment of the mix, consistency across playback systems, and avoiding panning and balance issues.

 