Loudness Normalization: Part 1 – What’s The Problem?

If you bounce your mix in Logic (or any other DAW), you have to know about “Loudness Normalization” – SERIOUSLY. No matter whether your mix will be played on the radio or Spotify, whether you upload it to your YouTube channel, or just listen to it in your iTunes Library, your mix will be affected, and you’d better know how. In this three-part series, I will explain all the details.

Part 1: What’s the Problem?

What is Loudness, and why will it cause major problems with your mixes if you don’t understand some underlying principles? (I explain them in Part 1.)

Part 2: Loudness Normalization Standards

Never ever use Peak Normalization on your bounced mix; instead, embrace the new global standard of Loudness Normalization (as I explain in Part 2).

Part 3: Logic’s Loudness Meter

Instructions on how to use Logic’s Loudness Meter and other tools to avoid surprises due to Loudness Normalization when you later hear your songs on iTunes, Spotify, YouTube, etc. (as I explain in Part 3).



Logic’s “Loudness Meter” 

Maybe you have already stumbled upon the Loudness Meter plugin in Logic’s Audio Effects Plugin Menu under the Metering section and wondered, “What the heck is that?”. I will explain not only how it works and how to use it, but also dive into the whole topic of Loudness Normalization to provide some background information, so you understand why you should have this Loudness Meter on your Output Channel Strip all the time during your mix.

Be Aware

Please note that this article about “Loudness Normalization” is not about tips and tricks for your mix. Instead, it is about an important aspect you have to know (or at least be aware of) whenever you reach for the Bounce button in Logic or any other DAW. Maybe you’ve already heard or read about this topic and know that there are new standards, concepts, and technologies that have been implemented over the last few years around this new buzzword “Loudness”. You have to follow this topic, because regulations and implementations are still changing. All this could have a major effect on how your fans will “hear” your songs, your podcast, or any audio content, no matter if it is on iTunes, SoundCloud, YouTube, Spotify, or good old radio.

Rules and Regulations

Rock ’n Roll is a little bit like the Wild Wild West when it comes to audio recording. You glue the VU meter to the right and don’t care about distortion as long as you like how it sounds. If a maximum setting is 10, then you ignore that and go to 11. But even the record of the most rebellious punk band has to follow some rules when it is transmitted over the radio, aired on television, and even now, on internet streaming services like YouTube, Spotify, or SoundCloud. Anarchy stops here, and rules and regulations take over.

There are various national and international organizations (ITU, EBU, AES, SMPTE, etc.) whose brilliant minds have studied loudness and, based on their research, come up with recommendations for how to use and implement the technology. These recommendations often make it into government regulations and laws, which radio and television, overseen by the government, have to follow. In addition, private companies have adopted various recommendations for their popular apps and services, like YouTube, iTunes, Spotify, or SoundCloud. This is where you seriously have to pay attention when your mixes make it out of your studio and onto any of those listening channels, because, all of a sudden, your bounced mix might sound different.


About the Basics

Before learning about WHAT happens to your mix after it “leaves” your Logic Pro X project and reaches your listeners, you have to be aware of WHY it happens. What is the problem? Let’s start with a few basics.

Listening to your Song(s)

Whenever you want to listen to a song, regardless of whether it is on your computer, CD player, car radio, etc., you set the playback volume to your liking and that’s it. Chances are, you don’t have to touch the volume knob while listening to that one song. You determine your own “Playback Volume”. However, if you now listen to a second song, you might have to change that Playback Volume by turning the knob up or down if you “feel” that the second song is too loud or not loud enough compared to the first song. The important word here is “feel”, because it is a subjective decision to change the volume based on how you perceive the loudness. Please note that turning down the Volume Fader on your Output Channel during the mix, on the other hand, is an objective decision, for example, when the red clipping LED tells you that the signal reached 0dBFS.

Terminology: Level – Volume – Loudness

The previous paragraph had a few terms we need to pay attention to before going on any further. Although most users know these terms, they could mean the same thing or different things, depending on who is using the term and in what context.

    • Level: In audio production, this generic term describes the audio signal strength. The Level is something that you measure, usually with a Level Meter that you find on Logic’s channel strip or as a separate “Level Meter” plugin.
    • Volume: The term “Volume” is often used interchangeably with “Level”. For example, does the Volume Fader on the channel strip change the “volume” or the “level”? For most purposes, you can use either term.
    • Loudness: If you increase the Volume Fader, you raise the Level on that Track, so the signal gets louder. That means, you also increase the loudness. But what is “Loudness” exactly? Here you have to pay attention.
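The difference between a peak reading and an average (RMS) reading, both of which are measured levels, can be made concrete in a few lines of code. Here is a minimal Python sketch (the function names are my own, not Logic’s) that computes both values in dBFS for a buffer of samples:

```python
import math

def peak_dbfs(samples):
    """Peak level: the single largest absolute sample value, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """RMS ("average") level: the mean signal energy of the buffer, in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# A full-scale 440Hz sine wave at 48kHz: it peaks at 0dBFS,
# but its RMS (average) level is about -3dBFS.
sr = 48000
sine = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
```

Both numbers describe the same signal, yet they differ by about 3dB, and neither of them is the perceived loudness. That gap is exactly what a Loudness Meter is designed to fill.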

 What is Loudness?

Because in this article we talk about “Loudness Normalization”, we have to be clear about what the term “Loudness” refers to. If we consult the “Book of Knowledge”, Wikipedia, we find the following explanations:

“Loudness is the characteristic of a sound that is primarily a psychophysiological correlate of physical strength (amplitude)” – or how about this: “That attribute of auditory sensation in terms of which sounds can be ordered on a scale extending from quiet to loud”. Any questions?

Let me try to explain it differently:

Perceived Loudness

Instead of just using the term “Loudness”, it might be better to use the full term “Perceived Loudness”. As I already showed, it describes the subjective impression of how you hear a song, how loud the song sounds to you. Different people might even have a different loudness perception when listening to the same song at the same playback volume. For example, your perception of loudness might differ from that of your neighbors, who have been partying for the last five hours with an increasing level of intoxication (a different type of level).

The term “Perceived Loudness” is especially important when you listen to a sequence of songs and decide whether all the played songs feel about equally loud, or whether you need to turn the playback volume up or down for a specific song or part of the program you are listening to (on the radio or TV) to achieve some kind of consistency in the playback level.

Measured Loudness

Although the Perceived Loudness is subjective, there are also meters that can measure loudness. However, you have to pay attention, because there are two types of Loudness Meters. I will go into more  details throughout this article, but here is just the short explanation:

    • RMS Meter and VU Meter: A Peak Meter that you can set to RMS (“Root Mean Square”, a fancy term for “average”) or a traditional VU Meter are technically still Level Meters and not Loudness Meters. They measure the electrical level of an audio signal. However, their characteristics just happen to be close to how we perceive loudness, because they measure and display the average signal level (the energy of the signal) instead of the peak level. Also keep in mind that an RMS Meter can read the average level, but not a long-term average the way an Integrated Loudness Meter can (more on that later).
    • Loudness Meter: A true “Loudness Meter” (like the one we have in Logic Pro X) still measures the audio signal, but it uses standardized algorithms and filters (based on psychoacoustic models) to represent the actual perceived loudness for human listeners.
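To illustrate the difference between a short RMS readout and a long-term average, here is a simplified Python sketch. Note that this only demonstrates the averaging idea: a real Integrated Loudness measurement per ITU-R BS.1770 additionally applies K-weighting filters and gating, which are omitted here.

```python
import math

def momentary_readouts(samples, sr, win_s=0.4):
    """Mean-square level per 400ms window, like a momentary meter readout."""
    win = int(sr * win_s)
    readouts = []
    for start in range(0, len(samples) - win + 1, win):
        ms = sum(s * s for s in samples[start:start + win]) / win
        readouts.append(10 * math.log10(ms))
    return readouts

def long_term_average(samples):
    """One single number for the entire program, like an Integrated reading
    (without the K-weighting and gating a real BS.1770 meter would apply)."""
    ms = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(ms)

# A "program" with a loud half and a quiet half (0.8s each at 48kHz)
sr = 48000
loud = [math.sin(2 * math.pi * 440 * n / sr) for n in range(int(0.8 * sr))]
quiet = [0.1 * s for s in loud]
program = loud + quiet
```

The momentary windows read roughly -3dB and -23dB, while the long-term average lands at about -6dB, not at the -13dB midpoint: the averaging happens on the signal energy, not on the dB values. That long-term view is what a short RMS window cannot give you.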


A Potential Loudness Mess

To better understand the standards and regulations, in our case “Loudness Normalization”, you have to understand the reason why it was necessary to create and enforce them in the first place. If you haven’t noticed, our audio production landscape got a little bit out of hand over the years (since the beginning of the digital audio revolution) when it comes to signal levels.

Different Audio Content with Different Loudness

As I already mentioned in the beginning, the problem regarding loudness occurs when you listen to a sequence of audio content, most likely from different sources, and those individual sources (songs, commercials, interviews, TV shows, etc.) have a different perceived loudness.

Here are a few examples:

    • Listening to Audio Streaming Services: Not only can the loudness be different between tracks on streaming services like Spotify, SoundCloud, or YouTube, the overall level when switching between services can also differ quite a lot.
    • Listening to Radio: Here, the jump in loudness between songs (and commercials) is overshadowed by a much bigger problem known as the “Loudness War”, one of the reasons why Loudness Normalization standards were implemented, as we will see later.
    • Listening to Playlists: If you listen to your iTunes Playlist with songs from different CDs or downloads (and even your own mixes), then the loudness of various songs can be quite different, especially if you listen to original records from the 70s and 80s (not the “re-mastered” ones) compared to recent recordings.
    • Listening to (watching) TV: Broadcast TV was always a nuisance when the different loudness of the commercials compared to the show made you jump up to reach for the volume control. The complaints from viewers over the years pressured lawmakers to pass legislation that forced broadcasters to implement Loudness Normalization standards. For example, in the US, the CALM Act (Commercial Advertisement Loudness Mitigation Act) has, since 2012, prohibited under penalty playing TV ads louder than the regular program.


Who’s to Blame

So what are the factors that make the perceived loudness of different audio recordings vary so much? There are three main components: “Peak Level”, “Dynamic Range”, and “Frequency Content”. Think about it: these are all elements that you control in your mix. So, as we will see, proper Loudness Normalization starts during your mix, and it should be on your mind at that stage, way before you bounce your final mix or hand it over to the mastering engineer, whose duties are, otherwise, reduced to damage control.

-1- Peak Level 

One of the main differences of recording and transmitting in the analog domain is that it doesn’t have an absolute limit like the 0dBFS (Full Scale) of the digital domain, where all the available bits of an individual audio sample are set to “1”. Analog audio instead had more or less a reference level with some sort of headroom that everybody was following. This “healthy” concept of headroom, which engineers still used in the early days of digital recording, diminished more and more, and the Peak Level of recordings crept up over time to its maximum level of 0dBFS (0dB Full Scale). Bye-bye Headroom, we barely knew you.

But if all audio files (songs, commercials, interviews, etc.) are “normalized to 0dBFS”, which means the highest level of each audio file is 0dBFS, then isn’t that a good standard? As an “electrical standard”, maybe, but it is completely useless (irrelevant) for loudness, because the Peak Level has nothing to do with how our ears perceive loudness. Therefore, the perceived loudness of each song in a sequence (all normalized to 0dBFS) can still be quite different, depending on other factors, especially the dynamic range.
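A quick Python sketch makes this concrete. The two constructed “songs” below are hypothetical examples (a steady tone and a quiet tone with one transient), and the function names are my own:

```python
import math

def peak_normalize(samples):
    """Scale the signal so its highest peak sits exactly at 0dBFS (|x| = 1.0)."""
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]

def rms_dbfs(samples):
    """Average (RMS) level of the buffer, in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# Two "songs": a steady sine, and a quiet sine with one short transient
sr = 48000
steady = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
spiky = [0.1 * s for s in steady]
spiky[100] = 1.0  # a single drum-hit-like peak

a = peak_normalize(steady)
b = peak_normalize(spiky)
```

Both files now “hit” 0dBFS, yet their average levels differ by about 20dB: they will sound wildly different in loudness even though both are “normalized” to the same peak standard.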

-2- Dynamic Range

The Dynamic Range of a song describes the variation of the signal level throughout the song. For example, is your peak meter constantly pressed against the maximum level, or is it fluctuating between different levels, with quieter parts, louder parts, and plenty of room for transients?

The perceived loudness of a song increases when the song has less dynamics. Unfortunately, producers and record companies used that fact to game the system by compressing their songs more and more, thereby making them sound louder compared to other songs when played on the radio (because “louder sounds better”). Pretty soon, everybody started to use that same technique, using less and less dynamics, and the “Loudness War” was in full swing, with good-sounding records as its major casualty. More about that later.
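The mechanics of that trick can be sketched in code. The “compressor” below is a crude instantaneous waveshaper, not a real compressor (which would follow the signal envelope with attack and release times), and the test signal is a hypothetical “song” with a loud chorus and a quiet verse. The point it demonstrates: squash the dynamics, turn the whole thing back up to 0dBFS, and the average level rises several dB while the peak stays the same.

```python
import math

def squash(samples, threshold=0.2, ratio=10.0):
    """Crude "compressor": above the threshold, the level grows 10x slower.
    (A real compressor follows the signal envelope with attack/release
    times; this instantaneous waveshaper is just for illustration.)"""
    out = []
    for s in samples:
        a = abs(s)
        if a > threshold:
            a = threshold + (a - threshold) / ratio
        out.append(math.copysign(a, s))
    return out

def peak_normalize(samples):
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]

def rms_dbfs(samples):
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

# A "song": a short loud chorus followed by a long quiet verse
sr = 48000
song = [math.sin(2 * math.pi * 220 * n / sr) * (1.0 if n < sr // 10 else 0.1)
        for n in range(sr)]

original = peak_normalize(song)
compressed = peak_normalize(squash(song))
```

Both versions peak at 0dBFS, but the compressed one measures several dB higher in RMS, which is exactly how the Loudness War was fought.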

-3- Frequency Content

A third aspect of the perceived loudness is the frequency range of the recording. For example, does the mix have low frequencies, high frequencies, and everything in between? For modern pop songs, this is less of an issue, because they usually cover the entire audible frequency range. However, if you record solo instruments or just dialog, then the audio signal might only contain a specific frequency range.

That frequency content is important for the perceived loudness, because it has to do with the frequency response of our ears. The so-called “Equal-Loudness Contours” (a revision of the famous “Fletcher-Munson Curves”) show that our ears are much more sensitive to frequencies around 3kHz, less sensitive to frequencies above that, and even less sensitive in the lower frequency range.

That means an audio signal at 3kHz playing at 0dB sounds much louder than an audio signal at 100Hz normalized to the same 0dB level. In addition, the sensitivity (the shape of the curve) changes, depending on the playback volume.
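This is easy to verify on the meter side: a 100Hz tone and a 3kHz tone of equal amplitude produce identical readings on any Level Meter, even though they sound very different in loudness. A short Python sketch:

```python
import math

def rms_dbfs(samples):
    """Average (RMS) level of the buffer, in dBFS."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

sr = 48000
low = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]   # 100Hz tone
high = [math.sin(2 * math.pi * 3000 * n / sr) for n in range(sr)]  # 3kHz tone
# Both tones measure the same electrical level (about -3dBFS RMS) ...
```

... yet, per the Equal-Loudness Contours, the 3kHz tone sounds dramatically louder to our ears. A Level Meter cannot see that difference; the psychoacoustic weighting filters inside a Loudness Meter are there precisely to account for it.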

You can see that the Loudness Contours get flatter at higher playback volumes (y-axis). That’s one of the reasons why “louder sounds better”: when you increase the playback volume of a song, the ear perceives the lower and higher frequencies as increasing more than the 3kHz range, making the same song sound better due to the apparently wider frequency range.


Conclusion Part 1

That’s it for Part 1 of this article, covering a few basics for understanding “Perceived Loudness”, the core of the new Loudness Normalization standards that I will explain in Part 2. There, we will also find out the surprising difference (or similarity) between a Metallica record and the recording of a vacuum cleaner.

Until then, make sure to differentiate between “Volume” as the measured signal level (seen with your eyes on the meter) and “Loudness” as the perceived signal level (heard with your ears from the loudspeakers).

Edgar Rothermich

Edgar Rothermich is a composer, producer, educator, and author of the best-selling book series “Graphically Enhanced Manuals (GEM)”. He is a graduate of the prestigious Tonmeister program at the University of the Arts in Berlin, where he also taught for five years. His musical work in a wide variety of styles includes numerous scores for films and TV shows plus compositions for ballet and sacred music. His recent re-recording of the Blade Runner soundtrack (done exclusively in Logic Pro!) earned acclaim from critics and fans alike. Follow him on Twitter @EdgarRothermich
