Mastering Vocabulary

  • Loudness

    Loudness isn’t just about meters—it’s how we humans perceive sound intensity. It correlates with amplitude, but our perception isn't linear. We hear loudness on a logarithmic scale, which is why a 10 dB increase sounds roughly twice as loud, even though it's ten times the acoustic power.
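    The math behind that relationship is simple enough to sketch. Below is a minimal Python illustration of the decibel scale and the rule of thumb that +10 dB reads as roughly twice as loud; the function names are purely illustrative.

```python
import math

def db_change_for_power_ratio(ratio):
    """Convert a ratio of acoustic power into a change in decibels."""
    return 10 * math.log10(ratio)

def perceived_loudness_ratio(db_change):
    """Rule of thumb: every +10 dB is perceived as roughly twice as loud."""
    return 2 ** (db_change / 10)

# Ten times the acoustic power is a +10 dB change...
print(db_change_for_power_ratio(10))  # 10.0
# ...but it only sounds about twice as loud.
print(perceived_loudness_ratio(10))   # 2.0
```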

    In a mastering context, we care about loudness not just in terms of making a track "hit," but in terms of how it stacks up psychoacoustically with others in a given listening environment—especially when broadcast, streamed, or sequenced in an album context.

  • LUFS

    LUFS (Loudness Units relative to Full Scale) is the standard for measuring perceived loudness in a way that maps more closely to how the human ear interprets volume. Unlike peak meters or simple RMS, LUFS integrates over time, weighs frequencies according to human hearing (per the K-weighting curve standardized in ITU-R BS.1770—not to be confused with Bob Katz’s K-System metering), and gives a more musically focused and relevant indication of level.

    Integrated LUFS tells you the overall loudness of a track, typically measured over its whole duration; short-term LUFS averages a sliding 3-second window; momentary LUFS meters the last 400 ms of audio. Streaming platforms normalize playback to somewhere between -14 and -16 LUFS integrated, depending on the service and whether normalization is enabled. So if you're pushing louder than that, your track might just be turned down.
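    To make the turn-down concrete, here is a hedged sketch of the playback gain a normalizing platform might apply. The -14 LUFS default and the clamp-to-zero behavior are assumptions; the real rules vary by service (some raise quiet tracks with limiting, some don't touch them at all).

```python
def playback_gain(track_lufs, platform_target=-14.0):
    """Gain (in dB) a loudness-normalizing platform applies at playback.

    Negative means the track gets turned down. As a conservative
    assumption we clamp positive gain to 0: quieter tracks are
    simply left alone in this sketch.
    """
    gain = platform_target - track_lufs
    return min(gain, 0.0)

print(playback_gain(-9.0))    # -5.0 : mastered hot, turned down 5 dB
print(playback_gain(-14.0))   #  0.0 : already sitting at the target
print(playback_gain(-20.0))   #  0.0 : quiet track left as-is here
```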

  • Normalization

    Normalization is simply a uniform volume adjustment—it turns the whole track up or down to hit a set level, either by peak or loudness. It doesn’t compress or limit anything, and it doesn’t change the dynamics. In mastering, it’s usually something I’ll do right at the start of a session, just to get all the songs in a project roughly hitting the same loudness before I touch anything else. That way, I’m not reacting to how hard a song is hitting when evaluating tone, feel, or dynamics. It’s about setting a clean and transparent starting point—not finishing the job.
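    As a sketch of how uniform that adjustment is, here is a toy peak-normalization pass in Python. The -1 dBFS target and the function name are illustrative, not any particular tool's behavior.

```python
def peak_normalize_gain(samples, target_dbfs=-1.0):
    """Linear gain that brings the highest sample peak to target_dbfs.

    A single uniform gain change: dynamics and tonal balance
    are completely untouched.
    """
    peak = max(abs(s) for s in samples)
    target_linear = 10 ** (target_dbfs / 20)
    return target_linear / peak

track = [0.1, -0.45, 0.3, -0.2]            # toy waveform, peak at 0.45
gain = peak_normalize_gain(track)
normalized = [s * gain for s in track]
print(round(max(abs(s) for s in normalized), 3))  # ~0.891, i.e. -1 dBFS
```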

  • Spotify Loudness Normalization

    Spotify, along with many streaming platforms, uses loudness normalization to ensure consistent playback volume across disparate tracks: songs that generally aren't from the same collection or album and often sound very different from track to track, both in sonic arrangement and in perceived loudness. If a track is mastered louder than the platform's target (typically around -14 LUFS integrated), it will be turned down during playback. This means pushing your track too loud in mastering provides no advantage, and overly limited tracks might lose impact when attenuated.

    Furthermore, the loudness specs are a moving target—they’re subject to change as the platforms evolve. So, while normalization appears to maintain consistency across diverse tracks, it's crucial to remember that it can also disrupt the dynamic flow of an album.

    Streaming specs shift. Platforms evolve. What stays constant is the need to honor the song—and the arc of the album. Mastering should serve the music, not the meter.

  • Metering

    Meters visualize what your ears might miss - or confirm a gut reaction you already had. They give a technical readout of your audio’s behavior, helping ensure balance, consistency, and compliance across playback systems.

    In mastering, we rely on several types:

    LUFS meters measure perceived loudness over time—critical for streaming normalization.

    True peak meters catch inter-sample overs that standard peak meters can miss.

    RMS meters show average signal power, useful for gauging overall energy.

    Phase and correlation meters reveal stereo width and mono compatibility issues.

    Spectrum analyzers show frequency distribution, helping identify buildup or weakness.

    Using these tools isn’t about chasing numbers—it’s about making intentional, musical decisions that translate.
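    A couple of those readouts are easy to sketch. The toy meter below computes RMS and sample-peak levels in dBFS; a real true-peak meter would oversample first, and the function names here are just for illustration.

```python
import math

def rms_dbfs(samples):
    """Average signal power of a block, expressed in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

def peak_dbfs(samples):
    """Highest sample peak in dBFS (no inter-sample detection here)."""
    return 20 * math.log10(max(abs(s) for s in samples))

block = [0.5, -0.5, 0.5, -0.5]     # toy square wave at half scale
print(round(rms_dbfs(block), 1))   # -6.0
print(round(peak_dbfs(block), 1))  # -6.0 (square wave: RMS equals peak)
```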

  • Dynamic Range

    Dynamic range is the difference between your quietest and loudest program material. It’s not just a technical spec—it’s an expressive tool. A track with preserved dynamics breathes and unfolds; a track with squashed dynamics risks becoming fatiguing or flat, especially when everything is pushed equally hard. Not many people want to listen to a waveform that looks like a sausage.

    From a mastering standpoint, managing dynamic range is about context. You might allow wide dynamics for a string trio or acoustic piece, while a club record may demand tighter control. But even in aggressive genres, microdynamics (the play between transients and sustained elements) still matter. The old adage holds true: if everything is loud, then nothing is.
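    One rough way to put a number on those microdynamics is crest factor, the peak-to-RMS ratio. A minimal Python sketch, using a sine wave (whose crest factor is known to be about 3 dB) as a sanity check:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a crude proxy for microdynamics.

    A sine wave measures ~3 dB; heavily limited program material can
    fall well under 6 dB, while dynamic acoustic music runs far higher.
    """
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# One full cycle of a sine wave: crest factor ~3.01 dB
n = 1000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
print(round(crest_factor_db(sine), 2))  # 3.01
```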

  • Clipping vs Limiting

    Clipping and limiting both control levels, but they play very different roles in mastering. Clipping is the brute-force method: when a signal exceeds the maximum level, the top of the waveform gets sliced off without discretion. That’s distortion by design (or by accident). Limiting, on the other hand, is the gentler cousin—gain reduction with a ceiling, often transparent if done right.

    Clipping can be used creatively—think drums or synths with bite—but it's never a substitute for musical loudness. Limiting is where we finalize loudness, shape dynamics, and protect from overs. One is a chainsaw, the other a scalpel. In mastering, the goal is to carve, not maul. Unless you are the agile Sith Lord trained by the evil Darth Sidious.
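    The chainsaw/scalpel contrast is easy to see in code. Below is a deliberately crude sketch: a hard clipper and a gain-scaling "limiter" with no attack, release, or look-ahead, so it illustrates only the gain-reduction idea, not how a real limiter behaves over time.

```python
def hard_clip(samples, ceiling=1.0):
    """Chainsaw: slice anything over the ceiling flat, creating distortion."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def naive_limiter(samples, ceiling=1.0):
    """Scalpel (grossly simplified): scale the whole block so its peak
    sits at the ceiling. Real limiters use time-varying gain envelopes;
    this static version only shows the gain-reduction idea."""
    peak = max(abs(s) for s in samples)
    gain = min(1.0, ceiling / peak)
    return [s * gain for s in samples]

burst = [0.2, 1.5, -0.8, 0.5]
print(hard_clip(burst))      # [0.2, 1.0, -0.8, 0.5] : waveform top removed
print(naive_limiter(burst))  # whole block scaled by 1/1.5, shape preserved
```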

  • Headroom

    Headroom is the space—literal and figurative—between your loudest audio peak and the digital ceiling (0 dBFS). It's the margin of error that avoids clipping, distortion, and truncated transients.

    Good headroom isn’t about making your mix “quieter”—it’s about giving your music the room to breathe without hitting the ceiling prematurely. For mastering, we’re watching for true peaks and inter-sample overs, and making sure we don’t slam into codec conversion limits later on. No one wants a track that sounds like it’s gasping for air.

  • True Peak

    A true peak is the highest level an audio signal reaches between the digital samples—where standard peak meters can’t see. These inter-sample peaks often sneak past your DAW's meters and cause distortion during playback or encoding (especially in lossy formats like MP3). Mastering engineers use true peak metering to catch these otherwise invisible overs and make sure your track doesn’t distort once it leaves the studio or is played downstream.

  • Inter-sample Peaks

    Inter-sample peaks are those sneaky distortions that occur between digital samples—just out of reach of standard peak meters. While your DAW might say everything’s sitting pretty below 0 dBFS, those peaks can actually overshoot when the waveform is reconstructed during playback or conversion (especially in lossy formats). The result? Harsh clipping or distortion that wasn’t there in your session.

    True peak meters are built to visually catch these digital mirages by oversampling and interpolating the waveform between sample points. Mastering engineers leave around -1 dBTP of headroom to dodge these artifacts and make sure your track translates cleanly across devices and codecs.
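    The oversample-and-interpolate idea can be sketched in a few lines. This toy estimator uses truncated sinc interpolation (real dBTP meters per ITU-R BS.1770 use a proper polyphase oversampling filter) on a classic worst case: a tone at a quarter of the sample rate whose samples all sit at ±0.97, yet whose reconstructed waveform overshoots full scale.

```python
import math

def true_peak_estimate(samples, oversample=4, taps=16):
    """Crude true-peak estimate via truncated-sinc interpolation.

    Evaluates the reconstructed waveform between the samples: the
    same idea (greatly simplified) behind dBTP meters.
    """
    n = len(samples)
    peak = max(abs(s) for s in samples)
    for i in range(n - 1):
        for k in range(1, oversample):
            t = i + k / oversample  # fractional position between samples
            acc = 0.0
            for j in range(max(0, i - taps), min(n, i + taps + 1)):
                x = t - j           # never zero: t is always fractional
                acc += samples[j] * math.sin(math.pi * x) / (math.pi * x)
            peak = max(peak, abs(acc))
    return peak

# Quarter-sample-rate tone sampled 45 degrees off phase: every sample
# sits at +/-0.97, but the waveform reconstructs to ~0.97 * sqrt(2).
pattern = [1, 1, -1, -1]
signal = [0.97 * pattern[i % 4] for i in range(64)]
print(max(abs(s) for s in signal))       # 0.97 : sample peak looks safe
print(true_peak_estimate(signal) > 1.0)  # True : inter-sample over revealed
```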

  • Frequency Stacking

    Frequency stacking happens when too many instruments are competing in the same frequency range, leading to a congested, smeared, or muddy mix. While this is often a mix issue, mastering can sometimes mitigate it if we’re careful.

    Picture a vertical slice of your frequency spectrum. If the kick, bass, and low synth are all stacked around 60–120 Hz, they’re going to fight for headroom and clarity. Same for vocals and snare in the midrange. When these overlaps pile up, you lose articulation, especially after compression or limiting. Think of it like the stereo field, but in frequency space: elements need their own acoustic “lane” to thrive.

  • Mastering for Vinyl

    Vinyl isn’t just another playback format—it’s a physical medium with real, tangible limitations. Mastering for vinyl means respecting its mechanical nature: too much low-end in stereo can throw the stylus around; overly bright transients can mistrack; inner grooves are less forgiving than outer ones.

    You’re also dealing with level-to-length trade-offs: the louder the cut, the shorter the side. And since vinyl is analog, your digital habits won’t always translate—there’s no “undo” once the lacquer is cut. Vinyl mastering requires rebalancing, sometimes de-essing, and occasionally rethinking your low-end altogether. A translation pass is essential.

  • Bit Depth

    Bit depth defines how finely we can capture the amplitude of each audio sample. A 16-bit system gives us 65,536 discrete volume steps, which equates to around 96 dB of dynamic range. A 24-bit system expands that to over 16 million steps, offering more detail and 144 dB of dynamic range, providing a more nuanced sound.
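    Those step counts and dynamic-range figures follow from the roughly-6-dB-per-bit rule, sketched here (the function names are just for illustration):

```python
def amplitude_steps(bits):
    """Number of discrete amplitude values a fixed-point word can encode."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Approximate dynamic range of a fixed-point system: ~6.02 dB per bit."""
    return 6.02 * bits

print(amplitude_steps(16))          # 65536 discrete steps
print(round(dynamic_range_db(16)))  # ~96 dB
print(round(dynamic_range_db(24)))  # ~144 dB
```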

    Then there's 32-bit float—the mastering engineer’s dream. Unlike fixed bit depths, 32-bit float uses floating-point math, offering virtually unlimited internal headroom. This means even if your mix peaks above 0 dBFS, it won’t clip, and we can pull it back without introducing distortion. It's like giving us an audio safety net, ensuring we can recover your mix without compromising quality.

    While we’ll still apply dither for final delivery formats, 32-bit float is the best option when handing off a mix to mastering—it's flexible and is as future-proof as it gets.

  • Dither

    Dithering is the controlled application of low-level noise to mask quantization errors when reducing bit depth—say, from 24-bit to 16-bit for CD delivery. Without dither, truncating the bit depth creates distortion that is correlated with the signal. With dither, that distortion becomes randomized noise and far less perceptible.

    There are various dither types—TPDF (triangular), rectangular, and noise-shaped. Noise-shaped dither pushes more of that noise into less audible frequency bands, making it psychoacoustically friendlier. Dither is a one-time-only kind of thing—apply it once, during final bit reduction. Any more than that, and you’re just layering in unnecessary noise.
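    Here is a minimal sketch of TPDF dither at work: summing two uniform random values gives a triangular distribution, added just before the word length is reduced. The function shape is illustrative, not any particular plugin's algorithm.

```python
import random

def quantize(sample, bits, dither=True):
    """Reduce a float sample in [-1, 1) to a fixed bit depth.

    TPDF dither: two uniform random values summed span +/-1 LSB with a
    triangular distribution; adding that before rounding decorrelates
    the quantization error from the signal, turning harmonic distortion
    into benign broadband noise.
    """
    steps = 2 ** (bits - 1)
    if dither:
        sample += (random.random() - random.random()) / steps
    return round(sample * steps) / steps  # snap to the nearest step

random.seed(1)
x = 0.30001  # a value that falls between 16-bit steps
print(quantize(x, 16))                # nearest step, randomized by dither
print(quantize(x, 16, dither=False))  # plain deterministic rounding
```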

    Dithering, in general, is next-level nerdiness. Dan Worrall eloquently explains it without math in a tremendous YouTube video.

  • Sample Rate Conversion (SRC)

    Sample Rate Conversion (SRC) is the process of changing the sample rate of digital audio, such as converting from 96kHz to 44.1kHz. When done well, it’s invisible, and you won’t even know it happened—preserving the clarity and detail of the original recording. But if done poorly, SRC can introduce distortion, aliasing, or time-smearing effects that compromise the audio’s integrity and intention.

    In mastering, SRC is crucial for ensuring your audio translates seamlessly across multiple formats and platforms, whether for CD, streaming, or broadcast. High-quality SRC algorithms maintain the fidelity of your mix, making sure that any change in sample rate doesn’t impact the punch or clarity of your track.
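    For illustration only, here is the crudest possible SRC: linear interpolation. It makes the mechanics visible while demonstrating exactly what high-quality converters avoid—there is no anti-alias filtering here, so in practice it would audibly alias and roll off highs.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive sample-rate conversion by linear interpolation.

    Production SRC uses polyphase or windowed-sinc filters with proper
    anti-alias filtering; this sketch only shows the resampling idea.
    """
    ratio = src_rate / dst_rate
    out_len = int(len(samples) / ratio)
    out = []
    for n in range(out_len):
        pos = n * ratio                  # position in the source signal
        i = int(pos)
        frac = pos - i
        nxt = samples[i + 1] if i + 1 < len(samples) else samples[i]
        out.append(samples[i] * (1 - frac) + nxt * frac)
    return out

ramp = [i / 96 for i in range(96)]       # one toy "96 kHz" buffer
converted = resample_linear(ramp, 96000, 44100)
print(len(ramp), "->", len(converted))   # fewer samples, same duration
```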