Collateral Damage in The Loudness War

Jan 10, 2008 14:59

Rolling Stone, bastion of solid music journalism that it is, recently published an article, "The Death of High Fidelity," a cursory overview of what audio engineers have come to term "the loudness war," along with a few theories as to its root causes.

Naturally, I think they get it mostly wrong. Or at least take valid points and run the wrong way with them, bolstering them with tangentially relevant quotes.


For the uninitiated, the "Loudness War" refers to the ever-increasing perceived loudness of pop music. Compare a CD made today with a CD made 20 years ago and the difference in loudness is marked. Metering will show that they both peak just below 0 dB, the point of digital distortion, but the newer one sounds louder. Usually a lot louder. This is due to a process known as limiting, which is itself a specific form of a process known as compression. Compression looks for peaks in the sound and squashes them down a bit (following some rules for attack, decay, hold, threshold, etc.), essentially leveling out a lot of the peaks and valleys of a waveform and allowing the whole thing to be turned up a little more. Improvements in signal-processing hardware and software have allowed this process to become more precise and intelligent, meaning you can now squeeze a lot more limiting out of a track than you used to without adding weird artifacts.

This seems harmless enough, except that after a while you squeeze the dynamics out of a song and kill a lot of the musicality. Why would anyone want to do this? Well, therein lies the crux of the loudness war: on the radio and in the club, if the next song is louder, it grabs your attention and you're more likely to remember it (even if it's had its dynamics demolished). Loudness = sales, I guess. From a historical perspective, it really began so radio stations could maintain even signal strength without overmodulating their bandwidth and incurring the wrath of the FCC.
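
To make that concrete, here's a deliberately crude sketch of the basic move - squash the peaks, then apply makeup gain. It's Python with numpy, the threshold and the fake "song" are invented for illustration, and a real limiter uses attack, release, and look-ahead rather than blunt clipping, but the effect on the meters is the same: the peaks stay where they were while the average level, and thus the perceived loudness, goes up.

```python
import numpy as np

def crude_limiter(samples, threshold=0.5, makeup_to=0.99):
    """Hard-limit everything above the threshold, then turn the whole
    signal back up ("makeup gain"). Real limiters use attack/release
    envelopes and look-ahead instead of blunt clipping; this only shows
    why the result can be made louder at the same peak level."""
    limited = np.clip(samples, -threshold, threshold)  # squash the peaks
    gain = makeup_to / np.max(np.abs(limited))         # raise the rest
    return limited * gain

# A fake "song": a quiet body with a few loud transient hits.
rng = np.random.default_rng(0)
song = 0.2 * rng.standard_normal(44100)
song[::4410] = 0.95

loud = crude_limiter(song)
# Both versions peak near full scale...
print("peaks:", np.max(np.abs(song)), "vs", np.max(np.abs(loud)))
# ...but the limited one has a much higher average level.
print("RMS:  ", np.sqrt(np.mean(song**2)), "vs", np.sqrt(np.mean(loud**2)))
```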

The author of the article makes a few newbie mistakes: he lays the blame at the feet of mastering engineers, attributes the problem to the mp3's lower bitrate, makes Pro Tools the tool of the devil, confuses audio compression with data compression, and provides a number of supporting quotes that may or may not have anything to do with the actual argument at hand.

First off, blaming the mastering engineer is a bit like blaming the messenger. These are people who stake their reputations on the finished product sounding good, and who make their living through having undamaged, well-trained ears. They aren't jacking the limiting to eardrum-shattering levels by choice. Someone - either the producer, a label exec, or in some asinine cases the artist themselves - is specifically asking for this sort of treatment.

Nor is the mp3 really the culprit. Yes, its sound quality is lower than a CD's, but the loudness wars have been going on for decades, long predating the mp3. They were going on in the days of the cassette, which was far worse for fidelity than the average mp3 file. The author goes on to argue that mp3s are compressed and lose some frequency information - and this is true, but it doesn't really tie in to the audio compression/limiting argument. It's just the author saying "hey, audio quality sucks all around." And given the history with cassette, some of the spotty vinyl practices of the past, and even the early digital recorders used for CDs, one can't really say that overall consumer audio fidelity has decreased. Even the iPod can't really be blamed - who didn't have a cheap cassette Walkman in the '80s?

The article has a go at Pro Tools as well, calling it a "word processor" for audio. Furthermore, he claims that Auto-Tune can "fix a bad vocalist" and Beat Detective can make any drummer "sound like a professional." If only this were true. This is an oft-repeated, naive claim that audio technology can make a bad musician sound good. Trust me, it can't. If the vocals are way off, Auto-Tune won't be able to fix them. If the drummer can't keep time, Beat Detective will not save the recording. These tools can fix the accidental misstep (saving the artist from having to re-record the whole thing for one missed note or flubbed beat) or be used as an effect (the "Cher effect," or changing the groove of a drum line, or whatnot), but they are not panaceas.

Pro Tools isn't as straightforward as a word processor, either. But then, neither was a mixing board. Blaming the "ease" of Pro Tools (or, for that matter, any other digital audio workstation like Logic, Cubase, Performer, etc.) is just a music snob's sour grapes. The tools are cheaper, and the entry point to audio engineering is no longer "tea boy at a studio and 8 years as an intern." It could be argued that the lack of training leads to lower-quality output, but such arguments tend to be predicated on the thesis of "music used to be better in my day." One could counter-argue that perhaps there was better music 30 years ago, but there were also far fewer people with the means to make it, and consequently a larger number of good musicians who were never heard. All in all, though, it's an irrelevant argument.
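
For the curious, the kernel of both tools is just quantization: snap a detected pitch to the nearest semitone, or a detected drum hit to the nearest grid line. Here's a toy sketch (Python; the frequencies and tempo are invented, and the real Auto-Tune and Beat Detective do far more work tracking pitch and stretching audio) of why that rescues a near miss but not a genuinely wrong performance:

```python
import math

A4 = 440.0  # reference pitch; equal temperament assumed

def snap_pitch(freq_hz):
    """Snap a detected pitch to the nearest equal-tempered semitone -
    the basic move behind pitch correction."""
    semitones = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones / 12)

def snap_beat(time_s, bpm=120, subdivision=4):
    """Snap an onset time to the nearest grid position (16th notes at
    120 bpm here) - the basic move behind drum quantization."""
    step = 60.0 / bpm / subdivision
    return round(time_s / step) * step

# A slightly sharp A (448 Hz) gets pulled back home to 440 Hz...
print(snap_pitch(448.0))
# ...but a note sung a third away just becomes a different, in-tune wrong note (~523 Hz, a C).
print(snap_pitch(523.0))
# Likewise, a hit that's a hair late gets nudged onto the grid...
print(snap_beat(0.51))   # -> 0.5
# ...while a hit that's badly off just snaps to whatever line happens to be nearest.
print(snap_beat(0.69))   # -> 0.75
```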

It should be noted, though, that the relative ease of use of lower-cost mastering software could be contributing to the loudness wars. 20 years ago, a singer-songwriter couldn't spend $300 and get near-pro-quality mastering gear. Now a few hundred dollars will get you a program that does a pretty impressive job. The mastering process is by definition fairly surgical - it's fixing rogue frequencies, managing level balance across an album, adding a little "shine and smear" if necessary so the album hangs together right - and as with most surgical tools, it's relatively easy to lop your own arm off with it. Good mastering doesn't just require the ear and the patience; it requires a good acoustic room, good speakers, and clean signal chains, none of which come free with the purchase of iZotope Ozone. The average artist can get these things now, too, but they need to know they need them first, and not having them can mean the difference between a clean, shiny master and a bass-distorted, ear-wearying mess. Likely, though, this isn't the problem, as the kind of artist who does their own mastering isn't the one getting the radio rotation that demands the squashed signal.
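
For what it's worth, the "level balance across an album" part of the job is easy enough to describe in code. A minimal sketch (Python with numpy; the track names and the target level are hypothetical, and a real mastering engineer works with perceptual loudness measurements and their ears rather than bare RMS):

```python
import numpy as np

def rms_dbfs(samples):
    """RMS level in dBFS for float samples in the range [-1, 1]."""
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

def album_gain_offsets(tracks, target_dbfs=-14.0):
    """How many dB each track would need to move to sit at a common target.
    Real mastering balances by perceptual loudness and by ear, not bare RMS;
    this only shows the bookkeeping side of the job."""
    return {name: target_dbfs - rms_dbfs(data) for name, data in tracks.items()}

# Hypothetical album: three tracks as already-decoded float sample arrays.
rng = np.random.default_rng(1)
album = {
    "01_opener": 0.10 * rng.standard_normal(44100),
    "02_ballad": 0.03 * rng.standard_normal(44100),
    "03_single": 0.20 * rng.standard_normal(44100),
}
for name, offset in album_gain_offsets(album).items():
    print(f"{name}: {offset:+.1f} dB to reach the target level")
```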

The article then presents us with a lot of quotes from various famous people about how the music sounded better in the studio, or how music today doesn't sound as good, etc. Those are all well and good, but they don't mean that crazy kids today with their iPods and Pro Tools are the problem. A bad mix has been a bad mix since Thomas Edison first put "Rock Me Amadeus" on a wax cylinder.

Where the article does get it right is with the waveform displays. Well, sort of. You can clearly see in some cases that modern tracks have been squished until the dynamic range is just plain gone from the song. The section showing how this has happened to remasters is a little more nebulous, since quite often the remaster is louder but not terribly so, and certainly the signal processing is cleaner than it was when the album was first recorded and mastered in 1977 or whenever.
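
Those waveform pictures can be put into a number. A rough sketch (Python with numpy, on synthetic signals rather than real masters) of the peak-to-RMS "crest factor" whose collapse the filled-in waveforms are showing:

```python
import numpy as np

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB - the gap between the loudest instants and
    the average level. It's this gap that disappears in the squished
    waveform pictures."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak / rms)

# Synthetic stand-ins for an older, dynamic master and a loudness-war
# remaster: same peak level, very different amounts of limiting.
rng = np.random.default_rng(2)
dynamic = 0.15 * rng.standard_normal(44100)
dynamic[::2205] = 0.95                          # transient peaks survive
squashed = np.clip(dynamic * 5.0, -0.95, 0.95)  # driven hard into a limiter

print("dynamic master crest factor:  %.1f dB" % crest_factor_db(dynamic))   # well into the teens
print("squashed master crest factor: %.1f dB" % crest_factor_db(squashed))  # just a few dB
```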

Pretty much everyone in the technical arena agrees that the Loudness War is a blight on music, and that in an era of digital signals and high bandwidth it's fairly unnecessary. It will likely take a consumer movement to stop it, though, because the ones driving the recent "battles" aren't the engineers or the toolmakers or the artists; they're the people at the top asking for their songs to be louder than everyone else's. It's easy to demonize the "suits," but it isn't always them asking for it - sometimes it's a near-deaf dance artist who doesn't know what they're asking for, or a producer who thinks it's a good idea. At any rate, Rolling Stone is just feeding the confusion by publishing articles like this. The target audience will end up blaming the wrong people and the wrong tools, and that doesn't move things in the right direction.

music, mastering
