> I've been doing a lot of mastering and mixing lately on a project and have
> learned a lot of new methods and techniques. I've heard folks say
> mastering and mixing is a black art, now I know why. In these particular
> songs, they sounded wonderful on my headphones. There were some really
> cool and deep things going on in the 44 Hz range and below, and some others
> in the 62 Hz range. It all sounded great through my headphones, but those
> frequencies were wreaking havoc on my consumer stereo systems - car stereo,
> portable stereo, etc. They were really prominent resonant frequencies that
> were rattling the hell out of the speakers and causing distortion. And it
> wasn't a level problem...all my stuff was compressed/limited and below
> 0 dB, and there was no redlining in my original recordings. I only had to
> address the troublesome resonant frequencies. So, I had to go back and
> re-master the files, adding a high-pass filter that rolled everything off
> below 60 Hz. That did the trick, but I really miss the sound in the
> headphones. And I'm sure there are some hi-fi systems that would have
> reproduced the original files well, but I can't expect everyone to have a
> system like that.
>
> Then I started fine-tuning some of the other songs, doing a frequency
> spectrum analysis, watching and listening for other resonant frequencies,
> unusual spikes, etc., and correcting them with various parametric EQs and
> so on. Then it got complicated: if I was altering the whole mix, I could
> not fix a problem from one instrument without changing the frequency
> content of another instrument...so I go back to the source tracks/WAVs,
> etc., etc. I could spend hours and hours on just one song and still not
> be satisfied with the results, or waver between two different approaches.
> Is there a simpler approach?
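>
> Just to make the above concrete, here is roughly what the two things I
> keep doing by hand look like if you sketch them in Python with scipy - a
> quick look at the low end of the spectrum, then the blunt roll-off below
> 60 Hz. The file names, the 200 Hz window, and the filter settings are
> placeholders, not a recipe:
>
>   import numpy as np
>   from scipy.io import wavfile
>   from scipy.signal import butter, sosfiltfilt
>
>   rate, audio = wavfile.read("mix.wav")          # hypothetical file name
>   audio = audio.astype(np.float64) / 32768.0     # assume 16-bit PCM
>
>   # Quick look at the low end: the five strongest FFT bins below 200 Hz.
>   mono = audio.mean(axis=1) if audio.ndim > 1 else audio
>   mags = np.abs(np.fft.rfft(mono))
>   freqs = np.fft.rfftfreq(len(mono), d=1.0 / rate)
>   low = freqs < 200
>   for f, m in sorted(zip(freqs[low], mags[low]), key=lambda p: -p[1])[:5]:
>       print(f"strong bin near {f:6.1f} Hz, magnitude {m:.1f}")
>
>   # The blunt fix: 4th-order Butterworth high-pass at 60 Hz, zero-phase.
>   sos = butter(4, 60.0, btype="highpass", fs=rate, output="sos")
>   filtered = np.clip(sosfiltfilt(sos, audio, axis=0), -1.0, 1.0)
>   wavfile.write("mix_hp60.wav", rate, (filtered * 32767).astype(np.int16))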
>
> I'm wondering what others use as a consistent approach to
> mixing/mastering their music. For example, after you remove the DC
> offset, do you apply a particular approach to EQ? What about
> compression/limiting? On average, how low do you set the threshold?
> Do you suck the dynamic range out of your mixes to maximize volume, or are
> you very conservative and preserve as much of the original dynamic range
> as possible, sacrificing some volume? What sort of tools are you using? I
> use the Waves L2, and the whole suite of others in that package. Ever use
> Waves MaxxBass? I read some articles that recommended it during the mastering
> process, but I did not like the results. It altered too many other
> frequencies in my mix beyond my original intent.
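>
> (When I say "remove the DC offset," by the way, I just mean re-centering
> each channel around zero - subtracting the per-channel mean, or an
> equivalent sub-audio high-pass. A rough sketch, with a placeholder file
> name:)
>
>   import numpy as np
>   from scipy.io import wavfile
>
>   rate, audio = wavfile.read("mix.wav")      # placeholder file name
>   audio = audio.astype(np.float64)
>   offset = audio.mean(axis=0)                # per-channel DC component
>   print("DC offset per channel:", offset)
>   wavfile.write("mix_nodc.wav", rate, (audio - offset).astype(np.int16))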
>
> Moreover, the idealist/purist in me would like to preserve as much of my
> original dynamic range and frequency character as possible. And, quite
> honestly, if I ever catch a sound guy altering the EQ on my guitar when it
> was not meant to correct a problem but only to serve his own idea of how
> a guitar should sound, he will hear some sharp words from me. I spend a
> lot of time on the tone of my guitar, and do not appreciate a sound guy
> butchering it because of his own sound aesthetic. As they say, "If it
> ain't broke, don't fix it."
>
> So, if I want to preserve as much of my dynamic range and EQ as possible,
> what is the bare minimum I should be doing to my final mixes to ensure
> they don't generate problems on the average listener's stereo system? One
> source I found said to eliminate anything below 60 Hz because most systems
> wouldn't be able to reproduce it. I suppose if I wanted to be a purist, I
> would only ensure my overall level is at or close to 0 dB, and not apply
> any compression whatsoever...because once you do that, you are already
> altering the original dynamic range of the piece. Then, in principle, I
> should not have to mess with frequencies with EQ whatsoever, unless there
> are serious playback issues on common stereo systems. That is the
> direction I would like to head, but I struggle with competing with other
> mixes out there in the same genre that are so ridiculously loud because of
> the amount of compression/limiting applied, followed by level increases.
> How much of a change in dynamic range, from original source to mastered
> recording, can a human ear identify? If, just as an example, I start with
> a -60 dB to 0 dB range (where only 10% of my material is above -10 dB), and
> master my file so that 40% of my material is above -10 dB, what am I
> sacrificing to obtain an overall perceived increase in level? I suppose
> this is where the black art comes in, because it's not as if there were a
> law of physics that dictates how this should be done; rather it is based
> on subjective or relative engineering practices.
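>
> (For what it's worth, the way I would actually measure that example is
> the fraction of short RMS windows sitting above -10 dBFS, before and
> after limiting. A rough sketch - the window length and threshold are
> arbitrary, and the file names are placeholders:)
>
>   import numpy as np
>   from scipy.io import wavfile
>
>   def fraction_above(path, threshold_db=-10.0, window_ms=50):
>       rate, audio = wavfile.read(path)
>       audio = audio.astype(np.float64) / 32768.0    # assume 16-bit PCM
>       if audio.ndim > 1:
>           audio = audio.mean(axis=1)                # fold to mono to meter it
>       win = int(rate * window_ms / 1000)
>       frames = audio[: len(audio) // win * win].reshape(-1, win)
>       rms_db = 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)
>       return (rms_db > threshold_db).mean()
>
>   print("source mix:", fraction_above("mix_raw.wav"))       # ~0.10 in my example
>   print("mastered:  ", fraction_above("mix_limited.wav"))   # ~0.40 in my example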
>
> Any thoughts or best practices would be appreciated on how to be both
> a sound-source preservationist and a playback-friendly sound engineer
> at the same time.
>
> Kris
>
>> Krispen Hartung wrote:
>>> As many folks know on the list, I use laptop processing via max (looper,
>>> other octave effects) that completely transform the sound of my guitar.
>>> It is not uncommon for me to play a low E on the guitar (82.4 Hz), and
>>> then apply a two octave drop. I'm not sure what that would be.
>> Divide the frequency by two for each octave you drop. (Multiply by two
>> for every octave you raise.) 82.4 / 4 = 20.6 Hz. You're definitely into
>> the subwoofer's range.
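>>
>> (Or, if you ever want to script it for any number of octaves, it is just
>> this:)
>>
>>   def drop_octaves(freq_hz, n):
>>       return freq_hz / 2 ** n    # halve once per octave dropped
>>
>>   drop_octaves(82.4, 2)          # -> 20.6 Hz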
>>
>> Cheers,
>>
>> Bill
>>