The situations where I have learned the most about realistic drum programming have been when I was sitting with a drummer in front of a computer, programming drums for a piece of music. I came to realize that I had many incorrect assumptions about drumming, and the first area where I had to shape up my knowledge was "tightness and quantization". I had been thinking that the exact timing of machines is bad for the feel of a drum track, and that human drummers sound more organic and musical because not every hit is aligned to an exact time grid. With the first drummer I thought "wow, this is a funny guy that wants to quantize everything", but then I found this attitude among all the drummers I worked with: no one is as keen on the studio's Quantize button as real drummers are. This totally opposed my previous assumptions, and still the drum tracks we came up with sounded better and more human than anything I had ever been able to create on my own. Why? Well, the answer is not "timing humanization" but "accent exactness". That I learned by losing a bit of my ego ;-)

This has directly to do with the "ghost hits" that Buzzrap mentions, but there is a lot more to it. With accents (handled by "velocity" in MIDI, along a 128-step range) you create loops within an ongoing rhythm: accent loops that may go on for long periods like four, eight, twelve or sixteen bars. These tides of subtle "waves" in accent (how hard the virtual drummer bashes the skins) have to be applied in balance with drum fills.

A related technique, for more progressive music styles, is to make accent loops that run polyrhythmically against the basic beat, much like combining loops of different lengths (a typical example is syncing a three-beat loop against a four-beat loop); the first sketch below shows this three-against-four idea in MIDI terms. This technique is also common in melody composition (King Crimson, minimal techno), as a "less is more" way to create two complementary stories with just one melody line. But drummers do that all the time within the rest of the music; drumming can be a polyrhythm factory in its own right.

The bottom line of this is that most sequencer applications' "Humanize" buttons are crap. They apply randomization to the placement in time of each drum hit, and that is all bad. If you look at only one or two bars of a great drummer's playing it may seem like "a little randomness is at play", and this is where the programming approach often fails. When you look at how a great drummer plays along the full length of a piece, you will find that any deviation from the time grid is never caused by randomness; there is always an intention present at every level of the drumming (the second sketch below contrasts random jitter with an intentional timing drift).

So to take on all this knowledge and apply it in programming is a huge task. It is technically challenging too, because sequencers are not always designed by musicians, so there are no easy ways to do things like smoothly letting a hi-hat pattern drift from a laid-back to an aggressively leading general timing over a couple of bars. One constantly needs to balance the general timing of the kick drum, snare and hi-hat against each other. These three timing areas depend not only on the music and the instrumentation of the orchestra but also on the mixing. And here is the second thing I learned from working closely with drummers in the studio: a drum rhythm carries the music totally differently depending on how "sharp" or "muddy" the drum mixing is.
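To make the three-against-four accent idea concrete, here is a minimal sketch, assuming Python with the mido library; the velocity values and the file name are just illustrative choices (note 42 is the General MIDI closed hi-hat). A 3-step velocity cycle runs over a 16-step bar, so the accents drift against the grid and only realign after 3 bars, with nothing random anywhere:

```python
# Polyrhythmic accent loop: a straight 16th-note hi-hat line whose
# velocity cycle is 3 steps long, so the accents drift against the
# 4/4 grid and only realign after 3 bars (48 sixteenths).
import mido

TICKS_PER_BEAT = 480
STEP = TICKS_PER_BEAT // 4            # one 16th note
ACCENT_LOOP = [112, 64, 80]           # 3-step velocity cycle (values illustrative)
BARS = 3                              # lcm of 3 steps and 16 steps/bar = 3 bars

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

for i in range(BARS * 16):
    vel = ACCENT_LOOP[i % len(ACCENT_LOOP)]   # exact accent cycle, nothing random
    track.append(mido.Message('note_on', channel=9, note=42,   # GM closed hi-hat
                              velocity=vel, time=0))
    track.append(mido.Message('note_off', channel=9, note=42,
                              velocity=0, time=STEP))

mid.save('hihat_accent_loop.mid')
```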
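And here is a toy comparison of intentional drift versus a Humanize button, in plain Python; the function names and the plus/minus 12-tick range are my own illustrative choices, not anything from a real sequencer. The drift version ramps smoothly from behind the beat to on top of it, the way a drummer leans into a section; the humanize version is just noise:

```python
# Intentional timing drift versus random "humanization".
# Offsets are in MIDI ticks (480 per beat, as in the sketch above).
import random

def drift_offsets(num_hits, start=12, end=-12):
    """Per-hit offsets ramping from laid-back (late) to pushing (early)."""
    return [round(start + (end - start) * i / (num_hits - 1))
            for i in range(num_hits)]

def humanize_offsets(num_hits, spread=12):
    """What a typical Humanize button does: uncorrelated random jitter."""
    return [random.randint(-spread, spread) for _ in range(num_hits)]

print(drift_offsets(32))      # two bars of 16ths: smooth and directional
print(humanize_offsets(32))   # random: no intention behind any hit
```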
But there is more to it than timing and accents: modern DAW mixing environments have plugins that let you shape transients like modeling clay, and this is a very important technique for "realistic drum programming" (although it isn't actually programming but rather sound design at its most extreme dynamics; there's a rough sketch of the idea in the PS below). If you solo a "realistic" drum track it may not sound realistic at all, yet in the mix it does, and this is a psychological phenomenon of human hearing. The brain constantly generalizes a sound from its very first milliseconds; hence the importance of the attack in drumming and in producing recordings of drumming.

The point is that a recording cannot generalize creatively the way our ears and brains are so good at doing, so all production of recorded music has to "cheat" to find the best ways to "imply directions" that bring up the experience of "realistic" in the listener. This is why you can press the mixer's solo button on a timpani channel and A/B compare it with pounding the real timpani: the real thing just won't fit onto the recording medium, and the message gets lost. Real instruments have developed over centuries because they tend to sound great together when listened to live (with the exceptionally efficient human brain analyzing early reflections in 3-D). Recording is totally different: one has to minimize the real thing to make it fit sonically, and also introduce nifty tricks of dynamic processing to replace the absent "human ear/brain code".

Techniques to make these dynamics happen in a recording also include setting up sub-mixes of drum groups and having them interact dynamically by sidechaining (often including bass lines, "effect reverb breathing" etc.; see the second sketch in the PS), or maybe treating certain frequency ranges of a sound differently. All of this can be heard on records, so it is easy to learn by keeping reference recordings at hand while working on a mix of your own: compare, adjust your work, compare, adjust your work, compare... Doing that for some time makes you learn progressively as you pick up "new senses". The more you learn, the faster you learn more.

Greetings from Sweden
Per Boysen
www.perboysen.com
http://www.youtube.com/perboysen
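PS: A couple of these mixing tricks are easier to show than to tell, so here are two rough sketches, both assuming Python with numpy; the function names, smoothing coefficients and gain amounts are all illustrative, not taken from any particular plugin. First, a bare-bones transient shaper: two envelope followers with different attack speeds track the signal, and where the fast one is louder than the slow one we are inside an attack, so that difference drives a gain boost (or a cut, with a negative amount):

```python
# Differential-envelope transient shaper (coefficients illustrative).
import numpy as np

def envelope(x, attack, release):
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coef = attack if s > level else release   # react fast up, slow down
        level += coef * (s - level)
        env[i] = level
    return env

def transient_shaper(x, amount_db=6.0):
    fast = envelope(x, attack=0.5, release=0.001)
    slow = envelope(x, attack=0.05, release=0.001)
    hit = np.clip(fast - slow, 0.0, None)         # positive only on attacks
    gain_db = amount_db * hit / (hit.max() + 1e-9)
    return x * 10.0 ** (gain_db / 20.0)           # per-sample attack gain
```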
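And second, sidechain ducking, the "reverb breathing" trick, under the same assumptions: the kick bus drives an envelope follower, and that envelope pulls the reverb or bass bus level down so the mix breathes in time with the drums:

```python
# Sidechain ducking: 'kick' is the key signal, 'bus' gets ducked
# (depth and time constants illustrative).
import numpy as np

def sidechain_duck(bus, kick, depth=0.7, attack=0.3, release=0.002):
    env = np.zeros_like(kick)
    level = 0.0
    for i, s in enumerate(np.abs(kick)):
        coef = attack if s > level else release   # react fast, recover slowly
        level += coef * (s - level)
        env[i] = level
    ctrl = env / (env.max() + 1e-9)               # normalize to a 0..1 control
    return bus * (1.0 - depth * ctrl)             # duck hardest on kick hits
```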