Re: Re: Ninjam techbabble (was: Chinapainting article)
>I don't want to come off arrogant but when I listen to the examples of
>online jam sessions (Ninjam) they sound horribly out of sink?
You're not coming off arrogant, merely ignorant.
The short answer: if the playing on Ninjam's public servers sounded
completely untight or out of sync, it may be for the same reason most
amateur punk bands sound out of sync (btw, really liked your misspelling
there), namely a lack of skill, or because of a bad tech setup, or because
the players follow an approach that makes the experience less enjoyable
for a third party listening in (see the explanation below).
The rest of the message is rather long-winded...
Ninjam has this concept that it "rounds up" transmission delays over the
net in a way that makes musical sense iff you play in time and have the
interval set to a meaningful value.
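To make that concrete, here's a rough back-of-the-envelope sketch in
Python (not Ninjam code; the tempo and beats-per-interval values are made
up for illustration) of what "rounding up" to the interval means for the
listener:

    # Rough sketch: Ninjam delays everything you hear from the other
    # participants by one full interval, so the delay is always a whole
    # number of beats instead of some arbitrary network latency.
    bpm = 120            # session tempo (example value)
    interval_beats = 16  # beats per interval, e.g. 4 bars of 4/4 (example)

    seconds_per_beat = 60.0 / bpm
    interval_seconds = interval_beats * seconds_per_beat
    print("Others are heard %.1f s late (= %d beats)"
          % (interval_seconds, interval_beats))
    # -> 8.0 s here; as long as everybody plays in time to the shared
    #    tempo, a 4-bar offset still fits the groove.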
So for this to work (for beat-based music), the musicians need to know
what Ninjam's tempo and beat offset are. There are several possibilities,
all of which are slightly less than optimal:
1. Use the metronome. Downside: the metronome is output on the same audio
output as the music, so you can't (or at least most probably wouldn't want
to) use this approach when playing to an audience.
Solution: Os has developed a VST version of the Ninjam client which
features separate outputs. Unfortunately, no public version exists. It
should be sufficient if one beat-oriented player (e.g. the drummer) listens
to the metronome; the others can just play along with him.
I never did that myself.
2. Use MIDI clock. One of the clients (the wasabi client) does output a
MIDI clock based on Ninjam's time. Downside: the wasabi client won't
handle ASIO interfaces. Workaround: use two computers, one of them running
the normal client which handles audio and one of them running the wasabi
client which outputs MIDI clock. Problem here: you have a timing offset,
which will hurt you iff any of the other participants use either the same
approach or the metronome. Workaround: set up your MIDI-synced equipment
so it compensates for that offset (all sequencers and drum machines I know
can do that; see the rough sketch after the examples). Again, only one
participant (ideally the "beat source") would need this cumbersome setup.
Examples: Both the tracks "Aspirin Age" and "Virtual Baggy Pants" on
http://www.moinlabs.de/i_kqpda.htm were done using this approach.
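To illustrate the offset compensation mentioned in approach 2, here's a
rough Python sketch. The offset value is hypothetical and has to be
measured in your own rig, e.g. by recording Ninjam's metronome and your
clock-synced drum machine on the same track and comparing the onsets:

    # Hypothetical numbers - measure the offset in your own setup.
    measured_offset_ms = 35.0   # drum machine lags behind Ninjam's beat
    bpm = 120                   # session tempo (example value)

    # Many sequencers/drum machines accept the correction either in ms or
    # in MIDI clock ticks (24 ticks per quarter note):
    ms_per_tick = (60000.0 / bpm) / 24.0
    correction_ticks = round(measured_offset_ms / ms_per_tick)
    print("Advance the sequencer by %.0f ms (about %d MIDI clock ticks)"
          % (measured_offset_ms, correction_ticks))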
Other approaches:
3. In a situation with only two participants (note that in this
discussion, a "participant" is one client - which can very well feed the
playing of more than one musician), one participant can for some time
establish a groove and ignore what he hears from the other side (or at
least not let himself be affected by its timing). The other participant
can then simply play on top of that. This sounds good (meaning: in sync)
for that other participant, not so much for the first one (or for anyone
else who just connects to the server and listens).
Examples: the groovy part at the end of "The Milkey Way - Yivli Antare".
Here, Charlie played the groovy bass part (not necessarily in sync with
Ninjam's tempo), and I played on top of that. "Advantages in Freeform
Breeding" - here, both Rick Walker (percussion) and Krispen Hartung
(guitar) were one participant, and I was the other.
In both cases, the album uses the recording done on my side.
4. Also, you can simply ignore any beat (this works best if playing
non-beat-based music). This is the approach that was afaik also used
during Warren's and Per's session at the Y2K6 kyberloopfest. The funny
thing here is that this sounds completely different (not necessarily in an
ugly way) at each participant's end.
Examples: the rest of "The Milkey Way", "Below this Sneppah". There will
also be a track on the second volume of the kybermusik series (due 1st
half of 2008) where I'm gonna combine the independent recordings of each
participant from a session I did with Tony K.
Things that can still go wrong:
* as I said, when using the "two computer MIDI approach" AND (somebody
else is also using it OR somebody else plays to the metronome), you have
to compensate for the timing offset between your two computers.
* in approach 3, this normally only sounds good in exactly one place.
* Ninjam doesn't compensate for latency induced by audio interfaces
(although it could - this would be a nice thing one of you software
developers could do; a rough sketch of the arithmetic follows below this
list).
* players can be unable to play in sync
* what is beautiful for one might be horrible for another one
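Regarding the audio-interface latency point above, here's a rough sketch
of the compensation a client could apply. The buffer size, sample rate and
the assumption of one input plus one output buffer are example values;
real ASIO drivers report more exact latency figures than this estimate:

    sample_rate = 44100   # Hz (example)
    buffer_frames = 256   # ASIO buffer size (example)
    passes = 2            # roughly one input buffer plus one output buffer

    latency_ms = passes * buffer_frames / float(sample_rate) * 1000.0
    print("Estimated interface latency: %.1f ms" % latency_ms)
    # A client could shift the uploaded interval by this amount so that
    # everybody's material lands on the server's beat grid.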
BTW, all you software developers out there (including but not limited to
Os):
I'd still like a Ninjam client OR a VST plugin (ideally both) that has the
following features:
* ASIO support (only for the standalone client) (priority 1 - this is
already available in the normal client)
* can output MIDI clock (priority 1 - this is available in the wasabi
client)
* has an additional setting for audio hardware latency to compensate
for that (priority 3)
* independent routing of audio output including metronome (priority 2)
* running on XP platform (priority 1)
Looking forward to your implementations!
Rainer