
Re: AW: Korg MR-1 recorder



At 5:40 PM +0100 11/27/07, Rainer Thelonius Balthasar Straschill wrote:
>
>DSD is a delta-sigma (1bit) conversion technology which samples at 'round
>2.88MHz. So how does this work, compared to typical PCM (e.g. CD, DVD)
>audio?

Now, I believe this type of one-bit sampling technology has actually 
been around since the late 80's.  IIRC, there was some sampling 
technology touting one-bit conversion (Yamaha TX16W perhaps?) with 
ties back to the original Sequential Circuits engineering team, 
which went first to Yamaha and then to Korg when SCI went out of 
business in the mid-80's.

This is one of those many occasions where I really miss Dr. Zvonar, 
dammit.  :(

>Is it worth four times the price?

Well, if you believe the marketing.  I think I've got a couple of 
evaluation points worth considering.  That's not to say they aren't 
addressed by current DSD technology, but I feel they're certainly 
worth having asked and answered.

Okay, I am pulling up some crap from my severely mis-fired and 
perhaps slightly drug-addled memory cells of twenty years ago, so 
take the historical part with a grain of salt.

The main issue levelled at one-bit recording back in the 80's, if I 
remember my history, was substandard transient response.  For 
instance, if you record in 16 bits, it's possible for one value to 
be 0000000000000000 while its neighboring value spikes immediately 
to 1111111111111111 -- no interpolation needed in between.  Take 
that same transient recorded one-bit style and illustrate it in the 
same 16-bit terminology, and it translates to an initial value of 
0000000000000000, followed by 0000000000000001, then 
0000000000000010, and so on until you finally reach 
1111111111111111.  There's the potential to observe a softening of 
transients, which could result in a digital compression that might 
be pleasing or might be irritating -- which one, I don't know. 
Regardless, it is another type of digital distortion.
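That "one step at a time" picture is, strictly speaking, how plain 
delta modulation behaves rather than the noise-shaped delta-sigma 
used in DSD, so take this as a sketch of the simplified model above, 
not of an actual DSD converter:

```python
# Sketch of the simplified "one step per sample" model described
# above (plain delta modulation, NOT real noise-shaped delta-sigma).
# The decoded value can only climb one fixed step per sample, so a
# hard transient turns into a linear ramp.

def delta_mod(signal, step):
    """Track `signal` one +/-step at a time; return the decoded ramp."""
    est = 0.0
    decoded = []
    for x in signal:
        est += step if x >= est else -step  # one bit: up or down only
        decoded.append(est)
    return decoded

# A hard transient: silence, then an instant jump to full scale.
step = 1 / 256                       # 8-bit-sized steps keep the ramp short
signal = [0.0] * 10 + [1.0] * 300
decoded = delta_mod(signal, step)

print(max(decoded[:10]))   # still hovering near zero before the jump
print(decoded[100])        # partway up the ramp, ~0.36
print(decoded[280])        # finally saturated near full scale
```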

The supposed workaround for this is that, as you pointed out, the 
sampling rate is phenomenally high.  These days, it's certainly much 
higher than was ever possible back when DSD was first put out.  The 
way it's supposed to work is that the super-fast sampling rate 
compensates for having to approach these bit values "one step at a 
time".  Does it work?  Again, I'm not making a value judgment here. 
But try to approach your critical listening tests with that 
*potential* fault in mind.
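Taking that simplified one-step-per-sample picture completely 
literally, the arithmetic is easy to run.  (Real noise-shaped 
conversion doesn't behave this crudely, so this only bounds the 
simplified model; 2.8224MHz is the standard DSD rate, 64 x 44.1KHz.)

```python
# Back-of-envelope check of the "one step at a time" worry, taking
# the simplified one-LSB-per-sample model literally.

DSD_RATE = 2_822_400          # 64 x 44.1 kHz, the usual DSD rate
PCM_RATE = 44_100
FULL_SCALE_STEPS = 2**16 - 1  # 16-bit LSB steps from min to max

worst_case_slew = FULL_SCALE_STEPS / DSD_RATE   # seconds, simplified model
pcm_sample_period = 1 / PCM_RATE

print(f"simplified full-scale slew: {worst_case_slew*1000:.1f} ms")
print(f"one PCM sample at 44.1 kHz: {pcm_sample_period*1e6:.1f} us")
```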

The next possible criticism is much more modern, and dovetails with 
an argument put forward by Dan Lavry of Lavry Engineering in his 
white paper here: 
http://www.lavryengineering.com/documents/Sampling_Theory.pdf

Now, for those of you who don't know, Lavry Engineering makes 
*extremely* high-end A/D/A converters.  These are some of the 
converters favored by people who think even the top-of-the-line 
Apogees are crap.  So, if you're a mastering engineer and wanna 
spend in the neighborhood of five figures on top-end converters, go 
Lavry.  In other words, IMNSHO, the guy knows his sh**.

In this paper, he explains (among other things) why 192KHz sampling 
rates are nothing more than industry hype.  In fact, Dan Lavry 
refuses to support 192K, and even argues that it is inferior to a 
good-quality converter operating at 88.2K or 96K.  Why?  Because, 
amongst many reasons, the tolerances of modern electronics 
(especially mass-produced designs) can't really keep up with 192K as 
a stable rate.
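For the bandwidth half of that argument, the Nyquist arithmetic is 
easy to check yourself.  (The 20KHz hearing limit is the usual 
textbook figure, not a number pulled from Lavry's paper.)

```python
# Shannon/Nyquist: a sampling rate of fs captures content up to fs/2.
# 192 kHz buys bandwidth far beyond hearing; the cost is stability.

HEARING_LIMIT_HZ = 20_000   # commonly cited upper limit of human hearing

for fs in (44_100, 88_200, 96_000, 192_000):
    nyquist = fs / 2
    headroom = nyquist / HEARING_LIMIT_HZ
    print(f"{fs:>7} Hz -> Nyquist {nyquist/1000:5.1f} kHz "
          f"({headroom:.2f}x the 20 kHz hearing limit)")
```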

The main operative point here is "stable".  Micro-fluctuations 
induced by the components start to really come out when they are 
driven at such a high rate.  I think it's roughly analogous to 
driving a 30 watt guitar amp full-on at 30 watts ("mine goes to 11", 
in other words).  What happens is that you end up with distortion, 
where the guitar waveform becomes clipped off so that it's 
transformed into more of a square wave at the peaks.  Now, of course, 
on a guitar amp this distortion is pleasing, and amps are actually 
designed to take advantage of this.  On a digital converter, however, 
distortions produced by having to perform near the limits of 
component tolerance are not nearly so desirable.

This can also be backed up by some of Bob Katz's jitter tests (I'm 
reading his excellent "Mastering Audio" book right now).  Katz found 
that, in evaluating clock specs, what mattered was not really the 
ability of the clock to generate a spot-on 44.1KHz (or whatever 
rate).  Rather, what mattered more to the converters and the overall 
quality of the sound was the *stability* of the clock source.  In 
other words, it didn't matter quite so much if the clock actually 
ran at, say, 43.915KHz, as long as it was stable at that rate and as 
free of jitter as possible.

This points out again that what matters a great deal is the stability 
and consistency of the sampling rate, not so much the speed of the 
rate itself.
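The same point can be put in numbers with the standard back-of-the-
envelope jitter formula (the figures below are illustrative, not 
Katz's measurements): sampling a full-scale sine at frequency f with 
timing jitter t_j caps the achievable SNR at roughly 
-20*log10(2*pi*f*t_j) dB.  Notice that the absolute rate -- 
43.915KHz vs. 44.1KHz -- never appears in the formula; only the 
jitter does.

```python
# Jitter-limited SNR ceiling: the sampling error is the signal slope
# times the timing error, which for a sine of frequency f and rms
# jitter t_j works out to an SNR cap of about -20*log10(2*pi*f*t_j).

import math

def jitter_limited_snr_db(f_hz, jitter_s):
    """Approximate SNR ceiling imposed by rms clock jitter."""
    return -20 * math.log10(2 * math.pi * f_hz * jitter_s)

for tj in (1e-9, 100e-12, 10e-12):   # 1 ns, 100 ps, 10 ps rms jitter
    snr = jitter_limited_snr_db(10_000, tj)   # 10 kHz test tone
    print(f"{tj*1e12:6.0f} ps jitter -> SNR ceiling ~{snr:5.1f} dB")
```

Every tenfold reduction in jitter buys about 20 dB of headroom, which 
is why clock stability specs dominate the converter's noise floor 
long before the nominal rate matters.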

So, bringing it back to DSD....  Now you've got a converter that is 
performing at 2.88MHz.  Is that going to be similarly susceptible to 
component tolerance factors?  Again, I don't know for certain, but 
that's got to be a factor to consider when evaluating this technology.

        --m.
-- 
_____
"take one step outside yourself. the whole path lasts no longer than 
one step..."