[sdiy] CD sound quality evolution

Brian Willoughby brianw at audiobanshee.com
Mon Feb 15 22:52:09 CET 2021



On Feb 15, 2021, at 10:32, Mike Bryant wrote:
> No.  Because the 16 (sic) bit Burr-Brown DACs in every CD player were crap,

I don't know ;-)  The Burr-Brown 16-bit DAC chips were quite good ... a lot better than the 14-bit DAC chips that shuffled across four quantization step sizes to make up for the fact that full 16-bit resolution was not affordable yet.
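
For anyone curious how a 14-bit converter can approach 16-bit performance, here's a minimal sketch (in C) of the general error-feedback, first-order noise shaping idea, run at the oversampled rate. It's only an illustration of the technique, not the actual circuit inside those chips:

/* Hypothetical sketch: first-order error-feedback requantization,
 * the general trick for getting close to 16-bit behavior out of a
 * 14-bit DAC running at an oversampled rate. Not the actual Philips
 * or Burr-Brown implementation -- just the idea. */
#include <stdint.h>

int16_t requantize_to_14bit(int32_t in16, int32_t *err)
{
    int32_t x = in16 + *err;         /* fold in the previous rounding error    */
    if (x > 32767) x = 32767;        /* guard against overflow at full scale   */
    int32_t q = x & ~3;              /* keep only the top 14 of the 16 bits    */
    *err = x - q;                    /* carry the new error to the next sample */
    return (int16_t)q;               /* value actually sent to the 14-bit DAC  */
}

The rounding error from each sample is folded into the next one, so the average over the oversampled steps tracks the full 16-bit value, and the leftover noise gets pushed up in frequency where the analog reconstruction filter removes it.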

I've heard some great BB 16-bit DAC setups. It probably has more to do with the source material than the chip quality. Burr-Brown were the golden reference for about a decade or more. Admittedly, the digital, conversion, and analog reconstruction circuits have all improved massively since those days.


> the ADCs in the digitization studio weren't much better, and good compressors were rare so a lot of CDs were recorded with so much headroom lots of the time the output was down towards the noise floor.  That's why many CDs were remastered in the 1990s.

It took the industry quite a while to learn how digital really works. The headroom is not really the issue. The real issue is the lack of good dithering. Modern 24-bit mastering with proper dithering can achieve excellent audio quality without dynamics compression. You just have to remove the correlated quantization noise by using proper dither. The human hearing system finds correlated quantization noise highly objectionable, but uncorrelated noise appears all around us in nature, and our brains can easily ignore it. See just about any of the Audio Engineering Society papers by John Vanderkooy and/or Stanley P. Lipshitz.
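
As a concrete illustration, here's a minimal sketch of TPDF dither applied before truncating a 24-bit sample to 16 bits. It assumes signed 24-bit samples held in an int32, and it uses rand() purely as a stand-in noise source; a real mastering chain would use a better generator, usually with noise shaping on top:

/* Sketch only: TPDF dither before 24-bit -> 16-bit truncation.
 * One LSB of the 16-bit output equals 256 counts of the 24-bit input. */
#include <stdint.h>
#include <stdlib.h>

int16_t dither_to_16bit(int32_t in24)   /* in24: signed 24-bit sample */
{
    /* TPDF dither: sum of two independent uniform values, giving a
     * triangular distribution spanning +/- 1 LSB of the 16-bit target. */
    int32_t d = (rand() % 257) + (rand() % 257) - 256;
    int32_t x = in24 + d;

    /* Clamp so the rounding below cannot overflow 16 bits. */
    if (x >  0x7FFFFF - 128) x =  0x7FFFFF - 128;
    if (x < -0x800000 + 128) x = -0x800000 + 128;

    /* Round to the nearest 16-bit step and drop the bottom 8 bits. */
    return (int16_t)((x + 128) >> 8);
}

With the dither in place, the residual error becomes a constant, signal-independent noise floor instead of distortion that tracks the audio, which is exactly the point those Vanderkooy/Lipshitz papers make.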

> Also remember the Phil Collins debacle where the CD stopped playing half way through because a particular bit sequence matched the end of track flag.

Can you provide any detailed references for this?

I have studied the CD Red Book specification for years, and there is no possibility of an audio bit sequence getting confused with end of track flags. The TOC (Table Of Contents) is one source of track start time stamps, and it's completely separate from the audio. There is also the 8-bit subcode control data that includes a bit for track audio versus silence (track start and end), but this is also completely separate from the audio data.
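
To make that separation concrete, here's a rough sketch (in C) of the position information carried in the Q subcode channel when ADR = 1. The field names are mine, and the real thing is packed as BCD nibbles with a CRC -- see the Red Book / IEC 60908 for the exact layout -- but the point is that all of this travels alongside the audio frames, never inside the 16-bit sample data:

/* Rough sketch of the Q-subcode position data on a CD-DA disc.
 * Field names are hypothetical; consult IEC 60908 for the real layout. */
#include <stdint.h>

struct q_position {
    uint8_t  control;            /* 4 bits: audio/data, pre-emphasis, copy   */
    uint8_t  adr;                /* 4 bits: 1 = position information         */
    uint8_t  track;              /* track number, BCD (00 = lead-in)         */
    uint8_t  index;              /* index within the track, BCD (00 = pause) */
    uint8_t  min, sec, frame;    /* running time within the track, BCD       */
    uint8_t  amin, asec, aframe; /* absolute running time on the disc, BCD   */
    uint16_t crc;                /* CRC-16 over the Q data                   */
};

A player decides "end of track" from this stream (and from the TOC it reads out of the lead-in), not from anything in the audio samples.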

In other words, CDDA can include literally any pattern of 16-bit stereo 44.1 kHz digital audio without any risk of "end of track" issues. If there is an issue, it's in the non-audio metadata. Of course, there are a few CD players, at least old ones, that misinterpret the CDDA spec and might have problems. The CDDA spec itself is a bit vague, while also somewhat redundant, leading manufacturers to misinterpret things that weren't 100% spelled out.

It's entirely possible that the Phil Collins CD was not Red Book compliant.

Where you can get into trouble with audio data is with encodings that go beyond CDDA. I'm a huge fan of DTS Surround discs. I licensed the DTS encoder so I can produce my own surround mixes. The problem here is that the "digital audio" is no longer real audio. There are bit patterns that identify DTS, and restrictions on the amplitude, so decoders that support surround can have problems if they get confused by pure CDDA that happens to include a special sequence of samples. In these cases, you have to take care to turn off the surround detection on CDDA sources to avoid the rare chance of glitches.
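
For illustration, a detector of that kind basically scans the PCM byte stream for the DTS core sync word, 0x7FFE8001 in the 16-bit big-endian framing. The check_dts_sync() helper below is hypothetical, not any real player's API, and a real detector would also validate the frame header and wait for repeated sync words before switching modes:

/* Hedged sketch: look for the DTS core sync word in raw PCM bytes. */
#include <stddef.h>
#include <stdint.h>

static int check_dts_sync(const uint8_t *bytes, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i++) {
        if (bytes[i]     == 0x7F && bytes[i + 1] == 0xFE &&
            bytes[i + 2] == 0x80 && bytes[i + 3] == 0x01)
            return 1;   /* looks like the start of a DTS core frame */
    }
    return 0;           /* no sync word: treat the data as plain PCM */
}

Since nothing stops ordinary music from containing those four bytes somewhere, a single match isn't proof of DTS, which is why it's safest to disable the detection entirely for known-CDDA sources.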


> Everybody was saying "why the f*** didn't they just wait until 20 bits was available".  Quite a few artistes refused to do CDs until the quality improved when ADC dithering and 1 bit DACs came along.

Yep. It actually took a while for hardware to evolve from 14-bit to full 16-bit. I don't think anyone anticipated 20-bit or 24-bit improvements, especially not the ability to use dither to squeeze 24-bit performance into 16-bit CDDA. But the industry has learned so much as digital has matured.

Brian




