
How does CD technology keep up with the byte stream and FEC? Does the disc rotate faster, or does the pickup seek ahead further, than the audio being played?

I’m curious because I understand the audio unit is two bytes (16 bits), and with its error coding (Reed–Solomon) there is a parity byte added for every three bytes. I’m unsure whether this is Hamming or convolutional FEC, because it’s a parity byte and Wikipedia doesn’t elaborate on the encoding.

It would seem the technology needs to read four bytes for each sample, and when that sample is played, the next four bytes must already be available in a buffer. I also read that the disc rotates at a different speed as the optical sensor approaches the origin, maybe to compensate for the change in angular velocity. Given this, it would seem either the disc needs to run four times as fast, or n×4 bytes must be buffered if it rotates at the same speed on each revolution.


3 Answers


Yes, disc players buffer ahead; why shouldn't they? A couple of kilobytes of buffer wasn't much of a problem even in the relatively early days of the CD, and the Discman (Sony's portable player) was famous for having enough buffer to survive fairly hefty jogs.

CD Audio, and the CD in general, uses Reed–Solomon codewords and interleaves them over much, much more than a 2-sample distance; only a really tiny scratch would be correctable if your error-correction words were each concentrated in 32 bits! I'm not sure why you think a 4 B buffer was a problem; it isn't. You don't need to read faster. You do need, at the beginning of playback, a full deinterleaver length plus the decoding latency in read-ahead, so that the next decoded (and potentially error-corrected) word is ready when the current buffer is done playing back.

There's no need for seeking back and forth. The player simply reads linearly, into a buffer of 32 symbols (8 bits each) of the first GF(2^8) (32,28) Reed–Solomon code; then the 28 symbols (8 bits each) of the second (28,24) RS code are decoded into 24 data symbols (8 bits each), which get pretty much butterfly-reordered (for maximum burst-error dispersion!), and that gives you 12 samples (of 16 bits each), i.e., 6 samples per channel. The Red Book / IEC 60908:1987 (or :1999, depending on what you have been reading) actually has pretty illustrative figures (Fig. 12 and 13) of that!
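
For intuition, here is a rough sketch of that decode pipeline in Python. Only the symbol counts are real: the decode functions are stand-ins that strip parity rather than doing actual GF(2^8) Reed–Solomon math, and the unequal delay lines (the deinterleaver) that sit between C1 and C2 in real hardware are omitted.

```python
# Sketch of the CIRC decode data flow for one frame (symbol counts only).
# The rs_decode_* functions are placeholders: a real decoder performs
# GF(2^8) Reed-Solomon error correction; these just strip the parity
# bytes to show the sizes at each stage.

def rs_decode_c1(frame: bytes) -> bytes:
    """C1, (32,28): 32 symbols in, 28 out; corrects up to 2 byte errors."""
    assert len(frame) == 32
    return frame[:28]                      # placeholder for the real decode

def rs_decode_c2(frame: bytes) -> bytes:
    """C2, (28,24): 28 symbols in, 24 out; up to 4 erasures with C1's help."""
    assert len(frame) == 28
    return frame[:24]                      # placeholder for the real decode

def decode_frame(channel_frame: bytes) -> list:
    """One channel frame -> 6 stereo samples (12 x 16-bit words).
    Byte order and sample ordering are illustrative, not Red Book exact;
    the deinterleaving between C1 and C2 is also omitted here."""
    data = rs_decode_c2(rs_decode_c1(channel_frame))
    samples = []
    for i in range(0, 24, 4):              # 4 bytes = one L+R sample pair
        left = int.from_bytes(data[i:i + 2], "little", signed=True)
        right = int.from_bytes(data[i + 2:i + 4], "little", signed=True)
        samples.append((left, right))
    return samples

print(decode_frame(bytes(32)))             # [(0, 0), (0, 0), ...] x 6
```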

  • So a sample, or 4 bytes, is 1 second of audio. If the disc is encoded at a constant linear velocity, and there is only 6–60 seconds of buffer, how is the buffer not exceeded? I suspect the buffer is filled and the optical arm doesn't advance to the next track line on the disc, staying still until there is more buffer space? I understand the rotation speed slows down as the track is read from the center outward.
    – user22646
    Commented 11 hours ago
  • @user22646 4 bytes is one sample interval of 1/44.1 kHz (about 22.7 µs), not 1 second. A FIFO buffer has almost-empty/almost-full flags that tell the servo controller when to start and stop buffered transfers. A large FIFO can ride out bumps and re-seeks but adds latency; a small one cannot. That should resolve your confusion. Commented 9 hours ago
  • This technology was developed in the '70s. Getting error-corrected data frames out, to even have playable audio, does not require more than a kilobyte or two of memory, which is what the earliest CD players had. Any audio buffering beyond that is just extra, so the player can deal with vibration, or save power by turning the motor off and only periodically refilling the audio buffer.
    – Justme
    Commented 9 hours ago
  • HDDs used ECC before LDs & CDs were invented. They could correct up to 11-bit burst errors per sector. Commented 9 hours ago
  • In CIRC, C1 can correct up to 2 error bytes per 32-byte block; C2 spans multiple blocks through interleaving and can correct up to 4 error bytes per 28-byte block. Larger errors are concealed by interpolation; overall, bursts up to about 3,500 bits (2.4 mm of track length, i.e., the width of a scratch) can be corrected. Commented 9 hours ago

A couple of points:

  • A CD is read starting from the center. The track is recorded at constant linear velocity, which means that the rotation actually slows down as the head moves out toward the edge of the disk.

  • The flow of CD-quality audio is 44,100×4 bytes per second, or 176.4 kB/sec.

  • There is a lot of buffering in a CD player, as much as a megabyte or more (6 to 60 seconds of music) in ruggedized portable units; see the quick check after this list. The audio you hear is not tightly coupled to the data coming off the disk.
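
For reference, here is that arithmetic as a quick Python check (nothing assumed beyond the figures above):

```python
# Quick check of the data rate and buffer sizes quoted above.
rate = 44_100 * 4                           # bytes per second of 16-bit stereo
print(rate, "B/s =", rate / 1000, "kB/s")   # 176400 B/s = 176.4 kB/s

for seconds in (6, 60):
    print(seconds, "s of audio =", round(rate * seconds / 1e6, 1), "MB")
# 6 s ~ 1.1 MB, 60 s ~ 10.6 MB
```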

When you start playing a track, the CD spins up and starts loading error-corrected data into the playback buffer. When the buffer contains a certain minimum amount of data, a separate process starts playing the data through the DAC.

Meanwhile, the disk-reading logic is trying to keep the data buffer full — even if the unit gets bumped hard enough to require a re-seek to the correct track and re-synchronization to the data stream, the time for which is covered by the data already in the buffer. The raw data-reading process can be made significantly faster than the flow of data to the DAC so that it can recover from gaps in the reading process.
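
As a minimal sketch of those two decoupled processes: the watermark values and callback names below are made up for illustration, not taken from any real player firmware.

```python
from collections import deque

BUFFER_MAX = 2 * 1024 * 1024       # hypothetical playback buffer: 2 MB
START_LEVEL = 256 * 1024           # pre-fill level before the DAC starts

fifo = deque()                     # decoded, error-corrected audio blocks
buffered = 0                       # bytes currently in the FIFO
playing = False

def reader_step(read_next_block, set_speed):
    """Disk side: keep the FIFO topped up; back off when it is full."""
    global buffered
    if buffered >= BUFFER_MAX:
        set_speed("slow")          # or pause reading: don't overrun the FIFO
        return
    set_speed("fast")              # read faster than real time
    block = read_next_block()      # one error-corrected block off the disk
    fifo.append(block)
    buffered += len(block)

def player_step(dac_write):
    """DAC side: drain at exactly 176.4 kB/s once pre-filled."""
    global buffered, playing
    if not playing and buffered < START_LEVEL:
        return                     # still pre-filling; stay silent
    playing = True
    if not fifo:
        return                     # underrun: reader fell too far behind
    block = fifo.popleft()
    buffered -= len(block)
    dac_write(block)               # paced by the audio sample clock
```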

  • If the buffer is filled, does the optical sensor just stay focused on the current track position instead of moving outwards on the disc? Otherwise it seems the buffer will be exceeded if the optical sensor keeps reading more of the disc. That's my confusion.
    – user22646
    Commented 11 hours ago
  • This technology was developed in the '70s. Getting error-corrected data frames out, to even have playable audio, does not require more than a kilobyte or two of memory, which is what the earliest CD players had. Any audio buffering beyond that is just extra, so the player can deal with vibration, or save power by turning the motor off and only periodically refilling the audio buffer.
    – Justme
    Commented 9 hours ago
  • No, the raw data rate is simply reduced (by reducing the disk speed) as the buffer fills up.
    – Dave Tweed
    Commented 13 mins ago

You are mixing two concepts that work on completely different levels.

The first level is how to read data from the disc and perform the error correction needed to play the audio stream live, directly at the original speed, without extra buffering. The second level is simply how to read it at a higher speed, perhaps, and store it in any amount of memory so it can be played from that buffer.

When playing audio directly from the disc at standard speed, a feedback mechanism rotates the disc so that the channel bit rate is 4.3218 Mbps. That data stream is a continuous stream of 588-bit channel frames, each carrying 32 data bytes: 24 bytes of audio plus 8 ECC bytes.
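
Those numbers cross-check with simple arithmetic; the snippet below assumes nothing beyond the frame layout just described:

```python
# Cross-check the frame numbers against the audio rate.
audio_rate = 44_100 * 4              # bytes of audio per second
frames_per_sec = audio_rate / 24     # 24 audio bytes per channel frame
print(frames_per_sec)                # 7350.0 frames per second
print(frames_per_sec * 588 / 1e6)    # 4.3218 -> Mbps of channel bits
```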

But those physical audio bytes are not consecutive logical bytes of the audio sample stream. Continuous audio data is interleaved over multiple channel frames to allow for scratches, manufacturing defects, etc. That is why the channel-frame bytes are first read into a small memory buffer of only a few kilobytes and then processed with error correction; once enough channel frames have been received, the continuous audio data can be played out by reading the buffer memory according to the interleave pattern.
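
To see why the interleaving helps, here is a toy delay-line interleaver in Python. It is far shorter than the real CIRC delays (which span on the order of a hundred frames) and purely illustrative:

```python
# Toy delay-line interleaver: byte position i of each frame is delayed
# by i frames, so a burst that destroys one frame on the disc costs only
# one byte from each of several original frames - well within what the
# RS code can correct.
FRAME = 4                            # toy frame width (CIRC uses 28 here)

def interleave(frames):
    """Delay byte i of each frame by i frame periods (zero padding)."""
    out = [[0] * FRAME for _ in range(len(frames) + FRAME - 1)]
    for t, frame in enumerate(frames):
        for i, b in enumerate(frame):
            out[t + i][i] = b
    return out

frames = [[10 * t + i for i in range(FRAME)] for t in range(4)]
tx = interleave(frames)
tx[2] = ["X"] * FRAME                # a scratch wipes one whole disc frame
for row in tx:
    print(row)
# Frames 0, 1 and 2 each lose only a single byte.
```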

There are two ways to extend that read-ahead buffer: either enlarge the required ECC and de-interleaving memory to whatever size the chipset supports, or put another chipset and memory after it to get a buffer of arbitrary length.

Of course, in both cases the CD data needs to be read in faster than real time to fill the buffer, and to retry any bad frames if necessary.

If there is no extra memory for buffering the interleaved channel data stream or de-interleaved audio, the disc cannot be rotated faster than it is played back.

