
EMUSIC-L Digest                                      Volume 54, Issue 02

This issue's topics:
	
	cd encoding (46 messages)
	copy protection

Your EMUSIC-L Digest moderator is Joe McMahon .
You may subscribe to EMUSIC-L by sending mail to listserv@american.edu with 
the line "SUB EMUSIC-L your name" as the text.
 
The EMUSIC-L archive is a service of SunSite (sunsite.unc.edu) at the 
University of North Carolina.
------------------------------------------------------------------------
Date:         Mon, 5 Jul 1993 18:29:18 +0000
From:         Nick Rothwell 
Subject:      Re: European CDs

The CD audio format is universal. I mean, you can buy European CD's over
there, can't you? I have a pile of US CD's, and even some badged in Russia.
They all come out of the same small number of pressing plants.

The only incompatibility I know of is the pricing.

                        Nick Rothwell   |   cassiel@cassiel.demon.co.uk
     CASSIEL Contemporary Music/Dance   |   cassiel@cix.compulink.co.uk

------------------------------
Date:         Wed, 14 Jul 1993 13:04:43 EDT
From:         Simon Weatherill 
Subject:      CD Sampling rates

I have a CD player that has 8 times oversampling.  What does that
mean?  I've also heard of one bit oversamplers.  This one I think I can
figure out.  You *have* to oversample if you're decreasing the number
of bits.  Right?

I'm sure someone out there knows...

                                Thanks,

                                Simon A.T. Weatherill
                                Senior Network Engineer

+---------------------------------------------------------------+
+ Burlington Coat Factory      Voice: (603) 643-2800            +
+ Schoolhouse Lane               Fax: (603) 643-3945            +
+ Etna, NH 03750            Internet: simon.weatherill@coat.com +
+---------------------------------------------------------------+

------------------------------
Date:         Wed, 14 Jul 1993 14:02:23 EDT
From:         ronin 
Subject:      oversampling

oversampling is a variant of digital filtering in which zeroes are
inserted into the data stream after it has been read from the storage
medium, and the rest of the processing takes place at the new rate
required by the insertion ratio. if, for instance, you insert one
zero for every sample, then the rest of the processing chain must
work at twice the input sampling rate in order to get the data through
at the same throughput rate. output lowpass filtering, when applied
to this 'oversampled' data stream, produces a waveform of approximately
the resolution that would have been achieved had the source material
originally been sampled at that higher rate.
while it is true that, as was pointed out, a onebit system must necessarily
run at a higher rate in order to reproduce highres audio, onebit oversampling
refers to the above process applied to the already highrate onebit stream.
in other words, zeros are inserted between bits.
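
Here is a minimal sketch in C of just the zero-insertion step, assuming a 4x
ratio (the names and data below are made up purely for illustration, not taken
from any real player):

/* zero-stuffing sketch: insert RATIO-1 zeros after every input sample,
   so everything downstream runs at RATIO times the input rate */
#include <stdio.h>

#define RATIO 4                  /* 4x oversampling, for illustration */
#define NIN   8                  /* number of input samples in this demo */

int main(void)
{
 float in[NIN] = {0.0f, 0.7f, 1.0f, 0.7f, 0.0f, -0.7f, -1.0f, -0.7f};
 float out[NIN * RATIO];
 int i, k;

 for (i = 0; i < NIN; i++) {
  out[i * RATIO] = in[i];              /* the original sample */
  for (k = 1; k < RATIO; k++)
   out[i * RATIO + k] = 0.0f;          /* the inserted zeros */
 }
 for (i = 0; i < NIN * RATIO; i++)
  printf("%d %f\n", i, out[i]);
 return 0;
}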

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Wed, 14 Jul 1993 14:50:09 EDT
From:         Simon Weatherill 
Subject:      Re: oversampling

> oversampling is a variant of digital filtering in which zeroes are
> inserted into the data stream after it has been read from the storage

Zeros(!!) are added.  Won't this distort the sound?  How about
sample-and-hold - just keep repeating the same value until you're ready
to play the next?


                                Simon A.T. Weatherill
                                Senior Network Engineer

+---------------------------------------------------------------+
+ Burlington Coat Factory      Voice: (603) 643-2800            +
+ Schoolhouse Lane               Fax: (603) 643-3945            +
+ Etna, NH 03750            Internet: simon.weatherill@coat.com +
+---------------------------------------------------------------+

------------------------------
Date:         Wed, 14 Jul 1993 21:09:39 EDT
From:         mbartkow@GWENDU.ENST-BRETAGNE.FR
Subject:      Re: oversampling

Don't you see, this is the all-too-common case of a scientific term meaning
one thing and a trade mark meaning another?
BTW, the inserted blank samples are then interpolated by the low-pass filtering.

Maciej

------------------------------
Date:         Thu, 15 Jul 1993 08:07:13 -0500
From:         "David C. Bloom" 
Subject:      Re: oversampling

>
> > Zeros(!!) are added.  Won't this distort the sound?  How about
> > sample-and-hold - just keep repeating the same value until you're ready
> > to play the next?
>
> Sounds logical, but apparently this wouldn't have the desired effect of
> increasing the effective sampling rate.
>
> There are two hard bits in converting digital audio signals to analogue:
> linearity of the A/D converters and the need for an analogue filter which
> passes everything below sampling_frequency/2, and nothing above that frequency.
> `1-bit' technology overcomes the first problem by putting out identically-
> shaped pulses at a varying rate (instead of variable-height at a constant
> rate), and `oversampling' by interpolating dummy samples and using the extra
> headroom gained over the top of the audio range to simplify the output filter.


Gang__  There seems to be some confusion between sampling and encoding.
Sampling is the A/D process, which puts out chunks of 16- [CD], 14- [Linn],
or 8- [early ensoniq] bit samples every so often.  What happens next is more
a communications issue than signal processing.

If you're going to stuff data on some medium [a wire, a tape, a disc or CD],
the data get _encoded_.  They don't just store 16-bit words, so that every
16th bit is the most-significant-bit, etc.  The data typically go thru pulse-
code, FSK, manchester or some other modulating algorithm, and a string of
one's and zero's gets written to the medium.  Then, the demodulator acts on
arbitrary chunks of this bit-stream and reconstructs samples from it, which
are then fed to the D/A to make sound.  The chunks may be 18- [sony], 16-
[common CD], or even 1-bit long [the "1-bit technology" you refer to].

This is all transparent to most systems, as the communications subsystem
generally comes complete from the drive or modem manufacturer with a data
interface, but no medium-level interface per se.  All that encoding and
decoding is a mess anyway.  It's fun for tweaks to talk about, a bitch for
engineers to develop, but it's really irrelevant.  Like someone's Arthur C.
Clarke sigfile said, "Anything sufficiently technically complex is magic."

So Eric's right-on-the-money when he talks about inserting zero's [or one's
for that matter] in the bitstream.  [Back me up on this one, Simon, you're
a network engineer. :->]  There's no distortion, cuz the decoder just treats
them as background DC-bias, which never makes it past the D/A anyhow.  No
data are being invented here, it's just like "stretching it", and when you
decode the "stretched" bitstream, you get back the original regardless.  A
S/H could never do this, as it violates Nyquist.  You can't extrapolate data
without aliasing, introducing harmonics or some other kind of distortion.
Now if you were to insert those zero's in the _samples_, you'd have some
serious problems.  :->  __David

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
<>         david c. bloom           <>                                      <>
<> open networks engineering, inc.  <>  What is the price of an afternoon   <>
<> 777 e. eisenhower pkwy, ste 650  <>  when a small girl is soothed in     <>
<>    ann arbor, michigan  48108    <>  your arms, when the sun bolts       <>
<><><><><><><><><><><><><><><><><><><>  through a doorway and both you      <>
<>  net  dcb@one.com    <>   \ \    <>  and the the child are very young?   <>
<>  vox  313.996.9900   <>    0-0   <>                   __Dorothy Evslin   <>
<>  fax  313.996.9908   <>     .    <>                                      <>
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>

------------------------------
Date:         Thu, 15 Jul 1993 09:25:10 -0500
From:         Mike Clemens 
Subject:      Re: CD Sampling rates

simon.weatherill@coat.com writes:
>I have a CD player that has 8 times oversampling.  What does that
>mean?  I've also heard of one bit oversamplers.  This one I think I can
>figure out.  You *have* to oversample if you're decreasing the number
>of bits.  Right?

[please correct if any of the following is wrong!]

I *think* this has to do with error-correction.  The CD player reads
in a stream of bits, then "oversamples" the stream, replacing bits
that didn't match the original read.  I think the # of times refers to
how many times the player will read a section of disc before
converting it to sound, and the # of bits refers to how wide a section
it checks.  So,

One bit, 8 times oversampling: checks one bit at a time, each bit is
checked 8 times

8 bit oversampling: checks 8 bit chunks, each chunk is checked once

I'm a little vague on the actual process used to determine which bit
is correct: how does the one-bit oversampler know that the original
bit is correct and all others are wrong, for example?  I know that
such issues come up in computer science, and typically certain bits
are reserved as "check" bits, to verify the status of other bits in
the string.  I imagine such a thing is true for CDs as well.

- Mike

------------------------------
Date:         Thu, 15 Jul 1993 18:19:00 EDT
From:         John Rossi III 
Subject:      Re: oversampling

I thought the whole concept of oversampling was arrived at to achieve a
higher cutoff for the brickwall antialiasing filter (which high end
stuff presumably does with an analog filter).  The way some guy at Denon
explained it to me was that if you did not oversample you would need to have
a brickwall filter around 21 KHz.  Four times oversampling effectively
increased the Nyquist frequency to 80 KHz so that a less steep analog filter
could be used in order to avoid phase shifts.  Have I been believing
a lie for all these years?

John

------------------------------
Date:         Thu, 15 Jul 1993 18:48:02 EDT
From:         mbartkow@GWENDU.ENST-BRETAGNE.FR
Subject:      Re: CD Sampling rates

> I *think* this has to do with error-correction.  The CD player reads
> in a stream of bits, then "oversamples" the stream, replacing bits
> that didn't match the original read.  I think the # of times refers
> to how many times the player will read a section of disc before
> converting it to sound, and the # of bits refers to how wide a
> section it checks.  So,

> One bit, 8 times oversampling: checks one bit at a time, each bit is
> checked 8 times

> 8 bit oversampling: checks 8 bit chunks, each chunk is checked once

What a horrible idea, Mike! Of course it's not true!
Error correction by re-reading is sometimes used with magnetic storage,
but that's a long way from how CDs work. I recommend reading something
basic about digital audio processing.

Maciej

------------------------------
Date:         Fri, 16 Jul 1993 07:48:17 -0400
From:         Chris Gray 
Subject:      Re: oversampling

> Gang__  There seems to be some confusion between sampling and encoding.

There wasn't, until you introduced it.

> Sampling is the A/D process, which puts out chunks of 16- [CD], 14- [Linn],
> or 8- [early ensoniq] bit samples every so often.  What happens next is more
> a communications issue than signal processing.

> If you're going to stuff data on some medium [a wire, a tape, a disc or CD],
> the data get _encoded_.  They don't just store 16-bit words, so that every
> 16th bit is the most-significant-bit, etc.

So far as I know the data recorded on an audio CD is precisely a string of
16-bit data words, with a wee bit of formatting and error-detecting information.
Data CDs use a more sophisticated data structure which includes error-
detecting and error-correcting codes, but this happens within the basic CD
format (it's ``software'').

>  The data typically go thru pulse-
> code, FSK, manchester or some other modulating algorithm, and a string of
> one's and zero's gets written to the medium.

Maybe if the medium is a cassette recorder ;>. But weza talkin' optical.
This is the 1990's, the laser age (see also: domestic robots, aircars,
unlimited clean energy from atomic fusion).
>  Then, the demodulator acts on
> arbitrary chunks of this bit-stream and reconstructs samples from it, which
> are then fed to the D/A to make sound.  The chunks may be 18- [sony], 16-
> [common CD], or even 1-bit long [the "1-bit technology" you refer to].

Um, not so sure about the ``arbitrary''.

> [...]
> [...]
> [...]

I'm sure I've seen a good FAQ about this. Now where was it?

>  You can't extrapolate data
> without aliasing, introducing harmonics or some other kind of distortion.
> Now if you were to insert those zero's in the _samples_, you'd have some
> serous problems.  :->  __David

I saw a brochure from A Major Consumer Electronics Company (tm) in which it
was claimed that their CD player could reproduce frequencies beyond the
theoretical maximum by a process of digital extrapolation.  Apart from this
being mainly interesting for dogs, either Nyquist is obsolete or these
people are telling porkies.

Cheers,

Chris

------------------------------
Date:         Fri, 16 Jul 1993 14:11:19 +0100
From:         David J Greaves 
Subject:      CD Sampling rates

Oversampling is not to do with error correction but to do with
filtering.

An old CD player without oversampling presents 16-bit samples to
its DACs (one per channel) at the raw data rate of 44.1 Ksamples
per second.  The DAC then needs to be followed by a sharp
cutoff filter which interpolates between the samples to give
a smooth audio signal with no images left above the audio band
(the Nyquist frequency here is 22.05 kHz).

Such filters are hard to realise with a smooth phase response
using analogue components and getting two channels matched is harder.

With oversampling, digital signal processing is used to generate
additional samples between those on the disc.  This results
in a higher rate of samples at the DAC and therefore a less
stringent analogue filter requirement.

Since we now have a higher sample rate, in theory we could do
with fewer bits of resolution for the same information and signal-
to-noise ratio.
One bit systems take this to the limit.
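
To put rough numbers on "less stringent" (assuming the usual 44.1 kHz CD rate
and a 20 kHz audio band): after N-times oversampling and digital filtering, the
first image the analogue filter still has to remove sits just below N x 44.1 kHz,
so the transition band gets enormously wider.  A few throwaway lines of C:

/* rough arithmetic only: where the first image the analogue output filter
   must still remove ends up, for a few oversampling ratios */
#include <stdio.h>

int main(void)
{
 double fs = 44100.0, audio = 20000.0;
 int ratios[] = {1, 2, 4, 8};
 int i;

 for (i = 0; i < 4; i++) {
  double first_image = ratios[i] * fs - audio;   /* lowest image left over */
  printf("%dx: pass up to %.1f kHz, be quiet by %.1f kHz\n",
         ratios[i], audio / 1000.0, first_image / 1000.0);
 }
 return 0;
}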

David Greaves

------------------------------
Date:         Fri, 16 Jul 1993 13:19:55 GMT
From:         "Daniel S. Riley" 
Subject:      Re: oversampling

John Rossi III  said:
JROSSI> I thought the whole concept of oversampling was arrived at to
JROSSI> achieve a higher cutoff for the brickwall antialiasing filter
JROSSI> (which high end stuff presumably does with an analog filter).
JROSSI> The way some guy at Denon explained it to me was that if you
JROSSI> did not oversample you would need to have a brickwall filter
JROSSI> around 21 KHz.  Four times oversampling effectively increased
JROSSI> the Nyquist frequency to 80 KHz so that a less steep analog
JROSSI> filter could be used in order to avoid phase shifts.  Have I
JROSSI> been believing a lie for all these years?

Well, since no one seems to know, might as well kick in my 2 cents of
uninformed speculation...

John's explanation is the only one I've heard that makes any sense to
me--you insert a bunch of zeroes between every sample, run it through
a 20 KHz digital filter (which can be arbitrarily ideal, given enough
processor power) to recover something that looks like the original
signal, DtoA it at 320 KHz (for 8x oversampling), and then run it
through a nice smooth analog anti-aliasing filter up around 160 KHz.
That turns the nasty 20 KHz filter into a digital filter that can be
optimised for whatever properties you want, and shifts the analog
filter up to a range where you can optimise it for minimal phase
shift without having to worry about making it a brick wall.

The choice between inserting zeroes or inserting the last sample
probably isn't critical--you pick whichever makes the digital filter
design better.
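
For what it's worth, here is a toy version of that chain in C: zero-stuff by
four, then run a short triangular FIR over it.  That particular filter only
does linear interpolation, nothing like the long filters in real players, but
the structure is the same (all names and numbers here are just for illustration):

/* toy interpolation chain: zero-stuff by 4, then convolve with a short
   triangular FIR.  this amounts to linear interpolation between the
   original samples; real players use much longer, flatter filters. */
#include <stdio.h>

#define N     4                       /* oversampling ratio */
#define NIN   8                       /* input samples in this demo */
#define NTAPS (2 * N - 1)

int main(void)
{
 float in[NIN] = {0, 3, 5, 6, 5, 3, 0, -3};
 float stuffed[NIN * N] = {0};        /* zeros already in place */
 float taps[NTAPS];
 int i, k;

 for (i = 0; i < NTAPS; i++)          /* triangle: 1/4, 2/4, ... 1 ... 1/4 */
  taps[i] = 1.0f - (float)(i < N ? N - 1 - i : i - (N - 1)) / N;
 for (i = 0; i < NIN; i++)
  stuffed[i * N] = in[i];

 for (i = 0; i < NIN * N; i++) {      /* plain direct-form convolution */
  float y = 0.0f;
  for (k = 0; k < NTAPS && k <= i; k++)
   y += taps[k] * stuffed[i - k];
  printf("%d %f\n", i, y);
 }
 return 0;
}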

--
-Dan Riley                          Internet: dsr@lns598.tn.cornell.edu
-Wilson Lab, Cornell University     HEPNET/SPAN: lns598::dsr (44630::dsr)
              "Distance means nothing/To me." -Kate Bush

------------------------------
Date:         Fri, 16 Jul 1993 11:20:53 EDT
From:         ronin 
Subject:      Re: oversampling

john's got it.

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Fri, 16 Jul 1993 11:23:54 EDT
From:         ronin 
Subject:      Re: oversampling

the coding format on a cd is pwm, not pcm. the 'bits' of the samples
are not in fact explicitly laid on the disk. rather, the pcm words are
used to modulate the width of a pulse generator, which in turn
controls the writing laser. what gets put on the disk are 'pits' of
varying lengths, their lengths proportional to the magnitude of the
source dataword.

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Fri, 16 Jul 1993 11:33:59 EDT
From:         ronin 
Subject:      oversampling code

try this, and get back to me:

/*** begin ***/
/*oversampling simulator. steps through one cycle of 250 point sine wave.
every point is padded by three interpolation points of value zero.
presents waveform data to differentiator, which subtracts current output
of dq flipflop. differentiator output is added to integrator. integrator
output is presented to dq flipflop, which generates sign of integrator
output. the dq output is the sigma-delta code. output sd code is processed
by simple integrating lopass filter, emulating reconstruction.
outputs wave, dq, lp: oversampled waveform with zeros, sigma-delta code,
lopass filter output. view with spreadsheet.
*/
#include <stdio.h>
#include <math.h>
int i,j,dq,wavelength;
float lp,interp,diff,integ,wave,pi2,qstep;
FILE *outf;

sdmod()
{
 diff=wave-dq;
 integ+=diff;
 if(integ>0)
  dq=1;
 else
  dq=-1;
 lp+=dq*qstep;
 fprintf(outf,"%f,%d,%f\n",wave,dq,lp);
}
main()
{
 outf=fopen("sdmod.dta","wt");
 wavelength=250;
 pi2=3.14159*2;
 qstep=1.0/(wavelength*4.0);
 interp=0;
 for(i=0,diff=0,integ=0,dq=0,lp=0;i<wavelength;i++)
 {
  /* loop body reconstructed from the description above:
     one sine point, then three zero-valued padding points */
  wave=sin(pi2*i/wavelength);
  sdmod();
  for(j=0;j<3;j++)
  {
   wave=interp;
   sdmod();
  }
 }
 fclose(outf);
}
/*** end ***/

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Fri, 16 Jul 1993 12:44:13 EDT
From:         mbartkow@GWENDU.ENST-BRETAGNE.FR
Subject:      Re: oversampling

John,

you are completely right

Maciej

------------------------------
Date:         Fri, 16 Jul 1993 12:16:45 -0700
From:         Michael O'Hara 
Subject:      Re: CD Sampling rates

No, not even close.

Filters that would be used at 1/2 the sampling frequency in a
non-oversampling CD player introduce nasty distortion for the
most part, and are expensive.

Oversampling uses a digital "recirculating interpolator" to
generate its best guess at the samples that would exist if the
sample rate of the CD were much higher than it is.  This allows
for much simpler filters (with more "gentle" slopes) that
introduce a much lower level of ringing, group delay, etc.

It does little to improve the sound, however.  My SONY CDP-520es
is still one of the cleanest units around. No oversampling, just
really expensive filters.

Tech info: Digital oversampling devices use functions of sin(x)
to generate the interpolated digital signal... the actual filters
used here aren't nearly long enough for my taste; so a pulse will
have many, many "ringing buddies" on either side.

The AUDIA digimaster uses a 64x oversampling filter, but with a
"splining" algorythym that generates NO artifacts. $4000 though.

------------------------------
Date:         Fri, 16 Jul 1993 16:23:24 EDT
From:         ronin 
Subject:      Re: CD Sampling rates

actually, it's sin(x)/x, aka the mexican hat function.
is the ringing you refer to that which is associated with the
truncated impulse response captured in an fir? or is it the
secondary lobes on the interpolation function? cuz those are
actually part of the reconstruction process.
i like the idea of a spline-interpolator. i wonder how the
hell you parameterize it on the fly. well, i guess that's why it's
expensive. big buffers, lots of math.

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Fri, 16 Jul 1993 15:11:29 -0700
From:         Michael O'Hara 
Subject:      Re: oversampling

>>>Have I been believing a lie for all these years?

Only if you believe the stuff about "better" CD sound because of them!

------------------------------
Date:         Sat, 17 Jul 1993 23:47:43 +0000
From:         Nick Rothwell 
Subject:      Re: oversampling

>what gets put on the disk are 'pits' of
>varying lengths, their lengths proportional to the magnitude of the
>source dataword.

Doesn't that make it analogue? Or is the pit length read absolutely
accurately and/or corrected?

                        Nick Rothwell   |   cassiel@cassiel.demon.co.uk
     CASSIEL Contemporary Music/Dance   |   cassiel@cix.compulink.co.uk

------------------------------
Date:         Sun, 18 Jul 1993 05:43:11 GMT
From:         Matthew A Siegler 
Subject:      Re: oversampling

Speaking of which, can you believe those "rings" that you put around the edge
of cd's, which are supposed to improve "clarity."  What is this BS?  It seems
to me there are a lot of manufacturers out there banking on the ignorance of
people who only understand principles of analog media.

------------------------------
Date:         Mon, 19 Jul 1993 09:05:37 -0400
From:         Chris Gray 
Subject:      Re: oversampling

> the coding format on a cd is pwm, not pcm. the 'bits' of the samples
> are not in fact explicitly laid on the disk. rather, the pcm words are
> used to modulate the width of a pulse generator, which in turn
> controls the writing laser. what gets put on the disk are 'pits' of
> varying lengths, their lengths proportional to the magnitude of the
> source dataword.

Eric, I don't believe you. In the first place, reading such analogue pits
within one part in 2**16 (for FORTRANners) would be highly demanding on
the player, which would also have to get the rotation speed just right.
Also everything I've read indicates that the Audio CD format is a subset
of the CD-ROM format(s), or rather that the latter are supersets which
include more error detection and other meta-information.

- Chris

------------------------------
Date:         Mon, 19 Jul 1993 11:07:25 EDT
From:         ronin 
Subject:      Re: oversampling

whether or not you believe me is really not of much concern to
me. read a book on the subject. i recommend either pohlmann or
watkinson.
as for the concerns about speed accuracy you mention, they're
even worse than you think. the angular velocity of the cd isn't
constant. the rotation speed changes to keep the data flowing
under the read laser at a constant rate.
of course, it's buffered, you know, so a little jitter is well
accounted for.
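
to put numbers on that: at a constant track speed of very roughly 1.3 m/s,
the spindle has to run somewhere around 500 rpm at the inner radius and
around 200 rpm at the outer edge. a back-of-envelope calculation, with
ballpark figures only:

/* back-of-envelope: constant linear velocity means the spindle speed
   has to change with radius.  the 1.3 m/s and the radii below are only
   ballpark figures for a CD. */
#include <stdio.h>

int main(void)
{
 const double pi = 3.14159265358979;
 double v = 1.3;                            /* approx. track speed, m/s */
 double r;
 for (r = 0.025; r <= 0.0581; r += 0.011) { /* inner to outer radius, m */
  double rpm = v / (2.0 * pi * r) * 60.0;
  printf("radius %.3f m -> about %.0f rpm\n", r, rpm);
 }
 return 0;
}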

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Mon, 19 Jul 1993 12:48:28 -0400
From:         Andy Farnell 
Subject:      Re: oversampling

Oh dear there seems to be a LOT of confusion about this quite simple thing.
I think the context was originally that of CD players and the oversampling
figures quoted by manufacturers so I will stick to this.

Oversampling comes about because it is very difficult to make output filters
which are sufficiently steep. The ideal output filter is sometimes thought of
as a brick wall which passes everything below half the sampling rate and
nothing above. While this is not strictly true, it is a good approximation to
work to. Now good
filter designs require many stages, adding lots of expensive analogue components
which also introduce noise and phase disturbance. The solution is oversampling,
and this has nothing to do with foldover images (aliasing) or the like really.

The idea is to raise the sampling rate used at the output (D/A) without
changing the informational content of the signal. The practical upshot
of so doing is to allow much less hectic filtering of the converted signal
since the components we wish to remove have now shifted up far away from the
audio signal we wish to keep.

The process of up-sampling the data stream is to insert, byte-wise, copies
of the n x 1/Nth byte (where N and n are the old and new sampling rates).
This in a sense is like a digital sample and hold. As for inserting zeros
I don't think this is a very good idea.

Andy Farnell

------------------------------
Date:         Mon, 19 Jul 1993 13:09:20 -0500
From:         Brian Adamson 
Subject:      Re: oversampling

% The process of up-sampling the data stream is to insert, byte-wise, copies
% of the n x 1/Nth byte (where N and n are the old and new sampling rates)
% This in a sense is like a digital sample and hold. As for inserting zeros
% I don't think this is a very good idea.
%

   Zeroes are OK... Just make sure you give your new higher-sample rate
interpolating/ low pass filter some gain.
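
A two-minute check of that point, purely illustrative: stuffing in N-1 zeros
per sample drops the average level by N, so you want the interpolating
filter's DC gain (the sum of its taps) to be N.  A sample-and-hold is a
boxcar of N ones, which has that gain built in; the toy filters below are
invented just to show the arithmetic:

/* sanity check: the DC gain (sum of taps) of the interpolating filter
   should be N to make up for the inserted zeros.  toy filters only. */
#include <stdio.h>

#define N 4

int main(void)
{
 float hold[N]        = {1, 1, 1, 1};                    /* sample-and-hold */
 float tri[2 * N - 1] = {0.25f, 0.50f, 0.75f, 1.00f,     /* linear interp.  */
                         0.75f, 0.50f, 0.25f};
 float gh = 0, gt = 0;
 int i;

 for (i = 0; i < N; i++)         gh += hold[i];
 for (i = 0; i < 2 * N - 1; i++) gt += tri[i];
 printf("hold gain %.2f, triangle gain %.2f (want %d)\n", gh, gt, N);
 return 0;
}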

% Andy Farnell


                                - Brian Adamson
                                  adamson@itd.nrl.navy.mil
                                  Code 5523
                                  Naval Research Laboratory
                                  Washington, DC 20375

------------------------------
Date:         Tue, 20 Jul 1993 07:47:19 +0000
From:         Nick Rothwell 
Subject:      Re: oversampling

>Speaking of which, can you belive those "rings" that you put around the edge
>of cd's, which are supposed to improve "clarity."  What is this BS?  It seems
>to me there are a lot of manufacturers out there banking on the ignorance of
>people who only understand principles of analog media.

Erm... I have no idea whether green pens work at all, but I don't see why
they might not in principle: reduced reflections (or whatever they're
supposed to do) leading to less error correction and/or data loss. I guess
it depends whether the EC scheme is lossy or not, and how close to the edge
the average CD player sails.

Audio Technica make "CD stabilisers", presumably for a similar reason.

                        Nick Rothwell   |   cassiel@cassiel.demon.co.uk
     CASSIEL Contemporary Music/Dance   |   cassiel@cix.compulink.co.uk

------------------------------
Date:         Tue, 20 Jul 1993 05:16:07 -0700
From:         Michael O'Hara 
Subject:      Re: oversampling

The audio CD format:

uses an MFM recording scheme, just like disk drives.  This scheme encodes the
data (making it larger) in order to prevent long strings of 1's or 0's from
occurring. This is a must, as the laser control servos have to have something
to lock onto... in this case, they are going for the loudest "noise", thus
signaling the center of the "track".

In fact; the above applies to ALL CDs.

Keep in mind that all CDs are a repository of data; a flawed one, however.

Audio CDs use a cross-interleaved error correction (like Pascal's "magic
squares") where the data is arranged into tables, and the checksums of the
rows, and columns are recorded along with the data.
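
The real scheme is cross-interleaved Reed-Solomon rather than simple
checksums, but the row-and-column geometry can be sketched with plain XOR
parity: a single bad byte shows up as one bad row check and one bad column
check, which locates it.  Toy code only, not the actual CD format:

/* toy version of the row-and-column idea: XOR checksums over a 4x4 table.
   a corrupted byte is flagged by one bad row and one bad column. */
#include <stdio.h>

#define DIM 4

int main(void)
{
 unsigned char data[DIM][DIM] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
 unsigned char row[DIM] = {0}, col[DIM] = {0};
 int r, c;

 for (r = 0; r < DIM; r++)                 /* record checksums with the data */
  for (c = 0; c < DIM; c++) {
   row[r] ^= data[r][c];
   col[c] ^= data[r][c];
  }

 data[2][1] ^= 0x55;                       /* simulate a read error */

 for (r = 0; r < DIM; r++) {               /* recompute and compare */
  unsigned char chk = 0;
  for (c = 0; c < DIM; c++) chk ^= data[r][c];
  if (chk != row[r]) printf("bad row %d\n", r);
 }
 for (c = 0; c < DIM; c++) {
  unsigned char chk = 0;
  for (r = 0; r < DIM; r++) chk ^= data[r][c];
  if (chk != col[c]) printf("bad column %d\n", c);
 }
 return 0;
}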

A "data" CD has lots of other error correction as well, and in addition,
there is a duplicate of all data.  CD players for the computer have
an additional level of hardware error correction that Audio home units do not.

perhaps that was redundant, I will learn to use a darn editor someday.

Anyway; that kind of covers it.  I can post more detailed explanations
if desired.

------------------------------
Date:         Tue, 20 Jul 1993 06:02:22 -0700
From:         Michael O'Hara 
Subject:      Re: oversampling

Zeros are OK?

Not.

The process is simple, kind of.

Data to be oversampled is fed into a digital filter, that is, a long
"row" of latches. This data moves through the input side of the
filter, conveyor style.

Since we need several outputs for each time the input conveyor moves
up a notch, a state machine (A small computer of sorts, kind of) takes
some number of those input data words, all at once, and generates
its best idea of what imaginary samples in between the original samples
would look like, based on some function, the most common and cheapest
being the sin x over x function. This is done contextually - several
input samples are required. (I repeat this because it is the main
point about oversampling).

Anyway - we leave those expensive analog filters behind, but at a cost.

That cost is the "ringing" that is introduced by the limited number of
samples that the filter can take as input (the length of the input conveyor).

This ringing makes a single pulse applied to the input stream have
little ripples on each side (the mexican hat thingy) when it shouldn't
(the washington monument thingy).

Some (I like Audia's unit best) have implemented much better oversampling
algorithms: French curve, splining, etc.

More rambling from your friendly neighbourhood Dolphin.

------------------------------
Date:         Tue, 20 Jul 1993 10:12:00 EDT
From:         John Rossi III 
Subject:      Re: oversampling

I find it amazing how many ways a CD can actually work.  Currently, I am
going with the LFEP (Low Flux Encoding Position) theory.  Basically, the
way this works is that the pits on a CD cause physical areas of low
electromegnetic flux when illuminated by the laser.  The position of the
reflected laser beam is converted to an absolute amplitude by recovery
of the positional rotation of the reflection induced by the pit.  In this
way, the CD is capable of providing encoding of multiple (up to four, I
guess) streams of information allowing for corrected-phase stereo (and
quad, I guess) separation.  The size of the pit is only important in
a relative sense in that large pits produce more flux than small pits.

John

------------------------------
Date:         Tue, 20 Jul 1993 16:13:11 +0200
From:         Adam MIROWSKI 
Subject:      Re: oversampling

Michael O'Hara writes:
>  [...]
>
> That cost is the "ringing" that is introduced by the limited number of
> samples that the filter can take as input (the length of the input conveyor).
>
> This ringing makes a single pulse applied to the input stream have
> little ripples on each side (the mexican hat thingy) when it shouldn't
> (the washington monument thingy).

I am not a big digital audio specialist, but...

Couldn't this ringing be the result of the limited bandwidth?
You cannot transmit a square pulse through a limited bandwidth
system, because a square pulse has an infinite bandwidth.
So maybe this "mexican hat" is what remains from a square pulse
(or from a Dirac impulse) after cutting high frequencies?
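
That is indeed the standard picture: the impulse response of an ideal
brick-wall lowpass is sin(x)/x, which already has ripples on either side of
the main lobe.  A few throwaway lines to print its shape (illustration only):

/* impulse response of an ideal lowpass: sin(x)/x, ripples and all */
#include <stdio.h>
#include <math.h>

int main(void)
{
 const double pi = 3.14159265358979;
 double t;
 for (t = -8.0; t <= 8.0; t += 0.5) {
  double y = (fabs(t) < 1e-9) ? 1.0 : sin(pi * t) / (pi * t);
  printf("%5.1f  %+f\n", t, y);
 }
 return 0;
}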

------------------------------
Date:         Tue, 20 Jul 1993 09:36:51 -0700
From:         Michael O'Hara 
Subject:      File: "EMUSIC-L LOG9307C"

----------------
I find it amazing how many ways a CD can actually work.  Currently, I am
going with the LFEP (Low Flux Encoding Position) theory.  Basically, the
way this works is that the pits on a CD cause physical areas of low
electromegnetic flux when illuminated by the laser.  The position of the
reflected laser beam is converted to an absolute amplitude by recovery
of the positional rotation of the reflection induced by the pit.  In this
way, the CD is capable of providing encoding of multiple (up to four, I
guess) streams of information allowing for corrected-phase stereo (and
quad, I guess) separation.  The size of the pit is only important in
a relative sense in that large pits produce more flux than small pits.

John
----------------

Not on CDs, but IBM has demonstrated a blue light laser disk magnetooptical
type system not too unlike what you mention
----------------

------------------------------
Date:         Tue, 20 Jul 1993 11:40:44 CDT
From:         Bob Crispen 
Subject:      Re: oversampling

Nick Rothwell  sez:

>Erm... I have no idea whether green pens work at all, but I don't see why
>they might not in principle: reduced reflections (or whatever they're
>supposed to do) leading to less error correction and/or data loss. I guess
>it depends whether the EC scheme is lossy or not, and how close to the edge
>the average CD player sails.
>
>Audio Technica make "CD stabilisers", presumably for a similar reason.

May I quote the alt.folklore.urban faq:

>THE MISAPPLIANCE OF SCIENCE
...
>F. Coloring your CD's rim with (special) marker will enhance sound quality.

The "F" before the line above means (according to their notes):

>  F  = 100% falsehood

The gang at _Stereo Review_ (this is about a year ago I believe) took
a test CD and copied from the digital outputs of a CD player into a
humongous disk file.  They tried this with the CD stabilizer and with
green ink, pointy rubber feet, etc.  Then they compared the files, bit
for bit, with and without the devices.  Verdict: no difference.

While reading the faq file, I noticed another CD urban legend which
turns out to be true:

>T.*CDs are the size they are because it could hold Beethoven's 9th symphony.

and also this one which is false (damn shame):
>F. Mime has heart attack during act. People think it's part of act; he dies.
+-------------------------------+--------------------------------------+
| Rev. Bob "Bob" Crispen        | Music should not be held responsible |
| crispen@foxy.boeing.com       |   for the people who listen to it.   |
+-------------------------------+--------------------------------------+

------------------------------
Date:         Tue, 20 Jul 1993 11:49:36 CDT
From:         Bob Crispen 
Subject:      Re: Oversampling

Following up the green marker thing, I have discovered a phenomenon
called LP rot.

During the 1970s and early 1980s, LP records stored on my shelf
and never played developed scratches and crud.  This phenomenon
occurred mainly on rock records; almost never on classical
records.   And it stopped in the late 1980s, just about the time
my kids left home.

Since my kids swore to me that they never touched the records,
I can only conclude that I was a victim of LP rot.
+-------------------------------+--------------------------------------+
| Rev. Bob "Bob" Crispen        | Music should not be held responsible |
| crispen@foxy.boeing.com       |   for the people who listen to it.   |
+-------------------------------+--------------------------------------+

------------------------------
Date:         Tue, 20 Jul 1993 13:46:00 EDT
From:         John Rossi III 
Subject:      Re: oversampling

Yea, and the guys at stereo review don't believe that there is a difference
between the affective perception of digital and analog, either.

john

------------------------------
Date:         Tue, 20 Jul 1993 13:31:27 CDT
From:         Bob Crispen 
Subject:      Re: oversampling

John Rossi III  sez:

>Yea, and the guys at stereo review don't believe that there is a difference
>between the affective perception of digital and analog, either.

I'm not going to fight that one.  Let's get back to the original
point, since your post casts some doubt on it.  If the stream of bits
coming off the CD and out of the player's digital outputs is identical
when the green magic marker is on and when it's off, just *what* kind
of information is getting from the CD to the listener's ears that might
be affected by the green magic marker?

Are you claiming that the green magic marker on the CD leaves the
bitstream from the CD surface to the pickup alone, but somehow affects
the CD player's ability to convert those bits to analog?  Could you
please explain the mechanism involved?
+-------------------------------+--------------------------------------+
| Rev. Bob "Bob" Crispen        | Music should not be held responsible |
| crispen@foxy.boeing.com       |   for the people who listen to it.   |
+-------------------------------+--------------------------------------+

------------------------------
Date:         Tue, 20 Jul 1993 14:41:40 EDT
From:         ronin 
Subject:      cd encoding

playback:
the laser views the cd from the bottom, so the pits appear as bumps.
the height of these bumps is on the order of 1/4 the wavelength of the
laser, which has already been shortened by the refractive index of
the polycarbonate substrate. the beam bouncing off these bumps interferes
with the incident beam, periodically decreasing the intensity of the total
reflected beam. this changing beam intensity is picked up by photodetectors,
which then convert modulated light to electrical signal.
recording:
stereo information is interleaved as part of the Reed-Solomon algorithm,
and the resultant stream undergoes 'eight-to-fourteen' modulation (EFM),
in which each byte is converted to a 14-bit word designed to minimize
the number of 1/0 transitions in any given word.
reference:
brent karley, "optical disk technology", in ken pohlmann, ed., "advanced
digital audio".

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Tue, 20 Jul 1993 13:00:48 -0700
From:         Michael O'Hara 
Subject:      Re: cd encoding

...14-bit words designed to minimize
the number of 1/0 transitions in any given word.

I think that should be MAXIMISE. i.e. keep the signal noisy and
trackable.

------------------------------
Date:         Tue, 20 Jul 1993 16:08:41 EDT
From:         ronin 
Subject:      Re: cd encoding

On Tue, 20 Jul 1993 13:00:48 -0700 Michael O'Hara said:
>...14-bit words designed to minimize
>the number of 1/0 transitions in any given word.
>
>I think that should be MAXIMISE. i.e. keep the signal noisy and
>trackable.

no.
tracking is accomplished with guide tracks.

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Tue, 20 Jul 1993 16:15:00 EDT
From:         John Rossi III 
Subject:      Re: oversampling

First, it was Nick and not me who mentioned that the Green Marker stuff
might possibly have some credibility.  Mine was just a post which poked
fun at stereo review as a source of ALWAYS CREDIBLE information.  Anyway,
when you think about it, what you have in a CD is a layer of aluminum which
is embedded in some transparent/translucent plastic.  Independent of the
actual conversion/encoding/decoding mechanisms which are involved in the
extraction of the information from the disk, the fact remains that you have a
laser shining electromagnetic radiation through a layer of transparent/
translucent plastic and a receiver which responds to the reflected laser on
the way back.  Chromatically, one might envision all kinds of prism-like
things which might happen as the laser beam traverses the plastic media.
It would not be impossible for the actual transmission process to be
affected by the coating of the disk.  Think about how light diffuses through
glass (i.e., the principle by which fiber optics work).  Although it is
improbable, a green ring around the outside of a disk may act as some kind
of light termination device (kind of like a resistor termination in a
high speed electronic circuit) which stops any diffused light from affecting
the primary beam and reflection, thus reducing interference.

Now, I'm not saying that I believe any of this, but you asked for a POSSIBLE
explanation of how this might work.

Often, people are over-quick to eliminate absurd suggestions simply because
they are absurd.  Sometimes, however, absurdity is the perfect fit for
reality.  My point about the analog/digital difference and the viewpoint
of Stereo Review is an example.  After years of A/B testing the general
consensus is that standard CD audio can not be affectively discriminated
from analog.  If you read Dave Moulton's article about 3 Home Studio and
Recording issues ago, you might recall that some Japanese academic group
was able to back up Neil Young's assertion that digital is affectively
different than analog.

Now if only people would start to consider the HIV hypothesis of AIDS as
absurd as it is, and people would agree.

John

------------------------------
Date:         Tue, 20 Jul 1993 21:15:40 -0700
From:         Michael O'Hara 
Subject:      Re: cd encoding

Tracking is accomplished with guide tracks?

Whatever are you talking about?  The entire reason for changing
the data stream to MFM encoding IS TO ENSURE that there is NOT
a string of ones or zeros (consecutively speaking) in the data.

The pickup only sees transitions pit wise, and a long string of
1's can't even be seen by the pickup!!  Same for 0s.

The side effect of this (the main reason for this, perhaps) is that
the data stream is always "noisy".  The more centered on the track you
are (ignoring the focus / diffraction wavefront for simplicity's sake),
the more noisy the picked up signal is. If your detector is divided
into sections, then you can develop a directional error signal as well.
(They are always this way - usually in quadrants, with the focus diff.
grating included here.)  No "guide tracks" that I have ever heard of.

If I am wrong, please give me a reference.
I wish to be an enlightened Dolphin. :)

------------------------------
Date:         Tue, 20 Jul 1993 21:29:57 -0700
From:         Michael O'Hara 
Subject:      Re: oversampling

I think, personally, that the CD optical methodology is sound.
Even if there is an odd dropout or two, No Big Deal.

Clean, well cared for CDs are the most important thing, and since
all we care about is the inteference of the wave fronts at the
surface, and an acceptable S/N to discern the transitions, that
should have absolutely no effect. (the green marker)

I *have* heard small defects on a thrashed CD that was compared to its
replacement.  But this CD was REALLY fu**ed up.

The main problem is the *distortion* caused by cheap digital
filters.  I think this problem is as bad, sound-wise, as the
bad filters in the very first CD player I had (remember those
early units? mine took 2 min to start on the CBS CD leader...).

The main difference in oversampling in most units is COST

My fav Sony CDP-520 can reproduce a pulse without ringing,
(no oversampling) and I wouldn't trade it for most of the
junk out there today.  Excellent analog filters. (57 poles i heard)

Anyway - don't worry about the CD mechanics - they work well.

Your filters are the most important thing. :)  And only there
(assuming monotonicity) will audible benefits ensue.

Dolph.

------------------------------
Date:         Wed, 21 Jul 1993 09:46:22 GMT
From:         Martin Rootes 
Subject:      Re: Green Magic Markers

I don't know about the effect of green marker pens on CDs but I do know that
my can of elephant repellent works brilliantly. My friends keep saying there's
no elephants in Sheffield, well that proves it works. It only cost me £1000
too, but you can't put a cost on security.

    Martin.

Come on, if this really did work then why aren't CDs produced with a green
ring round the outside?





------------------------------------------------------------------------------
Martin Rootes - Senior Systems Programmer/Analyst, Sheffield Hallam University
Email :         M.Rootes@shu.ac.uk
------------------------------------------------------------------------------

------------------------------
Date:         Wed, 21 Jul 1993 11:37:19 BST
From:         Mark Etherington 
Subject:      CDs and all that

I foolishly borrowed my brother's cheap CD player, which at the time appeared
to simply be having trouble reading some of my CDs. Actually what it was doing
was putting small circular scratches on them, meaning I can't play certain
tracks on many of my CDs now.

Is there a particularly coloured pen that will cure this?

Mark Etherington
Queens' College
Cambridge UK

------------------------------
Date:         Fri, 23 Jul 1993 07:31:47 +0000
From:         Nick Rothwell 
Subject:      Re: CDs and all that

>Is there a particularly coloured pen that will cure this?

Toothpaste.

                        Nick Rothwell   |   cassiel@cassiel.demon.co.uk
     CASSIEL Contemporary Music/Dance   |   cassiel@cix.compulink.co.uk

------------------------------
Date:         Fri, 23 Jul 1993 10:33:51 EDT
From:         ronin 
Subject:      cd tracking

you are correct in that i have misinterpreted something i read
a little too quickly. on the other hand, your own explanation
doesn't parse, and doesn't seem to connect with a closer reading
of the information i have available (again, pohlmann and watkinson).
for instance, why do you continue to insist that the encoding technique
is MFM, which i understand to be an actual magnetic format? is it really
just another bit-to-pulse scheme that can modulate any data-writing
technology? is it related to EFM (which *is* what cds are encoded with)
in some way that i don't grasp? have you confused cds in particular with
optical disks in general, which might have more of the characteristics
you ascribe to them?
as for tracking...
you're right. there are no guide tracks. one conventional means of tracking
is to illuminate the track with three beams, usually derived from one by
means of a diffraction grating. the first 'extra' beam runs a little bit
ahead and to one side of the main beam, and the other runs a little bit
behind and to the other side. the signal from the front beam is delayed
to match up with the rear beam, both are lopassed to remove channel data,
and are fed to a differential amplifier, which produces a signal proportional
to their difference in average brightness amplitude. this signal controls
the tracking servo. if the beam is off track, then one tracking beam will
be a little off the pits and onto the mirror, and the other beam will be
vice versa, so their average amplitudes over the same pit region will be
different. the sign and magnitude of the difference is used to correct
the pickup sled.
again, this is for cds. i make no statements about optical disks in general,
about which i have no information and know very little (although watkinson
actually does have a nice section on optical disk variations, which i
just haven't bothered to read).
now... what's this about noise, again?
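
a cartoon of that three-beam error signal in C, with invented detector
numbers, just to show the arithmetic: average the two side-beam signals to
strip out the channel data, subtract, and the sign tells the servo which way
to nudge the sled. nothing below comes from a real pickup:

/* sketch of the three-beam tracking error described above.  the crude
   running average stands in for the lowpass; the sample data is invented
   purely for illustration. */
#include <stdio.h>

#define NSAMP 8

int main(void)
{
 /* pretend detector outputs: the sled is off toward the 'front' side,
    so that beam sees more pit area and returns less average light */
 float front[NSAMP] = {0.2f, 0.3f, 0.2f, 0.4f, 0.3f, 0.2f, 0.3f, 0.2f};
 float rear[NSAMP]  = {0.7f, 0.8f, 0.6f, 0.7f, 0.8f, 0.7f, 0.6f, 0.7f};
 float avg_front = 0, avg_rear = 0;
 int i;

 for (i = 0; i < NSAMP; i++) {       /* 'lowpass': average out channel data */
  avg_front += front[i] / NSAMP;
  avg_rear  += rear[i] / NSAMP;
 }
 printf("tracking error = %f (sign drives the servo)\n", avg_front - avg_rear);
 return 0;
}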

-----------< Cognitive Dissonance is a 20th Century Art Form >-----------
Eric Harnden (Ronin)
 or 
The American University Physics Dept.
4400 Mass. Ave. NW, Washington, DC, 20016-8058
(202) 885-2748  (with Voice Mail)
---------------------< Join the Cognitive Dissidents >-------------------

------------------------------
Date:         Fri, 23 Jul 1993 08:31:29 -0700
From:         Michael O'Hara 
Subject:      Re: cd tracking

Heh.. MFM is the parent and superset of EFM. same thing.

The three beam system is a variation on the system I described.
The principles are the same, the diffraction is done a little
differently.

The three beam type pickup is the most common in use today.

The Modified FM :) used for writing the data ensures that there
is always activity in the track, "Noise" is what it always looks
like.  The actual data may have long strings of 0s - this would
be "silence" if recorded on the track directly.  Hence, the Encoded FM :)
signal (compresses more than MFM, and was as far as I know,
invented by Steve Wozniak for the DiskII! - I may be off base here)
is used instead.  In the case of a diskdrive, this is because the heads
can only sense an AC signal, in the case of a CD - it is used for
tracking.  Two different problems, same solution!

Stranger unt stranger.

Oh, yes, straight MFM just adds a "1" pulse between every data bit.
Since there can be 2 or 3 1s or 0s in a row before problems occur,
we just choose all the data bytes that will work, e.g. FF is BAD.
66 is GOOD. Take a list of all those "good bytes".  Let us say we
get 75 good bytes.  We will take 64 of those bytes and use them for
writing data to the disk.  In essence this makes our storage 6 bits wide.
And it leaves us with several "reserved" bits that allow us to implement
an "autosync" system that means we do not need to use the optical switch
to tell where a "sector" or "block" etal. begin.  We just use a couple
of the reserved bytes, these never occur in the data track.  The 8 bit
data can be converted to 6 bit data however you like. 256 8 bit bytes =
342 6 bit bytes.

Heh, I would insert after the first line of that rambling paragraph,
"In the case of EFM, we try to use the space more efficiently."

I should use an editor, but they suck. :)

whew. any other questions?

Oh, I should say that early disks used a little hole that had a lamp
shining behind it. Once every revolution, the hole punched in the media
would pass by, signaling the start of track. (yecch.)

your Friendly neighborhood Dolphin.

------------------------------
End of the EMUSIC-L Digest
******************************