[sdiy] digital control of CV
John Mahoney
jmahoney at gate.net
Tue Sep 4 01:09:26 CEST 2007
Below is a 2002 vintage of Paul's "white paper", reposted for your
collective enjoyment. Thanks to an anonymous list member for sending it to me.
The selection of chips on the market today may differ from what's
noted, below, but the principles are the same. Etched in silicon, you
might say (if you were a bad punster).
--
john
From: "Paul Schreiber" <synth1 at airmail.net>
Subject: [sdiy] DAC selection in MIDI-CV Part #1-3
Date: Wed, 18 Sep 2002 12:09:41 -0500
> "I'm designing one at the moment (16 channels + 4 gates).
> It will be based around a PIC and MAX525BCPP 12bit D/As.
> These Quad-D/As are relatively cheap, but accurate enough
> (+/- 1 LSB) for pitch control over a full 10V scale."
No, they are NOT! (speaking as Maxim's DAC FAE). Although I've
discussed this several times in the past (check the archives), I'm
doing some DAC apps now (on the older MAX502 parallel 12-bit DAC), so
it's time for an ADC/DAC refresher course :)
Here is a DAC FAQ.....
a) What errors are in DACs, and what errors can I control?
There are MANY sources of error in DACs and their 2 main "outside"
components: the voltage reference and the buffer amp. You *must* do a
full "system level study" because all the errors add up, and you have
to address ALL of them.
The first thing to address is simple but often overlooked: what
voltage accuracy do I need? This has NOTHING AT ALL to do with the
number of bits in the DAC! It has to do with the VCOs. What ideal
voltage does the VCO need to receive in order to be on pitch
(assuming the VCO is 'perfect')?
At 1V/octave, a semitone step is 1/12V, or 83.333mV. If we desire 1%
of that (pretty much the limit of pitch perception), then the voltage
fed to the VCO out of the DAC system cannot have an error of more
than 833uV. That's 833 MILLIONTHS of a volt. It doesn't matter if we
are in the 0-1V octave or the 9-10V octave: the DAC system MUST be
able to 'hit' a voltage within 833uV.
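As a quick sanity check, the arithmetic above can be sketched in a few
lines (nothing here is from a datasheet; it's just the 1V/octave and 1%
figures restated):

```python
# Error budget for a 1V/octave pitch CV, using the figures above.
volts_per_octave = 1.0
semitone_v = volts_per_octave / 12          # 83.333 mV per semitone
tolerance = 0.01                            # ~1% of a semitone is audible
error_budget_v = semitone_v * tolerance     # 833 uV

print(f"semitone step: {semitone_v * 1e3:.3f} mV")
print(f"error budget : {error_budget_v * 1e6:.0f} uV")
```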
Setting the DAC chip aside, let's look just at the op amp buffer
(this may be inside the DAC chip in some parts). The critical spec
here is the op amp's output offset voltage drift. We really don't
care what the initial offset voltage is (it's a constant), but it had
better not drift around over temperature, or the VCOs drift. The best
op amps have drifts of 5uV/C or better. Be sure yours does (like the
MAX427 or MXL1013, used extensively in MOTM; see www.maxim-ic.com for
datasheets and FREE samples).
The next thing to worry about is the temp drift of the reference. The
reference may be internal or external to the DAC. As a rule, external
references are *much* more accurate and stable. There are again 2
specs of concern: accuracy and drift. Accuracy is part of the overall
"voltage offset", because the output of the DAC is some fraction of
the voltage reference (DACs work by precisely dividing down the
reference). Drift is in PPM (parts per million) per degree C. A 5V
reference with a 5PPM/C spec drifts 25uV/C. Choose a reference at
50PPM or better (the same tempco as a really good 0.1% resistor). Get
10PPM and be done with it :)
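Converting a PPM tempco into absolute drift is a one-liner; this sketch
(assuming a 5V reference, as in the text) tabulates a few grades:

```python
# Absolute reference drift: V_ref * tempco(PPM/degC) * 1e-6 volts/degC.
def drift_v_per_degc(v_ref, ppm_per_degc):
    return v_ref * ppm_per_degc * 1e-6

v_ref = 5.0
for ppm in (50, 10, 5):
    print(f"{ppm:>3} PPM/degC -> {drift_v_per_degc(v_ref, ppm) * 1e6:.0f} uV/degC")
```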
Now let's look at issues with the DAC itself.
When you think of a DAC, the first thing that pops into your head is
"how many bits", but *really* the first thing needs to be "What is
the Integral Non-Linearity (INL) error?" Unless you are an 'IT
Professional', and the first thing you think of is "Who is buying the beer?"
INL is the overall ACCURACY of how well the DAC 'hits' the output you
tell it to. You can have a 24-bit DAC, tell it to output 1.000V, and
if it outputs 1.0175V, that's a problem. DACs are designed to be used
in one of 2 ways: as an open-loop, absolute voltage generator, or in
a "set until happy" system (say, a volume control, where the user
doesn't care what digital code generates the output; only the output
matters). In a MIDI-CV converter, the WHOLE IDEA is to translate a
SPECIFIC code (the MIDI note) to a SPECIFIC voltage. That is why you
really have to slog through all of this stuff :)
Much DAC confusion is associated with MIDI being "only 7 bits of
information". Heck, an 8-bit DAC is one better, right?!??
Errr....nope. All that means is that there are 128 possible outputs,
but it says NOTHING AT ALL about how ACCURATE those 128 outputs have
to be. I have designed systems with 4 output voltages, but they had
to be ACCURATE to within 4uV at 2PPM. So, do not confuse "code
length" with "DAC bits".
Here is the nitty-gritty: picking the right (translate: cheapest!)
DAC for the application.
What you do is 1 simple, easy calculation. You divide the overall
voltage span you want the *DAC* to cover by *1/2* of the error
voltage. This tells you how many 'steps' the DAC needs at +-1LSB
INL. Again, we DON'T CARE how many steps the INPUT has; we only care
what the DAC ACCURACY is!!
Let's start with the following "design document":
Output Range: 0 - 5V
Error: 1mv
So, the number of DAC steps, at a specified INL (this is on the data
sheet) of +-1LSB is:
5/.0005 = 10,000 steps, and each step is +-1LSB INL (this is a
function of laser die trimming at the manufacturer).
OK, so what DACs have 10,000 steps? Well, the DACs are *binary weighted*, so:
12 bit: 4096 steps
14 bit: 16,384 steps
16 bit: 65,536 steps
We are immediately hosed with a 12-bit DAC, so the next in line is a
14-bit. I suggest the Maxim MAX544BCPA, SPI interfaced 8-pin DIP part
with a MAX675CPA +5V voltage reference.
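The selection logic above can be sketched as a minimal calculation
(using the 0-5V span and 1mV error from the design document):

```python
import math

span_v = 5.0      # output range from the design document: 0-5V
error_v = 1e-3    # allowed error: 1mV

# Rule above: divide the span by half the error voltage to get the
# number of steps needed at +-1 LSB INL, then round up to a binary size.
steps_needed = span_v / (error_v / 2)             # 10,000 steps
bits_needed = math.ceil(math.log2(steps_needed))  # 14 -> a 14-bit DAC

print(f"steps needed: {steps_needed:.0f}")
print(f"bits needed : {bits_needed}")  # a 12-bit part (4096 steps) falls short
```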
Now, what if we can get a better laser-trimmed part, say +-1/2LSB?
Well, let's look at the MAX536ACPE, a quad 12-bit DAC at +-1/2LSB.
Instead of our 1mV error, we have a (5/4096) = 1.22mV error. So, for
a multi-channel MIDI-CV whose DAC output ranges 0-5V, where you can
live with a 1.22mV error (1.5% of a semitone), the MAX536ACPE is your
part.
But wait! As you are browsing parts, the MAX5541, a LOW COST 16-bit
part, catches your eye. Hmmm...the data sheet says this thing has a
whopping 16 LSBs of INL error! Can we use that??!? The MAX541 has
4LSBs of INL but it's twice the price. Let's check them out.
In a 5V span, 1 'ideal' LSB in a 16-bit system is (5/65536) = 76.3uv.
Therefore, 16LSBs are (16)(76.3uv) = 1.22mv! Hey, I've seen THAT
number before :)
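That equivalence is easy to verify (a quick check; the function name is
mine, the spans and INL figures are from the text):

```python
# Absolute accuracy implied by an INL spec: INL (in LSBs) times the LSB size.
def abs_error_v(span_v, bits, inl_lsbs):
    return inl_lsbs * span_v / (1 << bits)

err_16bit = abs_error_v(5.0, 16, 16)  # 16 LSB INL at 16 bits: 16 * 76.3uV
err_12bit = abs_error_v(5.0, 12, 1)   # +-1 LSB INL at 12 bits: 1 * 1.22mV
print(f"{err_16bit * 1e3:.2f} mV vs {err_12bit * 1e3:.2f} mV")  # identical
```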
Yes, it turns out this "16-bit" DAC has the SAME ACCURACY as the
"12-bit" DAC!!!! So why bother??!?
There are several reasons, and one is the dreaded Marketing :)
But, let's say you wanted a -5V to +5V span, and THEN the 4LSB
MAX541C grade would work great! If you used it over a 0-5V range,
you can go nuts and use the 'A' graded (trimmed) part and get errors
under 300uv (yummmy, MOTM territory....)
Last point: you may find that it is "easier" to have the DAC span a
PORTION of the total output, and add DC voltages to span octaves.
This technique is used in some commercial converters. Let's take the
extreme case where the DAC spans only 1 octave. What can we use?
1/.0005 = 2000 LSBs. So, a 12-bit DAC at +-2LSB INL is fine. In the
'olden days', there were very accurate 8 and 10-bit DACs (say,
trimmed to +-1/8LSB) that could work. But today, you would use a low
cost 12-bit part.
There is one more factor, and that is a software issue. If you have 7
bits in and 14 bits out, you need a lookup table for the DAC. Also,
this wider spread allows things like pitch bend, aftertouch and
software LFOs to avoid 'zipper noise'.
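A minimal sketch of such a lookup table (the 0-5V span at 1V/octave and
the 14-bit width are assumptions consistent with the examples above,
not a description of any particular converter):

```python
# Build a MIDI-note -> 14-bit DAC code lookup table for a 1V/octave CV.
DAC_BITS = 14
CODES = 1 << DAC_BITS       # 16,384 codes over the full scale
FULL_SCALE_V = 5.0          # assumed DAC output span: 0-5V (5 octaves)

def note_to_code(note):
    """Map MIDI note 0..60 onto 0-5V at 1V/octave (higher notes clip)."""
    volts = note / 12.0                               # 1 semitone = 1/12 V
    code = round(volts / FULL_SCALE_V * (CODES - 1))
    return min(code, CODES - 1)

table = [note_to_code(n) for n in range(128)]         # one entry per note
```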
Well, hopefully this sheds some light on DAC selection.
Paul S.
--