Analog-to-digital (A/D) converters -- also known as ADCs -- are remarkably handy to have around for reading sensors and transducers, monitoring supply voltages, taking temperature readings, and the like.
They're especially handy if you've already got an A/D converter or two lurking in your design just waiting to be used. That's actually the case for all of the Xilinx 7 Series All Programmable FPGAs, 3D ICs, and SoCs. Each of these devices incorporates an AMS (analog mixed-signal) module called the XADC that includes two 12-bit, 1Msamples/sec A/D converters, a 16-channel analog multiplexer, and a second multiplexer connected to on-chip power supply voltages and an on-chip temperature sensor.
Here's a block diagram of the Xilinx 7 Series XADC:
The two 12-bit, 1Msamples/sec A/D converters in the Xilinx XADC are good for a wide variety of industrial applications just as they are, but sometimes you need a bit more resolution... or perhaps two bits more.
Wouldn't you prefer to use what you've already got on chip rather than add BOM (bill of materials) cost and chew up board space with another component or two?
There is a way to obtain that additional A/D resolution if you're willing to give up sample rate. The technique is called "oversampling and decimation," and it's based on some solid fundamental principles starting with the Nyquist-Shannon sampling theorem, which gives the minimum sampling rate needed for a band-limited signal as follows:
fn = 2fm
Where fm is the maximum frequency of interest and fn is the minimum required sample rate (the Nyquist rate). For example, a signal band-limited to 10 kHz must be sampled at no less than 20 ksamples/sec.
Most engineers learn about the Nyquist sampling frequency somewhere along the line. Here it is, actually being useful.
Although it may not be intuitive, you can increase the effective number of bits (ENOB) of A/D resolution through a trick called oversampling gain, as described by the following equation:
fos = 4^W · fs
Where fos is the oversampling frequency that you need to achieve W bits of additional resolution compared to what you have at sample rate fs.
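To put numbers on the trade-off: gaining W = 2 bits (14 effective bits) from the XADC's 1 Msamples/sec converter leaves you with an effective output rate of 1 MHz / 4^2 = 62.5 ksamples/sec. Here's a small, purely illustrative C sketch (not Xilinx code) that tabulates that trade-off for a few values of W:

```c
#include <stdio.h>

/* Purely illustrative: apply fos = 4^W * fs to the XADC, treating its
 * 1 Msamples/sec conversion rate as the oversampling frequency fos and
 * solving for the effective (decimated) output rate fs at each W. */
int main(void)
{
    const double fos = 1.0e6;    /* XADC conversion rate used as fos */
    const int native_bits = 12;  /* native XADC resolution           */

    for (int w = 1; w <= 4; w++) {
        int ratio = 1 << (2 * w);          /* 4^W oversampling ratio  */
        double fs = fos / ratio;           /* effective output rate   */
        printf("W = %d: %d effective bits at %.1f ksamples/sec\n",
               w, native_bits + w, fs / 1e3);
    }
    return 0;
}
```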
How is this possible? It sort of looks like magic, doesn't it?
Well, like most magic tricks, this one requires the proper preparation. Here are the rules:
- The band-limited signal being measured must not vary by more than 0.5 LSB (least-significant bit) during a conversion over the entire 1/fs period. (Note: NOT 1/fos.)
- There needs to be at least 1 LSB of Gaussian (white) noise with a mean value of zero present on the signal being measured.
The first rule is pretty obvious. If the signal being measured varies more than 0.5 LSB within the sample period, then you're not really conforming to the Nyquist-Shannon rules, and so the assumptions being made here are invalid. The second rule looks pretty sketchy, doesn't it? Since when did you actually welcome noise in a system?
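In practice the decimation step boils down to accumulating 4^W consecutive conversions and right-shifting the sum by W bits. Below is a minimal C sketch under that assumption; read_adc_sample() is a hypothetical placeholder for however raw 12-bit XADC conversions reach your code (for example over the DRP or an AXI interface), and here it merely simulates a steady input with roughly 1 LSB of dither so the sketch runs on its own.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of oversampling and decimation for W extra bits of
 * resolution. Hypothetical placeholder: read_adc_sample() simulates
 * a steady 12-bit conversion with crude +/-1 LSB dither. */
#define W 2                              /* extra bits of resolution wanted */
#define OVERSAMPLE_RATIO (1 << (2 * W))  /* 4^W conversions per output word */

static uint16_t read_adc_sample(void)
{
    const uint16_t code = 0x0800;        /* simulated 12-bit conversion       */
    return code + (rand() % 3) - 1;      /* crude +/-1 LSB dither (simulated) */
}

/* Accumulate 4^W consecutive conversions, then right-shift the sum by W:
 * one (12 + W)-bit result for every OVERSAMPLE_RATIO raw samples. */
static uint16_t oversampled_read(void)
{
    uint32_t acc = 0;
    for (int i = 0; i < OVERSAMPLE_RATIO; i++)
        acc += read_adc_sample();
    return (uint16_t)(acc >> W);         /* 14-bit result for W = 2 */
}

int main(void)
{
    printf("decimated 14-bit code: %u\n", oversampled_read());
    return 0;
}
```

Note that the averaging only buys real resolution when both rules above hold; average identical, noise-free codes and the extra bits carry no new information.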
Let's look at how this all works to see why we need the noise -- then we'll discuss the noise sources.