Wow! My head is buzzing. I just saw an article on EE Times by our very own Adam Taylor on the basics of using FPGAs to perform math operations. I was already familiar with the various number formats, but I must admit the formulas for calculating the number of bits required to represent a quantity, along with the accuracy calculations, were new to me. (Either that, or I had forgotten them -- the way things are these days, both possibilities are equally likely.)
The original version of this article appeared in issue 80 (third quarter 2012) of the Xilinx magazine Xcell Journal. This got me to wondering if Adam had written any other articles for that magazine, so I did a quick search and discovered that the little rascal is prolific! Check out the following Xcell Journal articles to see what I mean.
All I can say is "Give me strength!" I don't know how Adam finds the time. Anyway, check out his article on the basics of FPGA mathematics, and tell me what you think. Did you already know all this stuff, or was it as new to you as it was to me?
@Hamster, here is, I think, the simplest way to look at it.
When we humans do math, we think this way: -54 = -50 + -4
Whereas computers think this way: -54 = -60 + 6
To confirm this, let's look at a hexadecimal example: 0xFA = 0xF0 + 0x0A or, in decimal: -6 = -16 + 10
In other words, it is the sum of a signed MSB and an always-positive LSB offset.
So, when the DSP block multiplies, the partial products are:

AL*BL <-- product of two positive offsets
AH*BL <-- product of a signed MSB and a positive offset
BH*AL <-- product of a signed MSB and a positive offset
AH*BH <-- product of two signed MSBs
@David, you are correct... it was for illustration only, from a decimal perspective. Maybe it is clearer this way:
---------
We humans tend to think this way: -54 = -50 + -4
Whereas computers think this way: -54 = -60 + 6
Technically, a computer would more likely use "-64 + 10", but here I'm sticking with the decimal notation to illustrate the point, which is that the computer starts from a signed MSB base, and adds an always-positive offset.
@hamster: Here is some C code to multiply 4 x -4. It shows that when a negative number is multiplied by a positive one, the upper bits must be filled with 1's when the product width is extended. Propagating the sign this way also means that two negative numbers produce a positive product when the partial products are added.
int a = -4, b = 4, c = 0, d = 0, al, bl, au, bu, albl, aubl, albu, aubu;
c = a * b;                         // c == 0xfffffff0
al = a & 0x0001ffff;               // al == 0x0001fffc (17-bit positive offset)
au = (int)(a & 0xfffe0000) >> 16;  // au == 0xfffffffe (signed upper part)
bl = b & 0x0001ffff;               // bl == 0x00000004
bu = (int)(b & 0xfffe0000) >> 16;  // bu == 0x00000000
albl = al * bl;                    // albl == 0x0007fff0
aubl = au * bl;                    // aubl == 0xfffffff8
albu = al * bu;                    // albu == 0x00000000
aubu = au * bu;                    // aubu == 0x00000000 (note: multiply, not '&')
d = aubu + (aubl << 16) + (albu << 16) + albl;  // d == 0xfffffff0 == c
As was already pointed out (by rfindley, I think), negative numbers are formed by adding a positive offset to the most negative number, which is a 1 followed by however many 0's. That's not obvious from the usual recipe of complementing and adding 1 to form the two's complement.
Two interesting and slightly related things happened this week. We thought briefly about using Mathcad to generate code for some very simple filters, like a couple of biquads, using their automagic VHDL builder -- mainly because my boss didn't believe machine-generated code could be so nasty. Then we found out the cost, and my boss very quickly went off the idea. You are basically paying the equivalent of a junior engineer's salary for a year to produce code that wouldn't be acceptable if it were produced by a junior engineer.
And the online demo that Mathcad did for us didn't work, which made me laugh.
I was slightly disappointed by the lack of mention of the different types of multiplication result (high result, low result, and double-precision result). For fixed-point operations, using the high result is usually more appropriate than using the low result, and it can be cheaper than a double-precision result. (Normalized floating-point numbers use the high result. I would guess that FPGA DSP slices support generating the high result, though there might not be any reduction in resource use relative to double precision.)
> I would guess that FPGA DSP slices support generating the high result...
I think that the DSP48 slices are really poorly named (a marketing choice??). They should really be called MULT18x18AndLotsOfAdding. However, they are really flexible...
I think of them as multiplying in base 2^17 (plus sign makes 18).
Just as multiplying two 3-digit numbers by hand requires nine single-digit multiplications and lots of addition, multiplying two 35-bit numbers every cycle requires four DSP48 blocks...
So performing a pipelined 64-bit multiply every cycle requires 16 DSP48s...
@Paul, sorry I did not mention them; it is always difficult to decide what to include and what to leave out in these articles. Most FPGA designs do not use floating-point numbers because of the complexity, hence my focus on the basics of fixed-point multiplication.
Maybe I could do a follow-on blog here; I will talk to Max about it.
Adam: Why not floating point? It doesn't make things that difficult, does it? I used it once and it wasn't all that complex -- but then I really like this stuff, so I might not have appreciated the complexity.
@geekyasa: I think there used to be a lot of overhead in doing floating-point in FPGAs, but more recently I've seen it used a lot -- let me chat to Adam about this -- maybe he will write some blogs on it for us...
Adam Taylor wrote: "it is always difficult to determine what to include and what not to in these articles."
If teaching was easy, anyone could do it well. Having done a very small amount of informal teaching, I have some appreciation of some of the difficulty. For educational purposes, the shotgun approach--scattered coverage with limited coherence--tends to be less effective, so you are quite right in constraining what subtopics are covered.
I liked how the article went from concepts to application to implementation. You just need to work on discerning what I want--note reading my mind will not work because even I do not know what I want--and writing perfect articles targeted to my wants. :-)
By the way, I wonder if APP should set up a hypertext knowledge base (possibly using a wiki). Sometimes a narrative style of presenting information is appropriate (blogs, articles, books) and sometimes an information web is more useful.
@Paul: "and sometimes an information web is more useful" I agree. The APP Blog moves "fast and furious" and is fun. However, sometimes an idea or technology discussion emerges that needs more time to be developed by those interested in the subject. Unfortunately, the subject closes because it is superceded by the "latest and greatest" blog. We're moving too fast, IMHO (I just learned what that means!)
thanks, numeric_std is the only way to do FPGA math ;)
I was planning another blog here on the fixed and float packages; as you say, they are very interesting. I did some experiments with them many years ago (maybe 2007/2008-ish) and the results were interesting. I plan to re-run the experiments using modern devices and tool chains and blog about it.