Wow! My head is buzzing. I just saw an article on EE Times by our very own Adam Taylor on the basics of using FPGAs to perform math operations. I was already familiar with the various number formats, but I must admit the formulas for calculating the number of bits required to represent a quantity, along with the accuracy calculations, were new to me. (Either that, or I had forgotten them -- the way things are these days, both possibilities are equally likely.)
The original version of this article appeared in issue 80 (third quarter 2012) of the Xilinx magazine Xcell Journal. This got me to wondering if Adam had written any other articles for that magazine, so I did a quick search and discovered that the little rascal is prolific! Check out the following Xcell Journal articles to see what I mean.
All I can say is "Give me strength!" I don't know how Adam finds the time. Anyway, check out his article on the basics of FPGA mathematics, and tell me what you think. Did you already know all this stuff, or was it as new to you as it was to me?
Two interesting and slightly related things this week. We thought briefly about using Mathcad to generate code for some very simple filters, like a couple of biquads, using its automagic VHDL builder -- mainly because my boss didn't believe machine-generated code could be so nasty. Then we found out the cost, and my boss very quickly went off the idea. You are basically paying the equivalent of a junior engineer's salary for a year to produce code that wouldn't be acceptable if it were produced by a junior engineer.
And the online demo that Mathcad ran for us didn't work, which made me laugh.
@David, you are correct... it was for illustration only, from a decimal perspective. Maybe it is clearer this way:
---------
We humans tend to think this way:
  -54 = -50 + -4
Whereas computers think this way:
  -54 = -60 + 6
Technically, a computer would more likely use "-64 + 10", but here I'm sticking with the decimal notation to illustrate the point, which is that the computer starts from a signed MSB base, and adds an always-positive offset.
@Paul: "and sometimes an information web is more useful" I agree. The APP Blog moves "fast and furious" and is fun. However, sometimes an idea or technology discussion emerges that needs more time to be developed by those interested in the subject. Unfortunately, the subject closes because it is superseded by the "latest and greatest" blog. We're moving too fast, IMHO (I just learned what that means!)
Adam Taylor wrote: "it is always difficult to determine what to include and what not to in these articles."
If teaching were easy, anyone could do it well. Having done a very small amount of informal teaching, I have some appreciation of the difficulty. For educational purposes, the shotgun approach--scattered coverage with limited coherence--tends to be less effective, so you are quite right to constrain which subtopics are covered.
I liked how the article went from concepts to application to implementation. You just need to work on discerning what I want--note reading my mind will not work because even I do not know what I want--and writing perfect articles targeted to my wants. :-)
By the way, I wonder if APP should set up a hypertext knowledge base (possibly using a wiki). Sometimes a narrative style of presenting information is appropriate (blogs, articles, books) and sometimes an information web is more useful.
@hamster: Here is some C code that multiplies 4 x -4. It shows that when a negative number is multiplied by a positive one and the product width is extended, the extended left bits must be 1's. Propagating the sign this way also works when both operands are negative: the partial products sum to a positive result.
int a = -4, b = 4, c, d;
int al, bl, au, bu, albl, aubl, albu, aubu;

c = a * b;                          // c == 0xfffffff0 (-16)

al = a & 0x0001ffff;                // low 17 bits:   al == 0x0001fffc
au = (int)(a & 0xfffe0000) >> 16;   // sign-extended: au == 0xfffffffe
bl = b & 0x0001ffff;                // bl == 0x00000004
bu = (int)(b & 0xfffe0000) >> 16;   // bu == 0x00000000

albl = al * bl;                     // albl == 0x0007fff0
aubl = au * bl;                     // aubl == 0xfffffff8
albu = al * bu;                     // albu == 0x00000000
aubu = au * bu;                     // aubu == 0x00000000 (was "au & bu" -- a typo;
                                    // its term belongs at << 32, which wraps to
                                    // zero in 32-bit arithmetic anyway)

d = (aubl << 16) + (albu << 16) + albl;   // d == 0xfffffff0 .. matches c
As was already pointed out (rfindley, I think), negative numbers are formed by adding a positive offset to the most negative number, which is a 1 followed by however many 0's. That is not obvious when you form the 2's complement by inverting the bits and adding 1.