I just received an email from my old chum Dave Strenski, who is an application analyst at Cray Inc. (the supercomputer company).
As an aside, did you ever see that Homebrew Cray Supercomputer project, in which a young lad called Chris Fenton replicated the functionality of a Cray 1 supercomputer using a Xilinx Spartan-3E 1600 development board? This little beauty is a 1/10-scale, binary-compatible, cycle-accurate Cray-1 implemented in a single FPGA!
Cray 1 implemented using a Xilinx Spartan-3E 1600 development board.
In the above image, the real Cray 1 is shown on the right, while Chris's version -- which is only a few inches tall -- is displayed on the left. The original Cray ran at 80MHz and could be configured with up to a whopping 32 megabytes of memory (wow)! Actually, it's easy to laugh now, but we have to remember that this was absolutely the state-of-the-art at the time (by comparison, today you can get an Android cellphone with a 1GHz processor and gigabytes of memory... it's a funny old world).
Chris's version was implemented on a Xilinx Spartan-3E 1600 development board. Chris says that his Cray occupies about 75 percent of the logic resources and consumes all of the on-chip RAM. The result is a spiffy Cray-1A running at about 33MHz with about 4 kilowords of RAM. As Chris says, "Now Computer Engineer Barbie has an appropriate place to sit down!"
But we digress...
As I say, I just heard from Dave Strenski, who tells me that he’s written an update to his original FPGA Floating Point performance article from a couple of years ago. In fact, this is the fourth article in a series:
As another aside, Dave also plays around with solar power as a hobby, and Google has made a video about his local solar project:
But wait, there’s more, because Dave says that he and his cohorts have invented a free way to read utility meters. Now I'm wondering what he does in his spare time...
All of this does raise an interesting question. Not so long ago, we wouldn’t have dreamed of using floating-point representations in FPGAs, because the power-guzzling result would have been horribly inefficient. Now, with thousands of hard-core multipliers at our disposal, all sorts of tricks and techniques we can employ, and the ability to use programmable fabric to implement algorithms in a massively parallel fashion, FPGAs are finding their way into all sorts of floating-point-intensive applications, such as radar, for example.
So, which numerical formats are you using in your FPGA designs: integer, fixed-point, floating-point, binary coded decimal (BCD), a mixture of these, or something else?
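For anyone weighing fixed-point against floating-point, here's a minimal Python sketch of how fixed-point multiplication works. (The Q8.8 format -- 8 integer bits, 8 fractional bits -- and all the function names are illustrative assumptions, not anything from a particular vendor's toolchain.)

```python
FRAC_BITS = 8  # Q8.8 format: 8 integer bits, 8 fractional bits (illustrative choice)

def to_fixed(x):
    """Convert a real number to Q8.8 fixed point (round to nearest)."""
    return int(round(x * (1 << FRAC_BITS)))

def to_float(f):
    """Convert a Q8.8 fixed-point value back to a real number."""
    return f / (1 << FRAC_BITS)

def fixed_mul(a, b):
    """Multiply two Q8.8 values.

    The raw product carries 16 fractional bits, so we shift right
    by FRAC_BITS to bring it back to Q8.8.
    """
    return (a * b) >> FRAC_BITS

a, b = to_fixed(3.25), to_fixed(1.5)
print(to_float(fixed_mul(a, b)))  # → 4.875
```

The appeal in an FPGA is that this is just an integer multiply and a shift -- no exponent alignment, normalization, or rounding logic -- which is exactly why fixed-point dominated before hard-core multipliers and floating-point IP made the alternative practical.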
Hi Warren -- re your comment "It would be nice to see a GPU comparison to FPGAs in the HPC space." You are correct -- it would be nice to see such a comparison -- so when do you think you will have it ready to share with the rest of us? (Grin)
Hi William -- I have to say that I love Gray Codes. I always thought these were relatively simple, but I keep on finding hidden layers to them, like implementing Gray Codes in bases other than 2, or implementing Gray Codes that are truly Gray but have count sequences shorter than 2^n (like a 4-bit Gray Code that cycles through only 14, or 12, or 10 states)...
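For readers who haven't played with them, here's a quick Python sketch of the classic binary-reflected Gray code -- the standard 2^n variety, not the shortened cycles mentioned above -- along with a check of the defining property that adjacent codes differ in exactly one bit. (The helper names are my own, just for illustration.)

```python
def gray_code(n):
    """Return the 2**n binary-reflected Gray code sequence.

    The classic trick: the i-th Gray code is i XOR (i >> 1).
    """
    return [i ^ (i >> 1) for i in range(2 ** n)]

def hamming_distance(a, b):
    """Count the bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

seq = gray_code(4)

# Every adjacent pair -- including the wrap-around from last back to
# first -- differs in exactly one bit, which is what makes Gray codes
# so handy for things like clock-domain-crossing counters in FPGAs.
assert all(hamming_distance(seq[i], seq[(i + 1) % len(seq)]) == 1
           for i in range(len(seq)))

print([format(g, "04b") for g in seq[:4]])  # → ['0000', '0001', '0011', '0010']
```

The one-bit-change guarantee is exactly why these sequences get so much use in hardware: only a single flip-flop ever changes per step, so a sampled value is never wildly wrong.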
In the not-so-distant past, you really didn't think about using floating-point math in FPGAs. All that has changed over the last couple of years -- now FPGAs can be used to do mind-boggling amounts of floating-point math -- I will have to discuss this in more detail in the future.
I remember a few years ago when it looked like double precision floating point was the holy grail for FPGA designs. For many numerical algorithms DP was a must have to get correct results. Getting them fast was also important (one example was in financial trading where a split second made the difference between a profit or a loss). Now it seems like GPUs are a likely candidate for these applications too. It would be nice to see a GPU comparison to FPGAs in the HPC space.
That little Cray is cool. I guess we could have controlled the Apollo capsule with an iPhone, with speed left over to listen to mp3's on the trip. In my work with programmable logic controllers for industrial control, I can say that there is no substitute for floating-point math. It's a pain to have to use integer math when the real world doesn't.