As I outlined in a previous column, I am in the process of talking to some of the visionaries of the programmable logic industry to get their ideas on 1) Where have we been? 2) What are today's challenges? and 3) What will the future look like? This will provide the rest of us with a starting point for our own discussions, thoughts, and prognostications.
In this column, I'm starting a mini-series on the future of FPGAs and low power. We have already had some initial discussions on this topic, but a recent interview with Arif Rahman -- an IC Design Architect at Altera -- will allow us to get into more of the details.
Arif has years of experience with FPGA design and architecture, including the solving of power / speed / functionality trade-offs. Today's blog covers some of Arif's thoughts and our discussions on "Where have we been" with regard to FPGAs and low power. And, as usual, I have started a new message board for us to continue the discussion.
The growing importance of low power
Arif began our discussion with a quick overview of the growth in customer demands for more power-efficient FPGAs. Over the last several years, the requests from FPGA designers for lower power and better power efficiencies started to grow as their users increasingly required more portable and battery-operated equipment. Additionally, large server farms (for both storage and compute needs) found that operating power began to dominate the cost of ownership. With these large installations, a reduction in power by 10 percent could lower overall cost by a similar percentage. Other customers jumped on the low-power bandwagon as similar power concerns made their way into a variety of other applications. Low power was becoming a big deal.
Until around five years ago, the method used to address these power concerns was simply to lower the core FPGA voltage. This voltage scaling technique -- which is also used in ASICs and processors -- was sufficient to reduce FPGA operating power from one generation to the next. (Total FPGA operating power is made up of contributions from both leakage power [the power required to keep circuitry ready to 'work'] and dynamic power [the power consumed when signals do actual 'work'].)
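To see why voltage scaling was so effective, consider the standard first-order power formulas: dynamic power scales with the square of the core voltage, while leakage power scales roughly linearly with it. The sketch below is illustrative only -- the formulas are textbook approximations, and all of the numeric values (activity factor, capacitance, frequency, leakage current) are hypothetical, not taken from any Altera device:

```python
# Illustrative first-order model of FPGA operating power (textbook
# approximations, not any vendor's actual power model).

def dynamic_power(alpha, c_eff, v_core, freq):
    """Dynamic (switching) power: alpha * C * V^2 * f."""
    return alpha * c_eff * v_core ** 2 * freq

def leakage_power(v_core, i_leak):
    """Static (leakage) power: V * I_leak."""
    return v_core * i_leak

def total_power(alpha, c_eff, v_core, freq, i_leak):
    return dynamic_power(alpha, c_eff, v_core, freq) + leakage_power(v_core, i_leak)

# Hypothetical numbers, chosen only to show the trend: dropping the core
# voltage from 1.2 V to 1.0 V cuts dynamic power by (1.0/1.2)^2, about 31%,
# with a smaller, linear reduction in leakage.
p_nominal = total_power(alpha=0.15, c_eff=2e-9, v_core=1.2, freq=200e6, i_leak=0.05)
p_scaled = total_power(alpha=0.15, c_eff=2e-9, v_core=1.0, freq=200e6, i_leak=0.05)
print(f"nominal: {p_nominal:.3f} W, scaled: {p_scaled:.3f} W")
```

Because the dynamic term dominates at these operating points, even a modest voltage reduction delivers a large power saving -- which is exactly why voltage scaling served as the go-to technique for several process generations.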
As FPGAs grew in capacity and users required higher speeds, however, power requirements increased dramatically. Starting around 2006, it became clear that the voltage scaling approach was running into a wall. A whitepaper from Altera in 2007 summarized the key low power issues at the time. The transistor threshold voltage was beginning to limit the ability to reduce core voltages indefinitely. Although reducing oxide thickness could reduce threshold voltages, a smaller oxide thickness also increased leakage current, which in turn increased power dissipation. One way in which this could be addressed -- at least in part -- was to use multiple oxide thicknesses: one standard thin oxide for most transistors, another for I/O driver cells, and a third for memory and pass transistor cells. This multi-oxide approach allowed some optimization of power distribution within critical circuitry to control overall power dissipation.
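The leakage problem described above comes from the exponential dependence of subthreshold leakage on threshold voltage. The sketch below uses the standard first-order relation I_leak ∝ exp(-Vth / (n·Vt)) -- a generic device-physics approximation, not a formula from the Altera whitepaper -- with hypothetical threshold voltages chosen only to show why one oxide thickness cannot serve every transistor on the die:

```python
import math

# Illustrative subthreshold-leakage model (standard first-order formula):
# I_leak ~ I0 * exp(-Vth / (n * Vt)). All parameter values are hypothetical.

def subthreshold_leakage(vth, i0=1.0, n=1.5, vt=0.026):
    """Relative leakage current for a given threshold voltage (in volts)."""
    return i0 * math.exp(-vth / (n * vt))

# Lowering Vth (e.g. via a thinner gate oxide) speeds up transistors but
# raises their leakage exponentially. Assumed example thresholds:
low_vth = subthreshold_leakage(0.25)    # fast logic transistors
high_vth = subthreshold_leakage(0.45)   # memory / pass transistors
print(f"leakage ratio (low-Vth vs high-Vth): {low_vth / high_vth:.0f}x")
```

With these assumed numbers, the low-threshold device leaks more than a hundred times as much as the high-threshold one, which is the motivation for the multi-oxide approach: spend the leaky, fast transistors only where speed matters and use higher-threshold devices everywhere else.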