Shyam described another technique that many high-performance processors use, called "voltage scaling." The dynamic power dissipated by a device scales roughly with the square of the core voltage applied, so even a modest drop in voltage yields a significant drop in power dissipation. Unfortunately, the maximum clock rate also falls as the voltage drops. But if the application can get away with less performance (perhaps because it is processing less data than the maximum rate), then a voltage drop can save a considerable amount of power. Having several voltage steps available for key devices to scale power and performance can dramatically improve power efficiency. This is particularly true for "bursty" systems such as networking, storage, and communications. Server farms are a prime example, where voltage scaling can save millions of dollars on energy bills. With smaller installations (perhaps a single board or two), similar techniques may seem to make little difference, but if a small system sees a production run of many thousands of units, the aggregate savings can be very large indeed.
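To make the savings concrete, here is a minimal sketch of the arithmetic behind voltage scaling, using the standard first-order model for dynamic CMOS power, P = C·V²·f. The capacitance, voltage, and frequency values below are illustrative assumptions, not figures from any particular device:

```python
# Illustrative dynamic-power estimate for voltage/frequency scaling.
# First-order model for dynamic CMOS power: P = C_eff * V^2 * f,
# where C_eff is the effective switched capacitance.
# All constants below are assumed values chosen for illustration.

def dynamic_power(c_eff_farads, voltage_v, freq_hz):
    """Approximate dynamic power in watts: P = C * V^2 * f."""
    return c_eff_farads * voltage_v ** 2 * freq_hz

C_EFF = 1e-9  # 1 nF effective switched capacitance (assumed)

# Hypothetical operating points: full speed vs. a scaled-back mode.
full_speed = dynamic_power(C_EFF, 1.2, 1.0e9)   # 1.2 V core at 1 GHz
scaled     = dynamic_power(C_EFF, 0.9, 0.6e9)   # 0.9 V core at 600 MHz

print(f"full speed: {full_speed:.3f} W")
print(f"scaled:     {scaled:.3f} W")
print(f"savings:    {100 * (1 - scaled / full_speed):.0f}%")
```

Because voltage enters the equation squared, cutting both voltage and clock rate in this example trades a 40% performance reduction for roughly a two-thirds power reduction, which is exactly why a few extra voltage steps pay off so well in bursty workloads.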
So expect the push for better power efficiency to add more layers to typical power management functions. Standards for controlling the core voltage and for putting portions of a chip or system into low-power modes will eventually simplify the power management process, but you can bet programmable power management devices will stay at the heart of efficient power control systems for years to come.
Don't forget to check out Shyam's book, Power 2 You: A Guide to Power Supply Management and Control, which is available in English, Chinese, and Japanese! This book provides detailed design descriptions for many key features in current power subsystem designs.
Where do you think the evolution of programmable power management devices is heading? Are there other key features that future designs will need? What breakthroughs will make it easier to control overall system power and make every design more power efficient? Please post your comments and questions here.