Two of my fellow bloggers here on All Programmable Planet -- Hamster and Jezzmo -- recently engaged in an interesting discussion as to why a -1 speed grade part was working when -3 speed grade timing was accidentally used in the tool.
To expound on this further, there are a number of factors (collectively referred to as "PVT") that can come into play with regard to timing in an FPGA, ASIC, or any other form of digital device. The first factor is process (P), which refers to the semiconductor process that is used to manufacture the device, as well as the process used for any interconnects in the system.
The second factor is voltage (V), which refers to the supply voltage powering the device. Due to various losses in the power bus nets as they wend their way through the system (boards, connectors, packages, and the silicon dice), each device -- and each logic element on each device -- may be presented with a slightly different voltage.
The third factor is temperature (T), which -- in this context -- refers to the junction temperature of the transistors in each logic element. Due to differences in heat transfer and power dissipation across the die, package, circuit board, etc., the junction temperature may be slightly different at each logic element on a die.
Static Timing Analysis (STA) is kind of a "worst-case" estimate of what your timing margin will be for a given speed grade of part at its maximum temperature and minimum supply voltage for every logic element in the device. STA takes the "max" (maximum) delay for each element along each path and adds them together to come up with an STA value for each path. The "max delay" number it uses is the slowest part in the bin for that speed grade, at the lowest core voltage, at the maximum operating temperature -- all quite conservative numbers.
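The summation STA performs can be sketched in a few lines of Python. All of the delay numbers and path names below are invented for illustration, not taken from any real timing library:

```python
# Illustrative sketch of STA's worst-case summation. Each path is a
# list of per-element *maximum* delays (ns): slowest process corner,
# lowest core voltage, highest junction temperature.

CLOCK_PERIOD_NS = 2.0  # a hypothetical 500 MHz target

paths = {
    "ctrl_fsm":  [0.35, 0.42, 0.30, 0.55],
    "datapath":  [0.48, 0.61, 0.52, 0.44],
    "fifo_full": [0.29, 0.33, 0.41],
}

for name, delays in paths.items():
    total = sum(delays)               # STA adds the max delays along the path
    slack = CLOCK_PERIOD_NS - total   # positive slack means timing is met
    print(f"{name}: delay {total:.2f} ns, slack {slack:+.2f} ns")
```

With these made-up numbers, the "datapath" path comes out slightly slower than the clock period, which is exactly the kind of worst-case failure that may never show up on a real, statistically average part.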
What this means in the real world is that one can often trade a lower maximum operating temperature for a little grace on the speed grade. One can also often gain a little more grace by using a better grade of voltage regulator. One gets a little more grace still from the worst-case nature of the STA estimate as compared to the statistical nature of the real parts. Before you know it, a 464MHz part (70°C) is running at 500MHz (25°C).
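To make the trade-off concrete, here is a hypothetical derating sketch. The coefficients (`k_temp`, `k_volt`) and the linear model are invented for illustration; real derating curves come from the device vendor's characterization data:

```python
# Hypothetical sketch: how cooler silicon and a well-regulated supply
# can buy back frequency over the worst-case Fmax. All coefficients
# are invented for illustration.

def derated_fmax(fmax_worst_mhz, t_junction_c, vdd_actual, *,
                 t_worst_c=70.0, vdd_min=0.97,
                 k_temp=0.0008, k_volt=1.2):
    """Scale a worst-case Fmax for better temperature and voltage.

    k_temp: fractional speedup per degree C below the worst-case junction temp
    k_volt: fractional speedup per volt above the minimum supply
    """
    temp_gain = 1.0 + k_temp * (t_worst_c - t_junction_c)
    volt_gain = 1.0 + k_volt * (vdd_actual - vdd_min)
    return fmax_worst_mhz * temp_gain * volt_gain

# A 464 MHz worst-case part, run cool (25°C) on a tight 1.00 V rail:
print(f"{derated_fmax(464.0, 25.0, 1.00):.0f} MHz")
```

With these made-up coefficients, the cool, well-powered part lands right around the 500 MHz mark, echoing the 464MHz-to-500MHz example above.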
In the case of safety-critical or mission-critical items, one probably wants to stick with the STA's worst-case numbers, but these can be a "cost adder" for consumer goods, RTL prototyping, and similar applications. Another thing to consider is the fact that the factory measures parts when they are new. With several thousand hours of operation, coupled with phenomena like electromigration affecting chip timing, one may want to use a little more margin for a design that must keep on operating for a while.
Statistical Design Methodology is where one performs one's own PVT tests on the exact design in use, and one determines the exact timing of the design via a combination of real-world measurements and statistical calculation. Some large corporations have these measurements as options in their timing libraries for standard design practices in their industries. One can use these numbers and/or a corporate set of formulas to determine timing other than worst-case.
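A simple way to see why the statistical approach recovers margin is to compare summing per-element worst cases against combining measured means and standard deviations. In the sketch below (all numbers hypothetical), independent per-element variation is assumed, so the standard deviations add in quadrature rather than linearly:

```python
import math

# Illustrative statistical path-delay estimate. Instead of summing
# per-element worst cases, combine measured per-element means and
# standard deviations (all numbers hypothetical) and take mean + 3 sigma.

elements = [  # (measured mean delay ns, measured std dev ns)
    (0.40, 0.02),
    (0.55, 0.03),
    (0.47, 0.02),
]

mean_total = sum(m for m, _ in elements)
# Assuming independent variation, sigmas add in quadrature (RSS):
sigma_total = math.sqrt(sum(s * s for _, s in elements))

stat_delay = mean_total + 3 * sigma_total            # statistical 3-sigma bound
worst_case = sum(m + 3 * s for m, s in elements)     # naive per-element worst case

print(f"statistical 3-sigma: {stat_delay:.3f} ns")
print(f"summed worst case:   {worst_case:.3f} ns")
```

The statistical bound always comes in tighter than the summed worst case, because it is vanishingly unlikely that every element on a path sits at its own 3-sigma extreme simultaneously.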
Have you ever used anything like this in any of your designs? Would it be of use to you?