In my previous blog, I talked about verification in the context of ASICs and the ways in which those methodologies may, or may not, be applicable to FPGAs. As was to be expected, I received some comments about bugs being "free" in an FPGA, thereby resulting in there being less of an incentive to find them all before "fabricating" the device. Of course, the product still has to be of a similar quality when it gets shipped, or your reputation may suffer. All of which raises the question -- how do you know when you have performed sufficient verification -- even with an FPGA?
In the ASIC world they turn to coverage. Anyone who knows me knows that I am not a big fan of functional coverage as defined in the ASIC domain, but let's take a quick step back in time and look at where this came from. When everyone was performing directed testing and designs were much, much simpler, there was no such thing as functional coverage. (In the software world they still have no such equivalent -- at least none that I am aware of.) Tests were developed that would target certain functionality, typically functionality that was directly defined in the requirements document. These were end-to-end tests that looked for well-defined outcomes and that were representative of the operations a typical user might perform.
When the team had created a significant number of these tests, they would employ code coverage techniques to ascertain which pieces of the code had not been exercised. In this context, "code" refers to both high-level programming languages in the software world and whatever languages you are using to capture your design intent in the hardware world, including Verilog and VHDL.
Code coverage comes in many different flavors, including line, statement, branch, expression, and many more. Basically, any syntactic unit in the code can be singled out for coverage. This worked fairly well, because it was completely orthogonal to the way in which the tests were created.
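As a loose analogy, this is what line coverage amounts to in any language: record which lines actually execute when the tests run, then look for what was never hit. The sketch below is a toy Python illustration (the `clamp` function and the tracer are hypothetical, not from any real coverage tool) -- a single test exercises only the fall-through path, and the two early-return lines are never executed:

```python
import sys

hit_lines = set()

def tracer(frame, event, arg):
    # Record every line executed inside clamp()
    if event == "line" and frame.f_code.co_name == "clamp":
        hit_lines.add(frame.f_lineno)
    return tracer

def clamp(x, lo, hi):
    if x < lo:
        return lo      # never executed by the test below
    if x > hi:
        return hi      # never executed by the test below
    return x

sys.settrace(tracer)
clamp(5, 0, 10)        # a single "directed test": only the pass-through path
sys.settrace(None)

# Only three lines ran: the two if-conditions and the final return.
```

Real tools (gcov, coverage.py, simulator code-coverage engines) do essentially this bookkeeping, just far more efficiently, and report the unexercised lines, branches, and expressions back to you.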
When is enough, enough?
Unfortunately, the way in which early directed tests were written was not particularly good. The reference code was distributed among the tests, as were the checkers. There was no modularity and no structure, and maintenance of those tests was time consuming. There was another problem: even if every syntactic element of the code was exercised, bugs could still be lurking in it. Detecting certain bugs requires path coverage, which genuinely does cover all possible behaviors -- but path coverage is intractable.
In order to address this, someone had the bright idea to utilize randomization to help with verification (does anyone know who the originator of this idea was?). The theory was that spending additional time developing a modular testbench and having a generator develop the tests was more efficient than developing individual tests. The problem is that each of those generated tests performs an unknown action and tests arbitrary functionality. How could you tell whether a test covered unique capabilities of a design, or whether it was a duplicate of one already run? The answer to this was functional coverage.
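The generation side of that idea is straightforward enough to sketch. Here is a toy constrained-random stimulus generator in Python (the packet fields, port count, and the broadcast constraint are all hypothetical, chosen purely for illustration) -- each run produces legal but arbitrary packets, which is exactly why you then cannot say what any given test actually covered:

```python
import random

PKT_TYPES = ["unicast", "multicast", "broadcast"]  # hypothetical packet kinds
NUM_PORTS = 4

def random_packet(rng):
    # Constrained-random stimulus: random fields, subject to legality rules
    pkt = {
        "type": rng.choice(PKT_TYPES),
        "src": rng.randrange(NUM_PORTS),
        "dst": rng.randrange(NUM_PORTS),
        "length": rng.randint(64, 1500),
    }
    # Example constraint: a broadcast packet has no single destination port
    if pkt["type"] == "broadcast":
        pkt["dst"] = None
    return pkt

rng = random.Random(42)  # seeded, so the "test" is at least reproducible
stimulus = [random_packet(rng) for _ in range(1000)]
```

A thousand packets, all legal -- but without some record of what they exercised, run 1001 may be doing nothing that run 1 did not already do.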
However, rather than finding a way to derive functional coverage from an executable requirements document, a new format and mechanism was defined, and thus coverage had to be modeled. This is based on arbitrary observations from the design that indicate that certain aspects of a design have been exercised. For example (and this is an example often used within the industry), a data packet switch may have several inputs, several outputs, and several types of packet. If each packet type has been seen on each input, and each packet type has been seen on each output, then there is reasonable confidence that the device has been adequately verified. Of course, not all combinations may be possible, and thus exclusions and dependencies have to be defined, which means that -- before you know it -- this coverage model is as complex as the design model and the reference model. It also does not verify that actual results checking occurred, or was even possible, based on the recorded coverage events.
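In SystemVerilog this modeling is done with covergroups and cross coverage; the mechanics of the packet-switch example reduce to bookkeeping like the following toy Python sketch (packet types, port count, and the exclusion are hypothetical). Note what it does and does not tell you: which bins were observed, but nothing about whether the packets came out the right ports:

```python
from itertools import product

PKT_TYPES = ["unicast", "multicast", "broadcast"]  # hypothetical
PORTS = [0, 1, 2, 3]

# Cross-coverage bins: every (packet type, input port) pair,
# minus an exclusion -- say broadcast is illegal on port 3
excluded = {("broadcast", 3)}
bins = {b: 0 for b in product(PKT_TYPES, PORTS) if b not in excluded}

def sample(pkt_type, port):
    # Record an observation if it maps onto a legal bin
    key = (pkt_type, port)
    if key in bins:
        bins[key] += 1

# In a real testbench these samples come from monitors on the interfaces
for pkt_type, port in [("unicast", 0), ("unicast", 1), ("multicast", 0)]:
    sample(pkt_type, port)

hit = sum(1 for count in bins.values() if count > 0)
coverage = 100.0 * hit / len(bins)  # 3 of 11 legal bins hit
```

Even at 100% on this model, all you know is that the combinations occurred -- not that a checker ever compared the outputs against expected results.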
In conclusion, if we assume that an ad-hoc verification methodology is to be performed on an FPGA design, how do we know when we have performed sufficient verification? What do you use today? Is it enough to just run the production software and see if it works or crashes, or do you employ more sophisticated verification?