When putting together a virtual prototype that can be used for software development, the principal requirements are that it executes fast (really, really fast, comparable to real-time performance) and that it is capable of consuming the same code (in fact, the same binaries and machine code) that would be executed on the final product.
In order to satisfy the first requirement, the model has to be stripped down as much as possible, but not so far that the second requirement cannot be met. What is removed is timing, which is exactly what should not be present in the input description for high-level synthesis anyway. That means we can build the virtual prototype out of the computation blocks, coupled together with transaction-level interfaces. These are ideal, point-to-point interfaces. It is also possible to create transaction-level interfaces that do take into account the limitations of a bus, but they are still not timing-accurate.
This creates a model that can be used for system-level verification and validation in a number of different ways (I will not get into those at this time). But how are the computation blocks themselves to be verified? For that, we still need to create a testbench. (The system-level model could be used as a testbench, but this is not very efficient.) Luckily, methodologies such as the UVM (Universal Verification Methodology) have separated the pin interface from the transactions passed around within the testbench itself. This means it is possible to use essentially the same transaction-level interface to connect the testbench and the computation block.
But this computation IP alone is not sufficient for synthesis. Synthesis also has to create the pin-level interface that will enable the block to be integrated into an RTL model. There are various ways in which this can be achieved. One is to extend the model so as to provide the necessary communication. However, given the regularity of transaction-level interfaces (think TLM-2.0 or OCP) and of the communication fabrics actually in use (AMBA, CoreConnect...), it is more likely that the high-level synthesis vendor will make a library of these blocks available.
When this communication block is added, the model becomes more accurate because the interfaces are now constrained by reality, and this can affect the performance of the complete system and the way in which the computation block operates. In general, the communication block is a control-dominated design rather than a dataflow problem. This used to be a weak spot for high-level synthesis, but that is no longer the case.
In my next blog, I will continue to consider this aspect of high-level synthesis along with the rest of the verification flow.