In my previous blog, we examined the way computation blocks can be put together to create a virtual prototype that can verify system-level functionality. Software development teams often use such prototypes to help with early debug and integration of their code.
But even if we assume that this single description of block functionality could be used unchanged for both the virtual prototype and for transaction-level synthesis (which is not always the case), the block would still not be ready for synthesis. It must know the context of the design into which it will be integrated. Thus, in addition to computation blocks, transaction-level synthesis requires communication blocks.
The separation of these two pieces is beneficial in that it enables a single computation block to target multiple implementations without having to change the core. For example, a block may be attached to one design using an AXI bus, and in another design, it may utilize CoreConnect.
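The separation can be sketched in plain C++: the computation core talks only to an abstract transport interface, and each target fabric supplies its own adapter behind that interface. All names here (`BusAdapter`, `Accumulator`, `FlatMemoryAdapter`) are illustrative, not from any real standard or vendor library; a real AXI or CoreConnect adapter would implement the same interface against the actual protocol signals.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical bus-agnostic transport interface (illustrative names).
struct BusAdapter {
    virtual ~BusAdapter() = default;
    virtual uint32_t read(uint32_t addr) = 0;
    virtual void write(uint32_t addr, uint32_t data) = 0;
};

// The computation block sees only the abstract interface, so the core
// is unchanged whether it is later bound to AXI, CoreConnect, or a
// simple test fixture.
struct Accumulator {
    explicit Accumulator(BusAdapter& bus) : bus_(bus) {}
    // Sum `count` words starting at `base`, write the result to `dst`.
    void run(uint32_t base, uint32_t count, uint32_t dst) {
        uint32_t sum = 0;
        for (uint32_t i = 0; i < count; ++i)
            sum += bus_.read(base + i);
        bus_.write(dst, sum);
    }
private:
    BusAdapter& bus_;
};

// One possible binding: a flat memory standing in for a real fabric.
struct FlatMemoryAdapter : BusAdapter {
    std::vector<uint32_t> mem = std::vector<uint32_t>(256, 0);
    uint32_t read(uint32_t addr) override { return mem.at(addr); }
    void write(uint32_t addr, uint32_t data) override { mem.at(addr) = data; }
};
```

Retargeting the block to another design then means swapping the adapter, not touching the core.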
The most important change when computation and communication blocks are brought together is that connections between computation blocks may no longer be point-to-point. Instead, they may use shared fabrics that make it harder for a block to perform transfers in an ideal manner. This has to be taken into account when the block is synthesized, because it can alter the optimizations that are considered: there is no point in having the block process data faster than it can realistically access that data, or faster than any results can be transferred out of the block.
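The reasoning above is just a min() over the bottlenecks, which a back-of-envelope model makes explicit. The function and its units (words per cycle) are my own illustration, not part of any synthesis tool:

```cpp
#include <algorithm>

// Back-of-envelope model: the sustainable processing rate of a block is
// bounded by the slowest of its raw compute rate and the input/output
// bandwidth the fabric can actually deliver (all in words per cycle).
// Illustrative only; real tools model contention far more precisely.
double sustainable_rate(double compute, double in_bw, double out_bw) {
    return std::min({compute, in_bw, out_bw});
}
```

A core that can consume 4 words per cycle behind a fabric that delivers 1 is, for optimization purposes, a 1-word-per-cycle core, so synthesizing the faster datapath would waste area.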
Could we create a communication block at an abstract level? Well, we could in theory, but one of the main things transaction-level synthesis does is to provide timing, and the timing of many communication schemes is fixed. This is a primary reason for the slow adoption of early transaction-level synthesis tools: they could not handle a description in which timing was fully or partially defined, nor deal with its impact on the rest of the design. Without this, it was left to the integrator to take this large lump of automatically created logic and integrate it with the rest of the design.
So how is the communication block defined? Ideally, it uses a transaction-level interface on one side and a pin-level interface on the other. In this manner, a library of these blocks can be made available and used in a mix-and-match fashion. If you are creating the computation block, this approach can provide a quick and easy way to get the protocol defined.
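Such a block is essentially a transactor: a whole transaction goes in on one side, and a cycle-by-cycle pin protocol comes out the other. Here is a minimal sketch, assuming an invented valid/ready write handshake (the signal and class names are hypothetical, not a real bus protocol):

```cpp
#include <cstdint>

// Pin-level side of a hypothetical write-only bus.
struct Pins {
    bool     valid = false;  // driven by the master
    uint32_t addr  = 0;
    uint32_t data  = 0;
    bool     ready = false;  // driven by the slave
};

// Communication block: accepts a complete write transaction on one side
// and serializes it onto the pins, one handshake per clock tick.
struct WriteTransactor {
    explicit WriteTransactor(Pins& pins) : pins_(pins) {}

    // Transaction-level entry point.
    void start_write(uint32_t addr, uint32_t data) {
        pins_.valid = true;
        pins_.addr  = addr;
        pins_.data  = data;
    }
    // Advance the pin-level protocol by one clock edge; returns true
    // when the slave has accepted the transfer.
    bool tick() {
        if (pins_.valid && pins_.ready) {
            pins_.valid = false;  // handshake done, release the bus
            return true;
        }
        return false;
    }
private:
    Pins& pins_;
};
```

A library of such transactors, one per protocol, is exactly what lets the computation blocks stay protocol-agnostic.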
In many cases, a generic interface will not provide the optimal solution. Thus, the developer of the intellectual property, who supplies both the computation and communication blocks, will often design them in a pre-integrated manner.
Unfortunately, this interface is not yet standardized. TLM 2.0 is often used as an interface for transaction-level models in a virtual prototype, but it is not synthesizable. TLM 1.0 is synthesizable, but it does not have the flexibility required in many cases. Thus, vendors are forced to take a proprietary and hybrid approach to this interface, which tends to be based on TLM 1.0 with some extensions from TLM 2.0. Hopefully, a synthesizable version of this standard is in the works.
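The shape of such a hybrid can be mimicked in plain C++: a TLM-1.0-style blocking interface method that carries a cut-down, TLM-2.0-style generic payload. To be clear, this is not the real SystemC/IEEE 1666 code and not any vendor's actual API, just a sketch of the pattern:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

enum class Command { READ, WRITE };

// Cut-down analogue of a TLM-2.0-style generic payload (illustrative).
struct GenericPayload {
    Command              cmd  = Command::READ;
    uint64_t             addr = 0;
    std::vector<uint8_t> data;
    bool                 ok   = false;  // response status
};

// TLM-1.0-style blocking interface method (illustrative).
struct BlockingTransportIf {
    virtual ~BlockingTransportIf() = default;
    virtual void transport(GenericPayload& trans) = 0;
};

// A trivial target implementing the interface, for illustration only.
struct SimpleMemory : BlockingTransportIf {
    std::vector<uint8_t> mem = std::vector<uint8_t>(64, 0);
    void transport(GenericPayload& t) override {
        for (size_t i = 0; i < t.data.size(); ++i) {
            if (t.cmd == Command::WRITE) mem[t.addr + i] = t.data[i];
            else                         t.data[i] = mem[t.addr + i];
        }
        t.ok = true;
    }
};
```

The blocking call keeps the description simple enough to synthesize, while the structured payload recovers some of the flexibility of the richer standard.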
In my next column, we will continue looking at verification and how we can perform pre- and post-synthesis verification. In the meantime, why do we always create interfaces, languages, and standards for verification without defining the semantics suitable for synthesis? Why don't we start with the synthesizable language and add verification extensions? This would make life so much easier!