In my previous blog, I promised to talk about a formalism that is used to analyze the way in which functional blocks talk to each other. That model of computation is called Kahn Process Networks (KPN).
Fundamentally, for a given input stream, a process in a KPN will always produce exactly the same output stream: it is thus a function from input streams to output streams. Also, all communication channels in a KPN are unbounded and lossless FIFO channels. This means that all write operations are non-blocking and that all read operations on an empty channel will block (stall) until a value is written to the channel. KPNs have the desirable property of having a uniquely defined behavior, which depends neither on the timing characteristics of its processes and communication channels nor on their scheduling. For example, a KPN will produce identical results whether its processes are executed sequentially on a single processor or in parallel on multiple processors and/or hardware accelerators. This model of computation is therefore well-suited for the specification, hardware/software co-design, and design space exploration of embedded systems.
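To make this concrete, here is a minimal sketch of a two-process KPN in Python. The mapping is illustrative, not a real KPN framework: each process is a thread, each channel is an unbounded `queue.Queue` (so `put` never blocks and `get` blocks on an empty channel), and `None` is used as an ad-hoc end-of-stream sentinel. Because each process only communicates through blocking reads and writes on its channels, the output is the same no matter how the threads are scheduled:

```python
import queue
import threading

def scale(inp: queue.Queue, out: queue.Queue, factor: int) -> None:
    """A KPN process: blocking reads, non-blocking writes, no shared state."""
    while True:
        x = inp.get()           # blocks until a token is available on the channel
        if x is None:           # propagate the end-of-stream sentinel and stop
            out.put(None)
            return
        out.put(x * factor)     # an unbounded queue never blocks on put

# Wire two processes into a pipeline: multiply by 2, then by 10.
a, b, c = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=scale, args=(a, b, 2)).start()
threading.Thread(target=scale, args=(b, c, 10)).start()

for token in [1, 2, 3, None]:
    a.put(token)

result = []
while (tok := c.get()) is not None:
    result.append(tok)
print(result)  # [20, 40, 60], regardless of thread scheduling
```

Running this repeatedly always yields the same output stream, which is exactly the determinism property described above.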
KPNs do, however, have practical limitations. First, they assume infinite-length (unbounded) FIFO channels, whereas only finite-length channels are available in a real-world implementation, whether those channels are implemented as dedicated hardware FIFOs or through a shared memory structure. Practical solutions rely on flow control, which was discussed in the comments on the previous blog. One common scheme is for the sender to transmit a given number of bytes, or a packet, over the channel and then wait for an acknowledgment from the receiver before sending further data; the receiver, in turn, sends that acknowledgment once it has read the data. This flow control ensures that the number of bytes that must be stored in a channel is always bounded.
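The acknowledgment scheme can be sketched as follows, again using Python threads and queues as stand-ins for processes and channels. The window size and process names are illustrative assumptions: the sender transmits at most `WINDOW` unacknowledged tokens before blocking on the acknowledgment channel, so the data channel's occupancy stays bounded even though the queue itself is unbounded:

```python
import queue
import threading

WINDOW = 4  # illustrative: at most WINDOW unacknowledged tokens in flight

def sender(data, chan: queue.Queue, ack: queue.Queue) -> None:
    for i, token in enumerate(data):
        chan.put(token)
        if (i + 1) % WINDOW == 0:
            ack.get()              # block until the receiver acknowledges the window
    chan.put(None)                 # end-of-stream sentinel

def receiver(chan: queue.Queue, ack: queue.Queue, sink: list) -> None:
    count = 0
    while (token := chan.get()) is not None:
        sink.append(token)
        count += 1
        if count % WINDOW == 0:
            ack.put(True)          # grant the sender the next window

chan, ack, sink = queue.Queue(), queue.Queue(), []
t1 = threading.Thread(target=sender, args=(range(10), chan, ack))
t2 = threading.Thread(target=receiver, args=(chan, ack, sink))
t1.start(); t2.start(); t1.join(); t2.join()
print(sink)  # [0, 1, 2, ..., 9]
```

The flow control never changes which tokens are delivered, only when the sender is allowed to proceed, so the channel never holds more than `WINDOW` data tokens plus the sentinel.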
A second limitation of KPNs is that, because the model lacks non-blocking read operations, it cannot express several concepts that are important in real-time processing, such as timeouts, polling, race conditions, or real-time constraints. Therefore, implementations of KPNs for hardware/software co-design extend the KPN model of computation by allowing non-blocking reads on a channel (polling). These non-blocking reads can be used to model non-deterministic input events and accesses to time-sensitive input/output peripherals.
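In the queue-based sketch used above, a non-blocking read is one line of standard library code. Note how, unlike a blocking `get`, the answer now depends on *when* the read happens relative to the writer, which is precisely the non-determinism this extension introduces:

```python
import queue

def poll(chan: queue.Queue):
    """Non-blocking read: return a token if one is available, else None."""
    try:
        return chan.get_nowait()   # raises queue.Empty instead of blocking
    except queue.Empty:
        return None                # nothing to read right now; do other work

chan = queue.Queue()
print(poll(chan))   # None: the channel is empty, but the process did not stall
chan.put(42)
print(poll(chan))   # 42: the same call, a moment later, yields a token
```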
However, this flexibility comes at a price: an embedded system that uses polling loses the formal properties of KPNs, and the functional behavior of the system may depend on the timing characteristics of its processes and channels or on their scheduling. One partial solution is to have the simulation platform enforce a minimum delay of one clock cycle between two non-blocking reads on the same channel by the same process. This guarantees that the simulation makes progress, because no process can poll a channel an infinite number of times within the same clock cycle. In the terminology of timed process networks, such channels are said to be "delta-causal." With this restriction, the system has a uniquely defined behavior for each given schedule and set of timing characteristics of its processes, channels, and input events.
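A simulation platform might enforce this rule with a per-process poll timestamp on each channel. The sketch below is a hypothetical illustration, not any real simulator's API: the `clock` argument stands in for the platform's simulation-time counter, and the channel rejects a second poll by the same process in the same cycle:

```python
import queue

class DeltaCausalChannel:
    """A FIFO that forbids a process from polling it twice in one clock cycle.

    `clock` is an illustrative simulation-time counter, not part of a real API.
    """
    def __init__(self):
        self.fifo = queue.Queue()
        self.last_poll = {}        # process name -> cycle of its last poll

    def put(self, token):
        self.fifo.put(token)

    def poll(self, process: str, clock: int):
        last = self.last_poll.get(process)
        if last is not None and clock <= last:
            raise RuntimeError(f"{process} must wait a cycle between polls")
        self.last_poll[process] = clock
        try:
            return self.fifo.get_nowait()
        except queue.Empty:
            return None

chan = DeltaCausalChannel()
print(chan.poll("P1", clock=0))    # None: channel empty, poll allowed
chan.put("token")
print(chan.poll("P1", clock=1))    # "token": one cycle elapsed, poll allowed
```

Because simulated time must advance between polls, a process can only perform a finite number of non-blocking reads per cycle, which is what makes the simulation's behavior well defined for a given schedule.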
Have you had any personal experiences with the KPN model of computation? If so, I would be very interested to hear about them in the comments below.