As a member of the teaching faculty in the School of Computer Engineering at Nanyang Technological University, Singapore, I am constantly exposed to interesting topics and ideas.
A few weeks ago, out of curiosity, I attended a lecture on "Alan Turing and Computation" by Rodney Downey, a visiting professor from Victoria University of Wellington, New Zealand. The lecture subsequently led me to explore the Turing Machine in more detail.
Since 2012 marks the centenary of Alan Turing's birth, it is an apt time to raise awareness of and discussion about his work. For my part, I shall focus on the Turing Machine and its remarkable resemblance to modern computer systems.
The Turing Machine came from a paper submitted to the Proceedings of the London Mathematical Society in 1936: "On Computable Numbers, with an Application to the Entscheidungsproblem." In this paper, Turing described an "a-machine" (automatic machine) that is now widely regarded as the forerunner of all computers. At the heart of the system are two components: (1) an infinite "tape" of symbols that is capable of moving forward and backward and (2) a Read/Write head that reads the symbol in the cell beneath it (and possibly writes a new symbol). So, the main functions of the machine involve moving the tape and reading from/writing to it.
Given the advances in technology since then, the Turing Machine's control may be implemented with a simple finite-state machine (FSM). Jeroen and Davy from CWI in Amsterdam built a cool demonstration named the LEGO Turing Machine, as seen in the following video.
The operations are sequential and inevitably involve instruction fetch, decode, and execute. In a typical operation cycle, the symbol on the tape is read by the Read/Write head, decoded in the FSM, and executed, with the result written back to the tape. The curious reader might ask, "How are the instructions and data organized?" In this case, the question relates to the Von Neumann versus Harvard architectures. In fact, the concept of the Turing Machine predates both, but it bears a closer resemblance to the Von Neumann architecture.
Modern computing systems are far more sophisticated. Today, a simple FSM cannot describe the instruction sets of modern general-purpose computing systems. The concept of an Instruction Set Architecture (ISA) has evolved: for example, there is Complex Instruction Set Computing (CISC) as opposed to Reduced Instruction Set Computing (RISC), and recently I heard from a colleague about research into Reduced energy Instruction Set Computing (ReISC). Then we must consider a single processor core as opposed to dual-core, quad-core, multicore, and many-core designs. And this is not to mention the myriad I/O interfacing standards and protocols such as Universal Serial Bus (USB), WiFi, and Controller Area Network (CAN).
And so, I reflect upon the question "Is the Turing Machine still valid?" In essence, the Turing Machine comprises storage (the tape), an I/O interface (the Read/Write head), and a CPU (the control FSM). This is the basic makeup of any modern computing system. I am inclined to believe the Turing Machine still rocks today; its concept has shaped the development of modern computing systems. So kudos to Alan Turing for coming up with this visionary model.
In parting, as we celebrate the Alan Turing Year in 2012, I am curious about the next question: "Turing envisioned a machine that automates, but it is a machine nonetheless. Can our computing systems evolve beyond the Turing Machine?" What do you think?