NZSM Online

Feature

Breaking the Turing Barrier

Is there any limit to discrete computation, and more generally,
to scientific knowledge?

C. S. Calude and M.J. Dinneen

The story starts in 1936. As a result of an original analysis of mental activity, English mathematician Alan Turing introduced the abstract model of a machine -- now called the Turing machine -- to define the concept of a fixed computational method or algorithm.

Turing also introduced the concept of the "universal" Turing machine, a single machine capable of emulating any other Turing machine. The general idea is that a problem is defined as computable if there exists a Turing machine which produces a solution to it after executing a finite number of steps.

There are some problems which can't be solved by any finite method -- they are non-computable -- and these are said to be limited by the Turing barrier. In fact, most problems are non-computable.

Practically all conventional computers, such as PCs, Unix workstations and mainframes, are based on the idea of a computer that stores and executes a program from internal memory or from an external device, rather than using "hard-wired" instructions. These are known as von Neumann architectures (after US mathematician John von Neumann). They are a realization of Turing's universal machine, in which the description of a particular Turing machine (that is, a program) is read and executed.

For over 50 years, the Turing machine model of computation has defined what it means to "compute" something, and many theoretical results in computer science rest on this model.

In recent years, researchers have looked at natural processes in the physical and biological world as motivation for constructing new models of computation, holding out the hope of breaking the Turing barrier. But do genuine alternatives exist?

This is one of the problems studied by the Centre for Discrete Mathematics and Theoretical Computer Science at Auckland University.

The quantum phenomenon of interference has led to one such model, as has the process of folding of DNA strands in a living cell. In addition, refinements to the Turing view of computing have led to "super-Turing" models, that allow one to compute in ways that transcend Turing's original scheme. All three approaches have evolved from competing and rather unrelated areas of thought.

Breaking Turing's barrier is important theoretically, as unconventional models are explored with an eye toward understanding the true limits of computation. This, in turn, will shed light on basic questions about the limits of scientific knowledge. There are practical aspects as well: it could provide a way to speed up computations beyond classical capabilities.

Real World Meets Theoretical

Here is a possible scenario. Computers, in contrast to the theoretical Turing machines, are physical devices -- whatever they can or cannot do is determined by the laws of physics.

Turing machines, as models, do not have to obey these laws. A Turing machine operates with a finite list of primitive operations -- read a square of the tape, write a single symbol on a square of the tape, move one square to the right, and so forth -- but it makes no mention of the duration of each primitive operation (of course, we may assume that each primitive operation requires a fixed duration, but this is not part of the original model). It is only important whether or not the machine halts after a finite number of operations.
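The primitive operations listed above can be made concrete in a few lines of code. The simulator and the little machine below are illustrative sketches, not any specific historical example: the machine scans right over a block of 1s and appends one more.

```python
# A minimal Turing machine simulator illustrating the primitive
# operations described above: read a square, write a symbol,
# move one square left or right, halt.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Run until the machine enters the 'halt' state (or give up)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)               # read a square
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol                      # write a symbol
        pos += 1 if move == "R" else -1             # move one square
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: append a '1' to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
print(run_turing_machine(rules, "111"))  # -> 1111
```

Note that, exactly as in the model, the simulator only tracks whether the machine halts within some number of steps; nothing in it assigns a duration to each operation.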

Temporal considerations are not relevant for these mathematical models. Things are different for real computers, where time does matter.

Infinite Operations in Finite Time

In 1927, well before Turing's model was proposed, German mathematician Hermann Weyl postulated a hypothetical machine that is capable of completing an infinite sequence of distinct operations within a finite time; say, by supplying the first result after half a minute, the second after a quarter of a minute, the third an eighth of a minute later, etc.

In fact, this temporal patterning was described earlier by Bertrand Russell, in a famous lecture given in Boston in 1914. In a discussion of Zeno's paradox of the race-course, Russell said:

If half the course takes half a minute, and the next quarter takes a quarter of a minute, and so on, the whole course will take a minute.
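Russell's arithmetic is easy to check directly: the stage times form a geometric series whose partial sums approach, but never quite reach, one minute after any finite number of stages.

```python
# Russell's race-course arithmetic: half a minute, plus a quarter,
# plus an eighth, and so on. After n stages the total elapsed time
# is 1 - (1/2)**n minutes; the full minute is reached only in the
# limit of infinitely many stages.

def zeno_partial_sum(n_stages):
    """Total time (in minutes) after completing the first n stages."""
    return sum(1 / 2 ** k for k in range(1, n_stages + 1))

print(zeno_partial_sum(1))   # 0.5
print(zeno_partial_sum(10))  # 0.9990234375
print(zeno_partial_sum(50))  # ~1.0 (within double precision)
```

This is exactly Weyl's temporal pattern: infinitely many distinct operations, yet a finite total time.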

Is it physically possible to execute an infinite number of operations in a finite amount of time? Science has not offered a definitive answer yet. Russell argued that for human beings this scenario is not logically impossible, but it may be biologically impossible.

What about computers? Recently, Karl Svozil suggested that the answer depends upon the underlying physical environment considered for computation.

Classically (when computation is performed within a classical universe and information is measured in bits), infinite sequences of operations cannot be operationally executed in a finite time.

However, if one places the computation into a quantum mechanical universe, when classical bits are replaced by so-called quantum bits (qubits), infinite sequences of computations can be performed, just as in the Zeno problem. There is a price to be paid: computation appears to occur entirely at random because of the nondeterministic nature of quantum mechanics.
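The notion of a qubit can be illustrated with a deliberately simple sketch: a qubit is a pair of complex amplitudes whose squared magnitudes give the probabilities of the two measurement outcomes. This toy example shows only the "random outcome" point made above; it is not a model of Svozil's accelerated computation.

```python
import math

# A toy qubit: a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measuring it yields 0 with probability |a|^2 and 1 with
# probability |b|^2 -- the randomness referred to in the text.

def hadamard(qubit):
    """Put a basis-state qubit into an equal superposition."""
    a, b = qubit
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measurement_probabilities(qubit):
    a, b = qubit
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)             # the classical bit 0, as a qubit
superposed = hadamard(zero)   # amplitudes (1/sqrt(2), 1/sqrt(2))
print(measurement_probabilities(superposed))  # ~0.5 each
```

A classical bit is always definitely 0 or 1; the superposed qubit above has no definite value until it is measured, at which point either outcome occurs with probability one half.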

Is there a realistic chance to perform quantum computations? To do this, we need to be able to support quantum logic and be able to manipulate coherent quantum states.

Conventional devices under investigation for carrying out these operations include ion traps, high finesse cavities for manipulating light and atoms using quantum electrodynamics, and molecular systems designed to compute using nuclear magnetic resonance. The latter store quantum information in the states of quantum systems such as photons, atoms or nuclei, and realise quantum logic by semiclassical potentials such as microwave or laser fields.

Unconventional ideas for quantum computation include fermionic quantum computers, bosonic computers and architectures relying on anyons.

Turing thought that a computer was capable in principle of doing anything the human brain can do, and in 1950 he set forth a theory that remains a cornerstone of artificial intelligence.

The test proposed by Turing involved a computer communicating with human judges via a teleprinter link: if the computer's responses to questions were indistinguishable from those of a human being, Turing argued, the computer had to be regarded as exhibiting intelligence. The Turing test has provoked a great deal of discussion.

Roger Penrose has dedicated three books to this topic. In his latest one, The Large, the Small and the Human Mind published in 1997, he claims that "appropriate physical action of the brain evokes awareness, but this physical action cannot even be properly simulated computationally".

The reference is made to classical computation; unconventional paradigms may dramatically change this view.

Professor Calude lectures in Auckland University's Computer Science Department.
Dr Dinneen lectures in Auckland University's Computer Science Department.