40

Turing machines are perhaps the most popular model of computation for theoretical computer science. Turing machines don't have random access memory, since we can only read (or write) the cell where the head is currently located.

This seems unwieldy to me. Why don't theoretical computer scientists use a model with random access memory, like a register machine, as the basic model of computation?

user56834

8 Answers

66

Why don't theoretical computer scientists use a model with random access memory, like a register machine, as the basic model of computation?

The short answer is that this model is actually more complicated to describe and prove things about. "Jumping" from one place in memory to another is a more complex operation. Note that it requires:

  • Reading an address (requires that we define how the address is written on the tape)

  • Jumping to the tape location that the address refers to

Alternatively, I'm not sure what you have in mind by "register machine", but this also requires care. How large can the registers be? And the machine then has separate mechanisms for how it accesses/modifies the registers and how it accesses/modifies main memory.

In sum, it's a more complicated model and Turing machines are easier to deal with.

However, the long answer is that theoretical computer scientists do use a model with random access memory in practice: it's called the RAM (Random Access Machine) model. The Turing machine is not considered a good model for so-called "fine-grained complexity" (e.g. whether a problem can be solved in $O(n^2)$ or $O(n)$ time), and so this requires more careful models. The RAM model is standard and arguably more accepted than the Turing machine as a model of how real computers work, despite being more complicated.
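To give a flavor of what such a model looks like, here is a minimal sketch of a RAM-style machine in Python. The instruction set and names are invented for illustration; real formalizations differ in details such as the allowed word size and the cost model. The point is that an indirect load ("read memory at the address held in a register") is a single primitive step, which is exactly what a Turing machine lacks.

```python
# Minimal sketch of a RAM-style register machine (illustrative only).
# Registers r0..r3, a memory array, and a tiny made-up instruction set.

def run_ram(program, memory):
    regs = [0, 0, 0, 0]
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOADI":          # regs[a] <- constant b
            regs[args[0]] = args[1]
        elif op == "LOAD":         # regs[a] <- memory[regs[b]]  (indirect access: one step)
            regs[args[0]] = memory[regs[args[1]]]
        elif op == "STORE":        # memory[regs[a]] <- regs[b]
            memory[regs[args[0]]] = regs[args[1]]
        elif op == "ADD":          # regs[a] <- regs[a] + regs[b]
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        pc += 1
    return regs, memory

# Example: memory[5] <- memory[3] + 1, using register-held addresses.
mem = [0, 0, 0, 41, 0, 0]
prog = [
    ("LOADI", 0, 3),   # r0 = 3 (an address)
    ("LOAD",  1, 0),   # r1 = memory[r0]
    ("LOADI", 2, 1),   # r2 = 1
    ("ADD",   1, 2),   # r1 = r1 + r2
    ("LOADI", 0, 5),   # r0 = 5 (another address)
    ("STORE", 0, 1),   # memory[r0] = r1
    ("HALT",),
]
print(run_ram(prog, mem)[1])   # -> [0, 0, 0, 41, 0, 42]
```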

For example, we can show that deciding whether two strings are equal requires $\Omega(n^2)$ time on a single-tape Turing machine. But this of course does not hold in more accurate models of computation like the RAM model, where it takes only $O(n)$ time (see the sketch below the list). Therefore:

  • If you just care about whether a problem can be solved at all (or whether it can be solved in polynomial time), Turing machines are considered sufficient.

  • But if you care about exactly how difficult it is to solve, then you have to resort to a more complex model, such as RAMs.
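To make the contrast concrete, here is the RAM-model view of string equality mentioned above: one left-to-right scan with constant-time access per position, so $O(n)$ overall. (The quadratic lower bound applies to single-tape Turing machines, which must shuttle the head back and forth between the two strings.)

```python
def strings_equal(a, b):
    # In the RAM model each indexed access a[i], b[i] is one step,
    # so the whole comparison takes O(n) time.
    if len(a) != len(b):
        return False
    for i in range(len(a)):
        if a[i] != b[i]:
            return False
    return True

print(strings_equal("abca", "abca"), strings_equal("abca", "abcd"))  # True False
```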

Caleb Stanford
18

But we do! Read Knuth's "The Art of Computer Programming" series, for one. Almost all of the books with "algorithms" somewhere in the title implicitly use the RAM model.

Different uses, different models.

vonbrand
10

Register machines do get used as a model of computation for parallel algorithms: the PRAM (parallel random-access machine).

Having threads communicate with each other through shared memory, the same way they do in real machines, is presumably useful: if you're talking about parallel algorithms then you're presumably concerned about time complexity, so you want a machine model that can do things with similar efficiency to real CPUs.
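As a rough illustration of the kind of cost a shared-memory model like the PRAM lets you count, here is a small Python simulation of a synchronous parallel sum; the structure and names are my own sketch, not a standard formulation. With one simulated "processor" per pair of cells, the reduction finishes in O(log n) rounds.

```python
# Toy simulation of a PRAM-style parallel sum (illustrative only). In each
# synchronous round, "processor" i adds the cell `stride` positions to its
# right into its own cell; with n processors this takes O(log n) rounds,
# which is the kind of cost a shared-memory parallel model lets you measure.

def pram_parallel_sum(values):
    a = list(values)          # the shared memory
    n = len(a)
    stride = 1
    while stride < n:
        # All reads happen "before" all writes within a round (lockstep model).
        reads = [a[i + stride] if i + stride < n else 0 for i in range(n)]
        for i in range(0, n, 2 * stride):
            a[i] += reads[i]
        stride *= 2
    return a[0]

print(pram_parallel_sum(range(1, 9)))  # -> 36
```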

If you only care about something being computable at all in finite time, then running on some finite number of threads greater than 1 is irrelevant, and, as other answers point out, the choice between a Turing machine and a register machine doesn't matter either.


(A shared-tape parallel Turing machine sounds insane to me; maybe you'd have a side channel for communicating between TMs other than via the shared tape? Once you have the added complexity of parallelism, it presumably makes more sense to use a model of computation similar to what we use in real life, although cellular automata and other fundamentally parallel models of computation do exist.)


But yes, as you point out, idealized RAM machines as a theoretical model of computation are a thing, too, even for serial computation. (They're a class of "register machines"; in this context every memory location is a "register" (of unbounded size), not like real CPUs where registers are small fast storage separate from memory address space. If you're used to assembly language for real CPUs, this is not what you'd expect from the name.)

I don't know how common it is to use this model; presumably not rare when considering time complexity of algorithms?

Peter Cordes
6

If computation time is not an issue, it's possible to formulate a Turing machine algorithm which will find the Nth data location, for some arbitrary-length binary number N, following a particular marker. Plan on using half of the slots on the tape to hold bona fide data, and half to keep track of which location is active. Assume for simplicity that one has an "X" in the marker spot before a copy of the binary number of interest, stored in little-endian fashion, and a "Y" before the first location in the addressable range. To find the Nth location, first copy the marker to a spot just after the original, then:

  • Decrement the counter. If it hadn't been zero, search for the "Y"; once it's found, replace the Y with some otherwise-unused symbol (e.g. Z) and replace the tag for the location to its right with Y.

  • Search back for the X and repeat the procedure, decrementing the counter again and then moving the "Y" one location to the right each time.

  • Once the counter hits zero, the location of the Y marks the appropriate position on the tape. Searching left from there, one can find the leftmost Z and replace it with a Y in preparation for the next random-access operation.
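As a rough back-of-the-envelope check (a Python sketch of the step count, not a literal transition table; the simplification of walking the head out to the marker and back once per decrement is mine), the number of head moves grows roughly quadratically in the target address, versus a single indexed access on a RAM:

```python
# Rough step count for the marker-walking scheme above (my simplification:
# each decrement costs one walk from the counter out to the "Y" marker and
# back, and the marker advances one data cell per decrement).

def tm_random_access_steps(n):
    steps = 0
    marker = 0
    for _ in range(n):
        marker += 1           # the "Y" marker moves one data cell to the right
        steps += 2 * marker   # walk out to the marker and back to the counter
    return steps              # ~ n*(n+1): quadratic in the target address

def ram_random_access_steps(n):
    return 1                  # on a RAM, reading cell n is a single indexed load

for n in (10, 100, 1000):
    print(n, tm_random_access_steps(n), ram_random_access_steps(n))
```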

Such an approach would be horrendously slow, but if one is merely interested in determining what kinds of tasks can be accomplished at all in finite time, the fact that an operation requires many orders of magnitude more steps than would be required in a RAM-based machine would be a non-issue.

supercat
3

I'm self-taught in most fields that pertain to programming, software engineering, computer science, and electronic engineering. I have interests that range from 3D graphics programming, 3D game engine design, hardware emulation, compiler design, OS design, and hardware design down to the circuitry. From my research and studies, I think the best way to answer this is that it comes down to how hardware is actually built and its constraints.

In mathematics, we have notions of values that exceed the limitations of physical hardware. Random Access Memory is used in many computer models; however, we have to look at the nature of what a Turing Machine is and how we use it to perform mathematical computations. We can represent integers and reals, but there are always the underlying issues of precision, truncation, and underflow and overflow of values.


You had stated:

Turing machines are perhaps the most popular model of computation for theoretical computer science. Turing machines don't have random access memory, since we can only read (or write) the cell where the head is currently located.

This seems unwieldy to me.

Then you had asked:

Why don't theoretical computer scientists use a model with random access memory, like a register machine, as the basic model of computation?

We need to look at the definition of what a Turing Machine is:

A Turing machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed.

provided by: Wikipedia

Do yourself a favor and read over the wiki page thoroughly and consult other resources, as this has widely been used as the de facto model for most modern computers.


To answer your question, you should read the details from the wiki page about how a CPU acts like a Turing Machine and how most programming languages are Turing Complete. It comes down to the layers of abstraction, which stand in contrast to what a Turing Machine is as a purely abstract model.

In a typical computer design, the CPU (the "Turing Machine" here) has a finite set of registers with finite register sizes and bus widths, along with a finite set of instructions that can be performed. The dynamic random access memory modules are normally connected to, but separate from, the CPU. Internally, and at each abstraction layer, each computational device is a finite state machine on its own, but when they are combined to build a more complex machine, it can theoretically perform an unbounded number of instructions.

If you were to use either a breadboard with integrated circuits, an FPGA, or even a simulator program such as Logisim and start building a basic CPU with its data paths, buses, control lines, etc., you would end up seeing the limitations of the hardware when you start to see the connections between the design of the logic gates, the component layouts, and how an ISA (Instruction Set Architecture) is designed to produce a Turing Machine.
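As a loose software analogy (no substitute for actually wiring one up), here is a minimal sketch in Python of a made-up accumulator machine: a finite control, the fetch/decode/execute loop, stepping over a random-access memory array. The instruction set and names are invented for illustration.

```python
# Minimal made-up accumulator machine: a finite control (the loop and the
# instruction table) driving a random-access memory array. Illustrative only.

def run(program, memory, max_steps=10_000):
    acc = 0                       # single accumulator register
    pc = 0                        # program counter
    for _ in range(max_steps):
        if pc >= len(program):
            break                 # ran off the end of the program: halt
        op, arg = program[pc]
        if op == "LOAD":          # acc <- memory[arg]
            acc = memory[arg]
        elif op == "STORE":       # memory[arg] <- acc
            memory[arg] = acc
        elif op == "SUBI":        # acc <- acc - arg (immediate)
            acc -= arg
        elif op == "JNZ":         # if acc != 0, jump to instruction arg
            if acc != 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return memory

# Example: count memory[0] down to zero, bumping memory[1] each iteration.
mem = [5, 0]
prog = [
    ("LOAD", 1), ("SUBI", -1), ("STORE", 1),   # memory[1] += 1
    ("LOAD", 0), ("SUBI", 1), ("STORE", 0),    # memory[0] -= 1
    ("JNZ", 0),                                # loop while memory[0] != 0
    ("HALT", 0),
]
print(run(prog, mem))   # -> [0, 5]
```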

When you begin to understand how an Assembly Language is constructed from an ISA and is used to simplify the ability to talk to the machine in its machine language, you will then be able to build higher levels and layers of abstraction on top of it, such as the C family of languages, compilers, operating systems and such. Then you will end up with more advanced topics such as compiler theory and operating system design.

It pretty much comes down to two main things: finite state machines acting as Turing Machines, and the layers of abstraction. There are two viewpoints this can be seen from, bottom-up and top-down: looking from the hardware up to the software, or looking down from the software to the hardware, and understanding how they work where they meet in the middle. This is where most of the theory is applied. In the end, it is basically a hierarchical design where one system is built on top of another.

A Turing Machine in itself doesn't require random access memory, however, many Turing Machines are connected to and work with random access models in tandem. They are two separate things that I believe you are trying to compare... Apples and Oranges, yes they are both fruit, but far from the same. Yet, we can use both of them together to make a fruit salad!

Basically a CPU is a Turing Machine that has Registers with a finite set of states, but in a computer system, the CPU is connected to Memory Modules that are Random Access through the system bus or a bus controller. Without the two you wouldn't have a computer system. They talk and communicate with each other to perform a set of instructions based on the current or previous instruction that has already been executed, in memory ready to be executed, and so on...

2

Because this is a model of finite state machines, not of Turing-equivalent computing, at least in the normal sense where you have a finite memory-word size and address size. On the other hand, if you allow these to be infinite, you end up with a model of computing where lots of things artificially take O(1) space or time, which is not useful for reasoning about real-world efficiency. Reconciling these issues is possible; the transdichotomous model is one way to do so.

Turing machines and other models are in a sense "better" because they're simpler and the infinitude is only achievable via repetition of highly finite operations. If you want "arbitrary range addressing", you have to construct it yourself within the program rather than the model artificially giving it to you for free. In addition, the "infinite tape" idea at least makes some real-world sense in that you could (to the point of resource exhaustion) keep extending a finite tape on demand, whereas finding a way to cram infinitely many address bits into one storage unit is not happening (but the transdichotomous model, or at least some variant of it, can make this work by essentially saying you throw away your computer and make a bigger one if you run out of space for your problem).

1

as our basic model of computation

Turing machines are used to study computability and computational power. They're essentially not used for studying the performance of real-world algorithms.

To determine whether a model of computation is as powerful as a random-access register machine, all you need to do is show that it's as powerful as a Turing machine (which is almost always much easier), since the two are equivalent in computational power.

Basically Turing machines are used because they tend to make proofs a lot shorter.

Consider that it's much easier to provide a mathematical formulation of how a Turing machine simulates a different type of Turing machine than it is to formulate how a register machine simulates a different type of register machine.

Artelius
-1

Turing invented the 'machine' purely to show, in the simplest possible way,

1) that mechanical computing was possible, and 2) that if a problem could not be solved by it, then it could not be solved.

Performance is irrelevant: no one has ever built one, or ever will. But we can emulate it, and reason about it.