Turing machine equivalents

A Turing machine is a hypothetical device with unlimited memory capacity, first conceived by Alan Turing in 1936. The machine manipulates symbols in the cells of a potentially infinite tape according to a transition function, and can be adapted to simulate the logic of any computer algorithm.

Although none of the following models has been shown to have more power than the single-tape, infinite, multi-symbol Turing machine model, their authors defined and used them to investigate questions and solve problems more easily than they could have with the traditional Turing machine model.

Machines equivalent to the Turing machine model

Turing equivalence

Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159; cf. Minsky). They might compute faster, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church-Turing thesis hypothesizes this: anything that can be "computed" can be computed by some Turing machine.)

Sequential machine models

All of the following models are called "sequential machine models" to distinguish them from "parallel machine models" (van Emde Boas (1990) p. 18).

Tape-based Turing machines

Single-tape machines with restricted symbols and/or instructions

The following models are single-tape Turing machines restricted with (i) a restricted tape-symbol set { mark, blank }, and/or (ii) sequential instructions, and/or (iii) fully atomized machine actions.

Post's "Formulation 1" model of computation

 Main article: Post–Turing machine

Emil Post (1936), in an independent description of a computational process, reduced the permitted symbols to the equivalent binary set of tape marks { "mark", "blank" = unmarked }. He changed the notion of "tape" from a tape infinite only to the right to an infinite set of boxes extending in both directions. He atomized Turing's 5-tuples into 4-tuple instructions, separating the motion instructions from the print/erase-symbol instructions. Although his 1936 model is ambiguous on this point, Post's 1947 model did not require sequential instruction execution.

His extremely simple model can simulate any Turing machine, and although his 1936 "Formulation 1" uses neither the word "program" nor "machine", it is effectively a formulation of a very primitive programmable computer and associated programming language, with the boxes acting as an unbounded memory and the set of instructions constituting the program.

Wang machines

 Main article: Wang B-machine

In an influential paper, Hao Wang (1954, 1957) reduced Post's "formulation 1" to machines that still use a two-way infinite binary tape, but whose instructions are simpler — being the "atomic" components of Post's instructions — and are by default executed sequentially (like a "computer program"). His stated principal purpose was to offer, as an alternative to Turing's theory, one that "is more economical in the basic operations". His results were "program formulations" of a variety of such machines, including the 5-instruction Wang W-machine with the instruction-set

{ SHIFT-LEFT, SHIFT-RIGHT, MARK-SQUARE, ERASE-SQUARE, JUMP-IF-SQUARE-MARKED-to xxx }

and his most-severely reduced 4-instruction Wang B-machine ("B" for "basic") with the instruction-set

{ SHIFT-LEFT, SHIFT-RIGHT, MARK-SQUARE, JUMP-IF-SQUARE-MARKED-to xxx }

which has not even an ERASE-SQUARE instruction.
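The B-machine's four instructions are simple enough that an interpreter fits in a few lines. The following is a minimal Python sketch; the tuple encoding of instructions and the step bound are assumptions for illustration, not Wang's own notation:

```python
# A minimal sketch of a Wang B-machine interpreter (hypothetical encoding:
# instructions are ("LEFT",), ("RIGHT",), ("MARK",), ("JUMP", target)).
from collections import defaultdict

def run_b_machine(program, steps=1000):
    """Execute a Wang B-machine program on an initially blank two-way tape.

    The tape is a dict from integer positions to 0/1 (blank/marked);
    instructions execute sequentially unless a JUMP is taken.
    """
    tape = defaultdict(int)   # two-way infinite binary tape, all blank
    head = 0                  # current square
    pc = 0                    # program counter (sequential by default)
    for _ in range(steps):
        if pc >= len(program):          # fell off the end: halt
            break
        op = program[pc]
        if op[0] == "LEFT":
            head -= 1
        elif op[0] == "RIGHT":
            head += 1
        elif op[0] == "MARK":
            tape[head] = 1              # note: no ERASE in the B-machine
        elif op[0] == "JUMP":
            if tape[head] == 1:         # conditional transfer
                pc = op[1]
                continue
        pc += 1
    return tape, head

# Example: mark three consecutive squares moving right.
prog = [("MARK",), ("RIGHT",), ("MARK",), ("RIGHT",), ("MARK",)]
tape, head = run_b_machine(prog)
# squares 0, 1, 2 are now marked
```

Note how the absence of ERASE-SQUARE shows up directly: the interpreter can only ever set a square to 1, never back to 0.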

Many authors later introduced variants of the machines discussed by Wang:

Minsky (1961) evolved Wang's notion with his version of the (multi-tape) "counter machine" model that allowed SHIFT-LEFT and SHIFT-RIGHT motion of the separate heads but no printing at all. In this case the tapes would be left-ended, each end marked with a single "mark" to indicate the end. He was able to reduce this to a single tape, but at the expense of introducing multi-tape-square motion equivalent to multiplication and division rather than the much simpler { SHIFT-LEFT = DECREMENT, SHIFT-RIGHT = INCREMENT }.

Davis, adding an explicit HALT instruction to one of the machines discussed by Wang, used a model with the instruction-set

{ SHIFT-LEFT, SHIFT-RIGHT, ERASE, MARK, JUMP-IF-SQUARE-MARKED-to xxx, JUMP-to xxx, HALT }

and also considered versions with tape-alphabets of size larger than 2.

Böhm's theoretical machine language P"

 Main article: P"

In keeping with Wang's project to seek a Turing-equivalent theory "economical in the basic operations", and wishing to avoid unconditional jumps, a notable theoretical language is the 4-instruction language P" introduced by Corrado Böhm in 1964 — the first "GOTO-less" imperative "structured programming" language to be proved Turing-complete.

Multi-tape Turing machines

In practical analysis, various types of multi-tape Turing machines are often used. Multi-tape machines are similar to single-tape machines, but there are some constant number k of independent tapes.

The TABLE has full independent control over all the heads, any or all of which move and print/erase on their own tapes (cf. Aho-Hopcroft-Ullman 1974 p. 26). Most models have tapes with left ends and unbounded right ends.

This model intuitively seems much more powerful than the single-tape model, but any multi-tape machine, no matter how large the k, can be simulated by a single-tape machine using only quadratically more computation time (Papadimitriou 1994, Thm. 2.1). Thus, multi-tape machines cannot calculate any more functions than single-tape machines, and none of the robust complexity classes (such as polynomial time) are affected by a change between single-tape and multi-tape machines.

Two-stack Turing machine

Two-stack Turing machines have a read-only input and two storage tapes. If a head moves left on either tape a blank is printed on that tape, but one symbol from a “library” can be printed.

Formal definition: multi-tape Turing machine

A k-tape Turing machine can be described as a 6-tuple M = ⟨Q, Γ, s, b, F, δ⟩ where

  •   Q is a finite set of states
  •   Γ is a finite set of the tape alphabet
  •   s ∈ Q is the initial state
  •   b ∈ Γ is the blank symbol
  •   F ⊆ Q is the set of final or accepting states
  •   δ: Q × Γ^k → Q × (Γ × {L, R, S})^k is a partial function called the transition function, where k is the number of tapes, L is left shift, R is right shift, and S is no shift.

Deterministic and non-deterministic Turing machines

 Main article: Non-deterministic Turing machine

If the action table has at most one entry for each combination of symbol and state then the machine is a "deterministic Turing machine" (DTM). If the action table contains multiple entries for a combination of symbol and state then the machine is a "non-deterministic Turing machine" (NDTM). The two are computationally equivalent, that is, it is possible to turn any NDTM into a DTM (and vice versa).
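One direction of the equivalence can be sketched as a deterministic breadth-first search over the NDTM's configurations: the DTM explores every branch level by level and accepts if any branch does. The following Python sketch assumes an action table mapping (state, symbol) to a list of possible moves:

```python
# A sketch of the standard NDTM -> DTM argument: deterministically explore
# all computation branches breadth-first (hypothetical table encoding).
from collections import deque

def ndtm_accepts(table, start, accept, blank, input_string, max_steps=10_000):
    """Return True if some branch of the NDTM reaches the accepting state."""
    tape = tuple(input_string) or (blank,)
    queue = deque([(start, tape, 0)])       # configurations: (state, tape, head)
    seen = set()
    for _ in range(max_steps):
        if not queue:
            return False                    # all branches halted without accepting
        state, tape, head = queue.popleft()
        if state == accept:
            return True
        if (state, tape, head) in seen:
            continue
        seen.add((state, tape, head))
        for new_state, sym, move in table.get((state, tape[head]), []):
            t = list(tape)
            t[head] = sym
            h = head + (1 if move == "R" else -1)
            if h < 0:                       # grow the tape on demand
                t.insert(0, blank); h = 0
            elif h >= len(t):
                t.append(blank)
            queue.append((new_state, tuple(t), h))
    return False

# Example: on 'b' the machine nondeterministically chooses to accept or reject;
# the simulation accepts because at least one branch does.
table = {("q0", "a"): [("qa", "a", "R")],
         ("q0", "b"): [("qa", "b", "R"), ("qr", "b", "R")]}
```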

Oblivious Turing machines

An oblivious Turing machine is a Turing machine where the movement of the various heads is a fixed function of time, independent of the input. In other words, there is a predetermined sequence in which the various tapes are scanned, advanced, and written to. Pippenger and Fischer (1979) showed that any computation that can be performed by a multi-tape Turing machine in n steps can be performed by an oblivious two-tape Turing machine in O(n log n) steps.

Register machine models

 Main article: Register machine

van Emde Boas (1990) includes all machines of this type in one category (group, class, collection): "the register machine". However, historically the literature has also called the most primitive member of this group, i.e. "the counter machine", "the register machine". And the most primitive embodiment of a "counter machine" is sometimes called the "Minsky machine".

The "counter machine", also called a "register machine" model

 Main article: Counter machine

The primitive model register machine is, in effect, a multitape 2-symbol Post-Turing machine with its behavior restricted so its tapes act like simple "counters".

By the time of Melzak, Lambek, and Minsky (all 1961) the notion of a "computer program" produced a different type of simple machine with many left-ended tapes cut from a Post-Turing tape. In all cases the models permit only two tape symbols { mark, blank }.

Some versions represent the positive integers as just a string/stack of marks in a "register" (i.e. a left-ended tape), with a blank tape representing the count "0". Minsky (1961) eliminated the PRINT instruction at the expense of providing his model with a mandatory single mark at the left end of each tape.

In this model the single-ended tapes-as-registers are thought of as "counters", their instructions restricted to only two (or three if the TEST/DECREMENT instruction is atomized). Two common instruction sets are the following:

(1): { INC ( r ), DEC ( r ), JZ ( r,z ) }, i.e.
{ INCrement contents of register #r; DECrement contents of register #r; IF contents of #r=Zero THEN Jump-to Instruction #z}
(2): { CLR ( r ); INC ( r ); JE ( ri, rj, z ) }, i.e.
{ CLeaR contents of register r; INCrement contents of r; compare contents of ri to rj and if Equal then Jump to instruction z}
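Instruction set (1) can be sketched as a small interpreter. The tuple encoding of the program below is an assumption for illustration; the example uses the classic trick of an always-zero register to get an unconditional jump out of JZ:

```python
# A sketch of instruction set (1): INC, DEC, JZ over registers holding naturals
# (assumed program encoding: ("INC", r), ("DEC", r), ("JZ", r, z)).
def run_counter_machine(program, registers, max_steps=100_000):
    """Run until the program counter falls off the end of the program."""
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op = program[pc]
        if op[0] == "INC":
            registers[op[1]] += 1
        elif op[0] == "DEC":
            if registers[op[1]] > 0:        # DEC of 0 leaves 0
                registers[op[1]] -= 1
        elif op[0] == "JZ":
            if registers[op[1]] == 0:
                pc = op[2]
                continue
        pc += 1
    return registers

# Example: add register 0 into register 1, the classic counter-machine loop.
prog = [
    ("JZ", 0, 4),   # 0: if r0 == 0, jump past the end (halt)
    ("DEC", 0),     # 1: r0 -= 1
    ("INC", 1),     # 2: r1 += 1
    ("JZ", 2, 0),   # 3: unconditional jump via always-zero register r2
]
regs = run_counter_machine(prog, {0: 3, 1: 4, 2: 0})
# regs[1] == 7, regs[0] == 0
```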

Although his model is more complicated than this simple description, the Melzak "pebble" model extended this notion of "counter" to permit multi-pebble adds and subtracts.

The Random Access Machine (RAM) model

 Main article: Random access machine

Melzak (1961) recognized a couple of serious defects in his register/counter-machine model: (i) without a form of indirect addressing he would not be able to "easily" show the model is Turing equivalent, and (ii) the program and registers were in different "spaces", so self-modifying programs would not be easy. When Melzak added indirect addressing to his model he created a random access machine model.

(However, with Gödel numbering of the instructions, Minsky (1961) offered a proof that with such numbering the general recursive functions were indeed possible; in Minsky (1967) he offers a proof that μ-recursion is indeed possible.)

Unlike the RASP model, the RAM model does not allow the machine's actions to modify its instructions. Sometimes the model works only register-to-register with no accumulator, but most models seem to include an accumulator.
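The difference indirect addressing makes can be shown in a one-step sketch: "<< n >>" means "the register whose address is stored in register n". The mnemonics LOAD/LOADI/STORE/STOREI below are illustrative, not taken from any of the cited models:

```python
# A sketch of direct vs. indirect addressing on an accumulator RAM;
# regs[0] plays the accumulator (hypothetical mnemonics).
def ram_step(regs, op, n):
    """Execute one RAM instruction against the register dict."""
    if op == "LOAD":          # direct:   A <- regs[n]
        regs[0] = regs.get(n, 0)
    elif op == "LOADI":       # indirect: A <- regs[regs[n]]
        regs[0] = regs.get(regs.get(n, 0), 0)
    elif op == "STORE":       # direct:   regs[n] <- A
        regs[n] = regs[0]
    elif op == "STOREI":      # indirect: regs[regs[n]] <- A
        regs[regs.get(n, 0)] = regs[0]
    return regs

# Example: register 5 holds the address 9; an indirect load pulls regs[9].
regs = {5: 9, 9: 42}
ram_step(regs, "LOADI", 5)
# regs[0] == 42
```

Because the address in register 5 is itself data, a program can compute which register to touch next, which is exactly what Melzak's counter machine could not do.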

van Emde Boas (1990) divides the various RAM models into a number of sub-types:

  • SRAM, the "successor RAM", with only one arithmetic instruction, the successor (INCREMENT h); the others include CLEAR h and an IF-equality-between-registers THEN jump-to xxx
  • RAM: the standard model, with addition and subtraction
  • MRAM: the RAM augmented with multiplication and division
  • BRAM, MBRAM: bitwise Boolean versions of the RAM and MRAM
  • N****: non-deterministic versions of any of the above, with an N before the name

The Random Access Stored Program (RASP) machine model

 Main article: Random access stored program machine

The RASP is a RAM with the instructions stored together with their data in the same "space", i.e. the same sequence of registers. The notion of a RASP was described at least as early as Kaphengst (1959). His model had a "mill" (an accumulator), but now the instructions were in the registers with the data: the so-called von Neumann architecture. When the RASP has alternating even and odd registers, the even holding the "operation code" (instruction) and the odd holding its "operand" (parameter), then indirect addressing is achieved by simply modifying an instruction's operand (cf. Cook and Reckhow 1973).

The original RASP model of Elgot and Robinson (1964) had only three instructions in the fashion of the register-machine model, but they placed them in the register space together with their data. (Here COPY takes the place of CLEAR when one register e.g. "z" or "0" starts with and always contains 0. This trick is not unusual. The unit 1 in register "unit" or "1" is also useful.)

{ INC ( r ), COPY ( ri, rj ), JE ( ri, rj, z ) }

The RASP models allow indirect as well as direct addressing; some allow "immediate" instructions too, e.g. "Load accumulator with the constant 3". The instructions may be of a highly restricted set, such as the following 16 instructions of Hartmanis (1971). This model uses an accumulator A. The mnemonics are those the authors used (their CLA is "load accumulator" with a constant or from a register; STO is "store accumulator"). Their syntax, excepting the jumps, is "n", "< n >", and "<< n >>" for "immediate", "direct", and "indirect" addressing. Jumps are via two "transfer instructions": TRA, an unconditional jump that jams "n" directly, or the contents of register n indirectly "< n >", into the instruction counter; and TRZ, a conditional jump taken if the accumulator is zero, in the same manner as TRA:

{ ADD n, ADD < n >, ADD << n >>, SUB n, SUB < n >, SUB << n >>, CLA n, CLA < n >, CLA << n >>, STO < n >, STO << n >>, TRA n, TRA < n >, TRZ n, TRZ < n >, HALT }
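A sketch of this accumulator model follows, with the three addressing modes resolved by one helper. The Python encoding of instructions as (mnemonic, mode, n) tuples is an assumption; the mnemonics follow Hartmanis:

```python
# A sketch of the Hartmanis-style accumulator RASP (hypothetical encoding:
# modes are "imm" for n, "dir" for <n>, "ind" for <<n>>).
def rasp_run(program, mem, max_steps=10_000):
    """program: list of (mnemonic, mode, n); mem: register dict; A: accumulator."""
    def value(mode, n):                    # resolve an operand
        if mode == "imm":
            return n
        if mode == "dir":
            return mem.get(n, 0)
        return mem.get(mem.get(n, 0), 0)   # "ind": <<n>>
    A, pc = 0, 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, mode, n = program[pc]
        if op == "CLA":
            A = value(mode, n)
        elif op == "ADD":
            A += value(mode, n)
        elif op == "SUB":
            A = max(0, A - value(mode, n))  # natural (truncated) subtraction
        elif op == "STO":
            target = n if mode == "dir" else mem.get(n, 0)
            mem[target] = A
        elif op == "TRA":                   # unconditional transfer
            pc = value(mode, n); continue
        elif op == "TRZ":                   # transfer if accumulator is zero
            if A == 0:
                pc = value(mode, n); continue
        elif op == "HALT":
            break
        pc += 1
    return A, mem

# Example: A <- 3; A <- A + mem[7]; store the result in mem[8].
A, mem = rasp_run(
    [("CLA", "imm", 3), ("ADD", "dir", 7), ("STO", "dir", 8), ("HALT", "imm", 0)],
    {7: 4})
# A == 7, mem[8] == 7
```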

The Pointer machine model

 Main article: Pointer machine

A relative latecomer is Schönhage's storage modification machine (1970), or pointer machine. Other versions are the Kolmogorov–Uspenskii machine and Knuth's "linking automaton" proposal (for references see pointer machine). Like a state-machine diagram, a node emits at least two labelled "edges" (arrows) that point to other nodes, which in turn point to other nodes, and so on. The outside world points at the center node.

Machines with input and output

Any of the above tape-based machines can be equipped with input and output tapes; any of the above register-based machines can be equipped with dedicated input and output registers. For example, the Schönhage pointer-machine model has two instructions called "input λ01" and "output β" (Schönhage 1990 p. 493).

It is difficult to study sublinear space complexity on multi-tape machines with the traditional model, because an input of size n already takes up space n. Thus, to study small DSPACE classes, we must use a different model. In some sense, if we never "write to" the input tape, we don't want to charge ourself for this space. And if we never "read from" our output tape, we don't want to charge ourself for this space.

We solve this problem by introducing a k-string Turing machine with input and output. This is the same as an ordinary k-string Turing machine, except that the transition function δ is restricted so that the input tape can never be changed, and so that the output head can never move left. This model allows us to define deterministic space classes smaller than linear. Turing machines with input and output also have the same time complexity as other Turing machines; in the words of Papadimitriou 1994, Prop. 2.2:

For any k-string Turing machine M operating within time bound f(n), there is a (k+2)-string Turing machine M' with input and output which operates within time bound O(f(n)).

k-string Turing machines with input and output are used in the formal definition of the complexity resource DSPACE in, for example, Papadimitriou 1994 (Def. 2.6).

Other equivalent machines and methods

  • Multidimensional Turing machine: For example, a model by Schönhage (1990) uses the four head-movement commands { North, South, East, West }.
  • Single-tape, multi-head Turing machine: In an undecidability proof of the "problem of tag", Minsky 1961 and Shepherdson and Sturgis (1963) described machines with a single tape that could write along the tape with one head and read further along the tape with another.
  • Markov Algorithm (1960) is another remarkably simple computational model, based on string rewriting, equivalent to the Turing machines.
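Since a Markov algorithm is just an ordered list of string-rewriting rules, always applied to the leftmost match of the earliest applicable rule, an interpreter is short. The rule encoding and the terminating-rule flag below are assumptions for illustration:

```python
# A sketch of a Markov algorithm interpreter: apply the first rule whose
# pattern occurs, at its leftmost occurrence, until no rule applies or a
# terminating rule fires (hypothetical (pattern, replacement, flag) encoding).
def run_markov(rules, text, max_steps=10_000):
    """rules: list of (pattern, replacement, is_terminating)."""
    for _ in range(max_steps):
        for pattern, replacement, terminating in rules:
            if pattern in text:
                text = text.replace(pattern, replacement, 1)  # leftmost match
                break
        else:
            return text        # no rule matched anywhere: done
        if terminating:
            return text        # a terminating rule fired: done
    return text

# Example: unary addition by erasing the '+' separator.
rules = [("+", "", False)]
# run_markov(rules, "111+11") == "11111"
```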

References

  • Stephen A. Cook and Robert A. Reckhow (1973), "Time-bounded random access machines", Journal of Computer and System Sciences 7 (1973), 354-375.
  • Calvin Elgot and Abraham Robinson (1964), Random-Access Stored-Program Machines, an Approach to Programming Languages, Journal of the Association for Computing Machinery, Vol. 11, No. 4 (October, 1964), pp. 365-399.
  • Peter van Emde Boas, Machine Models and Simulations, pp. 3-66, located in:
    • Jan van Leeuwen, ed. Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity, The MIT Press/Elsevier, 1990. ISBN 0-262-72014-0 (volume A). QA76.H279 1990.
  • J. Hartmanis (1971), "Computational Complexity of Random Access Stored Program Machines," Mathematical Systems Theory 5, 3 (1971) pp. 232-245.
  • Kaphengst, Heinz (1959), "Eine abstrakte programmgesteuerte Rechenmaschine", Zeitschrift für mathematische Logik und Grundlagen der Mathematik 5 (1959), 366-379.
  • John Hopcroft and Jeffrey Ullman (1979). Introduction to Automata Theory, Languages and Computation, 1st ed. Addison–Wesley, Reading Mass. ISBN 0-201-02988-X.
  • Hopcroft, John E.; Rajeev Motwani, Jeffrey D. Ullman (2001). Introduction to Automata Theory, Languages, and Computation, 2nd ed. Reading Mass: Addison–Wesley. ISBN 0-201-44124-1.
  • Joachim Lambek (1961, received 15 June 1961), "How to Program an Infinite Abacus", Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961, pages 295-302.
  • A. A. Markov (1954), Theory of Algorithms, translated by Jacques J. Schorr-Kon and PST staff. Moscow, Academy of Sciences of the USSR, 1954; Jerusalem, Israel Program for Scientific Translations, 1961. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library; U.S. Dept. of Commerce, Office of Technical Services, OTS 60-51085.]
  • Z. A. Melzak (1961, received 15 May 1961), "An Informal Arithmetical Approach to Computability and Computation", Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961, pages 279-293.
  • Marvin Minsky (1961, received August 15, 1960). "Recursive Unsolvability of Post's Problem of 'Tag' and Other Topics in Theory of Turing Machines". Annals of Mathematics 74 (3): 437–455. JSTOR 1970290. doi:10.2307/1970290.
  • Marvin Minsky, Computation: Finite and Infinite Machines, Prentice–Hall, Inc., N.J., 1967. See Chapter 8, Section 8.2 "Unsolvability of the Halting Problem."
  • Christos Papadimitriou (1994). Computational Complexity, 1st ed. Addison Wesley. ISBN 0-201-53082-1. Chapter 2: Turing machines, pp. 19–56.
  • Pippenger, Nicholas; Fischer, Michael J. (1979), "Relations Among Complexity Measures", JACM 26 (3): 361–381, doi:10.1145/322123.322138.
  • A. Schönhage (1980), "Storage Modification Machines", SIAM Journal on Computing, Vol. 9, No. 3, August 1980.
  • John C. Shepherdson and H. E. Sturgis (1963, received December 1961), "Computability of Recursive Functions", Journal of the Association for Computing Machinery (JACM) 10:217-255, 1963.