The philosophy of artificial intelligence attempts to answer questions such as:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Can a machine have a mind, mental states and consciousness in the same way that human beings do? Can machines feel?
  • Are human intelligence and machine intelligence the same thing? Is the human brain essentially a computer?

These three questions reflect the divergent interests of AI researchers, philosophers and cognitive scientists, respectively. The answers to them depend on how "intelligence" and "consciousness" are defined and on exactly which "machines" are under discussion.

Important propositions in the philosophy of artificial intelligence include:

  • Turing's "polite convention": If a machine behaves in a manner indistinguishable from that of a human being, then it is as intelligent as a human being.[1]
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."[2]
  • Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[3]
  • Searle's strong AI hypothesis: "An appropriately programmed computer with the right inputs and outputs would have a mind in exactly the same sense that human beings have minds."[4]
  • Hobbes's mechanism: "Reason is nothing but reckoning."[5]

Can a machine demonstrate intelligence?

Is it possible to create a machine capable of solving all the problems that humans solve using their reasoning?

This question defines what machines will be able to do in the future and guides the direction of AI research: it considers only the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer it, it does not matter whether a machine is really thinking (as a person thinks) or is just acting as if it were thinking.[6]

The position of most AI researchers essentially follows what was set out in the introduction to the proposal for the 1956 Dartmouth Conference:

  • Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.[2]

Arguments against this premise try to show that it is impossible to build a complete AI system, whether because there are limits on computational power or because there is some quality of the human mind that is necessary for thinking and that cannot yet be reproduced by a machine or by any other method. Arguments in favor try to show that such a system is possible.

The first step in answering the question is to define "intelligence" clearly.

Intelligence

The "standard interpretation" of the Turing test.[7]

Turing test

 Main article: Turing test

Alan Turing, in his famous and seminal 1950 essay,[8] reduced the problem of defining intelligence to a simple question about conversation. He suggested that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of the experiment uses an online chat room, in which one of the participants is a real person and the other is a computer program. The program passes the test if no one can tell which of the two participants is human (a minimal sketch of this protocol appears after the statement below).[1] Turing observes that no one (except philosophers) ever asks the question "can people think?" He writes: "Instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks."[9] The Turing test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
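
Viewed operationally, the test is just a conversation protocol: a judge exchanges messages with two hidden interlocutors and guesses which one is the machine. The following Python sketch is a minimal illustration of that setup; the stub responders, the fixed question list, and the scoring are illustrative assumptions, not part of Turing's formulation.

```python
import random

def run_imitation_game(judge, human_respond, machine_respond, rounds=100):
    """Run simplified rounds of the imitation game.

    judge(transcript_a, transcript_b) returns 'A' or 'B': its guess for
    which transcript came from the machine. Returns the fraction of rounds
    in which the judge guessed wrong (i.e. the machine went undetected).
    """
    questions = ["What is your favorite memory?", "Explain a joke you like."]
    fooled = 0
    for _ in range(rounds):
        machine_is_a = random.random() < 0.5  # hide the machine's position
        a = machine_respond if machine_is_a else human_respond
        b = human_respond if machine_is_a else machine_respond
        transcript_a = [(q, a(q)) for q in questions]
        transcript_b = [(q, b(q)) for q in questions]
        guess = judge(transcript_a, transcript_b)
        if guess != ("A" if machine_is_a else "B"):
            fooled += 1
    return fooled / rounds

# Stub participants; a real run would use a person and a chat program.
human = lambda q: "A long walk on the beach last summer."
machine = lambda q: "A long walk on the beach last summer."  # mimics the human
judge = lambda ta, tb: random.choice(["A", "B"])  # cannot tell them apart

print(run_imitation_game(judge, human, machine))  # hovers around 0.5
```

A judge that does no better than chance (a result near 0.5) is exactly the failure to distinguish which participant is human that the test asks about.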

One objection to the Turing test is that it is explicitly anthropomorphic. If our goal is to create machines that are more intelligent than people, why should we insist that they resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool even other pigeons'".[10]

Intelligent agent

 
A simple reflex agent.

Recent AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.[11]

  • If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge, then it is intelligent.[12]

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for human traits that we may not want to consider intelligent, like the ability to be insulted or the temptation to lie. They have the disadvantage that they fail to make the commonsense differentiation between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence and consciousness.[13]
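
The thermostat example can be made concrete. Below is a minimal sketch of a simple reflex agent in Russell and Norvig's sense, in which behavior is nothing more than a fixed mapping from the current percept to an action; the temperature thresholds are arbitrary illustrative values.

```python
def thermostat_agent(percept: float) -> str:
    """A simple reflex agent: the action depends only on the current percept.

    percept: the current room temperature in degrees Celsius.
    """
    if percept < 18.0:    # condition-action rule: too cold, so heat
        return "heat_on"
    if percept > 22.0:    # too warm, so stop heating
        return "heat_off"
    return "no_op"

# A "performance measure" for this agent could be the fraction of time the
# room stays inside the 18-22 degree band; the agent acts so as to keep that
# measure high, which is all the agent definition of intelligence requires.
for temperature in [15.0, 20.0, 25.0]:
    print(temperature, thermostat_agent(temperature))
```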

Arguments in favor

A brain can be simulated

 Main article: Artificial brain
An MRI scan of a normal adult human brain

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device."[14] This argument, first introduced as early as 1943[15] and vividly described by Hans Moravec in 1988,[16] is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.[17] A non-real-time simulation of a thalamocortical model that has the size of the human brain (10¹¹ neurons) was performed in 2005[18] and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors (see also [19]).
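
The figures quoted for that 2005 run imply a slowdown factor that is easy to work out:

```python
# Back-of-envelope arithmetic from the numbers reported above:
# 50 days of wall-clock time to simulate 1 second of brain dynamics.
seconds_simulated = 1
wall_clock_seconds = 50 * 24 * 60 * 60  # 50 days

slowdown = wall_clock_seconds / seconds_simulated
print(f"slowdown factor: {slowdown:,.0f}x")  # 4,320,000x

# Under naive linear scaling (which real clusters rarely achieve), running
# in real time would need roughly this many processors of the same kind:
print(f"processors for real time: {27 * slowdown:,.0f}")  # ~116,640,000
```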

Few disagree that a brain simulation is possible in theory, even among critics of AI such as Hubert Dreyfus and John Searle.[20] However, Searle points out that, in principle, anything can be simulated by a computer; thus, stretching the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes.[21] Thus, merely mimicking the functioning of a brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind.

Human thinking is symbol processing

 Main article: Physical symbol system

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:

  • A physical symbol system has the necessary and sufficient means of general intelligent action.[3]

This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).[22] Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":

  • The mind can be viewed as a device operating on bits of information according to formal rules.[23]

A distinction is usually made between the kind of high-level symbols that directly correspond with objects in the world, such as <dog> and <tail>, and the more complex "symbols" that are present in a machine like a neural network. Early research into AI, called "good old-fashioned artificial intelligence" (GOFAI) by John Haugeland, focused on these kinds of high-level symbols.[24]
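
As a toy illustration of what GOFAI-style high-level symbol manipulation looks like, the sketch below stores facts as triples of explicit symbols and derives new facts by rule-based inference. It is a minimal illustration of the general idea, not a reconstruction of any historical system.

```python
# Facts as triples of explicit, human-readable symbols, GOFAI style.
facts = {("fido", "is-a", "dog"), ("dog", "has-part", "tail")}

def forward_chain(facts):
    """Apply one rule to closure: X is-a Y and Y has-part Z => X has-part Z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is-a" and r2 == "has-part" and y == y2:
                    new_fact = (x, "has-part", z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(("fido", "has-part", "tail") in forward_chain(facts))  # True
```

Every step here is a manipulation of symbols that mean something to the programmer, which is exactly the level of description the physical symbol system hypothesis is about.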

Arguments against the physical symbol system hypothesis

These arguments show that human thinking does not consist (solely) of symbol manipulation. They do not show that artificial intelligence is impossible, only that it requires more than symbol manipulation.

Gödelian anti-mechanist arguments

In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism.[25] Philosophers John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument.[26] Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is provably impossible for a Turing machine (and, by an informal extension, any known type of mechanical computer) to do; therefore, the Gödelian concludes that human reasoning is too powerful to be captured in a machine.
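
The formal core these arguments lean on can be stated compactly. The block below gives the standard schematic shape of the result, where Prov_F is the provability predicate of the system F and the corner quotes denote the numeric code of a sentence.

```latex
% For any consistent, effectively axiomatized system F extending arithmetic,
% Goedel constructs a sentence G_F formalizing "G_F is not provable in F".
\begin{align*}
  F &\vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
      && \text{(diagonal lemma)} \\
  \mathrm{Con}(F) &\;\Longrightarrow\; F \nvdash G_F
      && \text{(first incompleteness theorem)} \\
  \omega\text{-}\mathrm{Con}(F) &\;\Longrightarrow\; F \nvdash \neg G_F
\end{align*}
% The contested anti-mechanist step adds the premise that a human can "see"
% that G_F is true, and concludes that human reasoning exceeds every such F.
```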

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate.[27][28][29] This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."[30]

More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to prove everything in order to be intelligent.[31]

Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying".[32] But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider:

  • Lucas can't assert the truth of this statement.[33]

This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.[34]

After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing-computable tasks and are still restricted to tasks within the scope of Turing machines. See Quantum computer - relation to computational complexity theory. By Penrose and Lucas's arguments, existing quantum computers are not sufficient, so Penrose looks for some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron.[35] However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.[36]

Dreyfus: the primacy of unconscious skills

Hubert Dreyfus argued that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills would never be captured in formal rules.[37]

Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and intelligence, where he had classified this as the "argument from the informality of behavior."[38] Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[39]

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning.[40] The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[41] Computational intelligence paradigms, such as neural nets and evolutionary algorithms, are mostly directed at simulating unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high-level symbol manipulation or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[42]

Can a machine have a mind, consciousness and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around the hypothesis that John Searle defined as "strong AI":

  • A physical symbol system can have a mind and mental states.[4]

Searle distinguished this hypothesis from what he called "weak AI":

  • A physical symbol system can act intelligently.[4]

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.[4]

Neither of Searle's two positions is of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote, "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]."[43] Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[44]

There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence." (See artificial consciousness.)

Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".

Consciousness, minds, mental states, intentionality

The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's élan vital: an invisible energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" is presented as a fully human character, with intelligence, drive, desires, insight, pride and so on. At other times, the words "mind" or "consciousness" are used as a kind of synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, or mean something or understand something. "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle.[45] What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem."[46] A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?[47]

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain.[48] The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments against

Searle's Chinese room

 Main article: Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates "general intelligent action." Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.[49]

Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains."[50] He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words, "brains cause minds."[51]

Leibniz's mill, Davis's telephones and Blockhead

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill.[52] In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym".[53] Ned Block also proposed his "blockhead" argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.
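
Block's "blockhead" is easy to caricature in code: conversational behavior driven entirely by a lookup table of "see this, do that" rules, with nothing left over that could plausibly be called understanding. A minimal sketch, with two placeholder entries standing in for Block's astronomically large (but finite) table:

```python
# Blockhead: conversation as a pure lookup table of "see this, do that" rules.
# In Block's thought experiment the table is finite but covers every possible
# conversation up to some length; here it covers exactly two utterances.
RULES = {
    "hello": "Hi there! How are you today?",
    "what is your name?": "My name is Blockhead.",
}

def blockhead(utterance: str) -> str:
    return RULES.get(utterance.lower().strip(), "I don't follow.")

print(blockhead("Hello"))  # fluent-looking behavior from a mindless table
```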

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.

  • The systems reply and the virtual mind reply:[54] This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor).
  • Speed, power and complexity replies:[55] Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply:[56] To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."[57]
  • Brain simulator reply:[58] What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply:[59] Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines.
A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) can't be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?

 Main article: Computational theory of mind

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules).[60] The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.[61]

This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

  • Reasoning is nothing but reckoning.[5]

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

  • Mental states are just implementations of (the right) computer programs[62]

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).[62]

Related questions

Alan Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.[63]

Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression."[63] All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people".[64] Fear is a source of urgency. Empathy is a necessary component of good human-computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[64] Daniel Crevier writes, "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[65]
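
Read functionally, an "emotion" in this sense is just a signal that reweights an agent's choice of action. The sketch below treats fear as a weight that shifts an agent from payoff toward safety; the scoring scheme and the numbers are illustrative assumptions, not a model drawn from Moravec or Crevier.

```python
def choose_action(actions, fear=0.0):
    """Pick the highest-scoring action; 'fear' shifts weight toward safety.

    actions: list of (name, payoff, safety) tuples with values in [0, 1].
    Functionally, the "emotion" is nothing more than a reweighting signal
    that channels behavior; no claim about subjective feeling is made.
    """
    def score(action):
        _name, payoff, safety = action
        return (1 - fear) * payoff + fear * safety
    return max(actions, key=score)[0]

actions = [("explore_cliff_edge", 0.9, 0.1), ("stay_on_path", 0.5, 0.9)]
print(choose_action(actions, fear=0.0))  # explore_cliff_edge
print(choose_action(actions, fear=0.8))  # stay_on_path: fear adds urgency
```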

However, emotions can also be defined in terms of their subjective quality, of what it feels like to have an emotion. The question of whether the machine actually feels an emotion, or whether it merely acts as if it is feeling an emotion is the philosophical question, "can a machine be conscious?" in another form.[43]

Can a machine be self-aware?

"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, it is obvious that a program can be written that can report on its own internal states, such as a debugger.[63]

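Turing's reduced question has a trivially positive answer at the level of state reporting, as a short sketch shows: a Python function can inspect and describe its own local state with the standard inspect module. Whether such self-reporting amounts to being "the subject of its own thought" is, of course, the philosophical question itself.

```python
import inspect

def report_own_state(x, y):
    """Compute something, then report this function's own internal state."""
    total = x + y
    frame = inspect.currentframe()  # a handle on this very invocation
    print("I am", frame.f_code.co_name)
    print("my locals are", {k: v for k, v in frame.f_locals.items()
                            if k != "frame"})
    return total

report_own_state(2, 3)
# I am report_own_state
# my locals are {'x': 2, 'y': 3, 'total': 5}
```
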
Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.[66] He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways.[67] It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.)

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings.[68] Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion.
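
Eureqa's task, finding a formula that fits observed data, can be caricatured in a few lines. The sketch below brute-forces a tiny hand-written hypothesis space instead of Eureqa's actual evolutionary search over formula structures, so it is an illustrative simplification only.

```python
# Observations of a law the program does not know: here, y = x**2.
data = [(x, x ** 2) for x in range(-5, 6)]

# A tiny hypothesis space of candidate formulas. Eureqa searches a vastly
# larger space of symbolic expressions; these four stand in for it.
candidates = {
    "y = x": lambda x: x,
    "y = 2*x": lambda x: 2 * x,
    "y = |x|": lambda x: abs(x),
    "y = x**2": lambda x: x ** 2,
}

def squared_error(formula):
    return sum((formula(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: squared_error(candidates[name]))
print(best)  # "y = x**2", the candidate that best fits the data
```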

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.[43]

The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Singularity Institute). (The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind.)

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity."[69] He suggests that it may be somewhat or possibly very dangerous for humans.[70] This is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[69]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[71] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[72][73]

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[74] They point to programs like the Language Acquisition Device which can emulate human interaction.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[75]

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection" and considered it on its own merits:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.[76]

Conclusion and themes for future research

John McCarthy, who originated the concept of AI and created the LISP family of programming languages, says that some philosophers of AI will do battle with the idea that:

  • AI is impossible (Dreyfus),
  • AI is immoral (Weizenbaum),
  • the concept of AI is, in principle, incoherent (Searle).

See also

Notes

  1. This is a paraphrase of the essential point of the Turing test. Turing 1950, Haugeland 1985, pp. 6–9, Crevier 1993, p. 24, Russell & Norvig 2003, pp. 2–3 and 948
  2. McCarthy et al. 1955. This assertion was printed in the program for the Dartmouth Conference of 1956, widely considered the "birth of AI." See also Crevier 1993, p. 28
  3. Newell & Simon 1976 and Russell & Norvig 2003, p. 18
  4. This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  5. Hobbes 1651, chpt. 5
  6. See Russell & Norvig 2003, p. 3, where they make the distinction between acting rationally and being rational, and define AI as the study of the former.
  7. Saygin 2000.
  8. Turing 1950 and see Russell & Norvig 2003, p. 948, where they call his paper "famous" and write "Turing examined a wide variety of possible objections to the possibility of intelligent machines, including virtually all of those that have been raised in the half century since his paper appeared."
  9. Turing 1950 under "The Argument from Consciousness"
  10. Russell & Norvig 2003, p. 3
  11. Russell & Norvig 2003, pp. 4–5, 32, 35, 36 and 56
  12. Russell and Norvig would prefer the word "rational" to "intelligent".
  13. Russell & Norvig (2003, pp. 48–52) consider a thermostat a simple form of intelligent agent, known as a reflex agent. For an in-depth treatment of the role of the thermostat in philosophy see Chalmers (1996, pp. 293–301) "4. Is Experience Ubiquitous?" subsections What is it like to be a thermostat?, Whither panpsychism?, and Constraining the double-aspect principle.
  14. Dreyfus 1972, p. 106
  15. Pitts & McCullough 1943
  16. Moravec 1988
  17. Kurzweil 2005, p. 262. Also see Russell & Norvig, p. 957 and Crevier 1993, pp. 271 and 279. The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-1970s and was touched on by Zenon Pylyshyn and John Searle in 1980
  18. Eugene Izhikevich (27 October 2005). "Eugene M. Izhikevich, Large-Scale Simulation of the Human Brain". Vesicle.nsi.edu. Retrieved 29 July 2010.
  19. http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf
  20. Hubert Dreyfus writes: "In general, by accepting the fundamental assumptions that the nervous system is part of the physical world and that all physical processes can be described in a mathematical formalism which can in turn be manipulated by a digital computer, one can arrive at the strong claim that the behavior which results from human 'information processing,' whether directly formalizable or not, can always be indirectly reproduced on a digital machine." (Dreyfus 1972, pp. 194–5). John Searle writes: "Could a man made machine think? Assuming it is possible to produce artificially a machine with a nervous system, ... the answer to the question seems to be obviously, yes ... Could a digital computer think? If by 'digital computer' you mean anything at all that has a level of description where it can be correctly described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think." (Searle 1980, p. 11)
  21. Searle 1980, p. 7
  22. Searle writes "I like the straight forwardness of the claim." Searle 1980, p. 4
  23. Dreyfus 1979, p. 156
  24. Haugeland 1985, p. 5
  25. Gödel, Kurt, 1951, Some basic theorems on the foundations of mathematics and their implications in Solomon Feferman, ed., 1995. Collected works / Kurt Gödel, Vol. III. Oxford University Press: 304-23. - In this lecture, Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a "certain fact".
  26. Lucas 1961, Russell & Norvig 2003, pp. 949–950, Hofstadter 1979, pp. 471–473,476–477
  27. Graham Oppy (20 January 2015). "Gödel's Incompleteness Theorems". Stanford Encyclopedia of Philosophy. Retrieved 27 April 2016. "These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail."
  28. Stuart J. Russell; Peter Norvig (2010). "26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection". Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-604259-7. "...even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations."
  29. Mark Colyvan. An introduction to the philosophy of mathematics. Cambridge University Press, 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."
  30. LaForte, G., Hayes, P. J., Ford, K. M. 1998. Why Gödel's theorem cannot refute computationalism. Artificial Intelligence, 104:265-286, 1998.
  31. Russell & Norvig 2003, p. 950. They point out that real machines with finite memory can be modeled using propositional logic, which is formally decidable, and Gödel's argument does not apply to them at all.
  32. Hofstadter 1979
  33. According to Hofstadter 1979, pp. 476–477, this statement was first proposed by C. H. Whiteley
  34. Hofstadter 1979, pp. 476–477, Russell & Norvig 2003, p. 950, Turing 1950 under "The Argument from Mathematics" where he writes "although it is established that there are limitations to the powers of any particular machine, it has only been stated, without sort of proof, that no such limitations apply to the human intellect."
  35. Penrose 1989
  36. Litt, Abninder; Eliasmith, Chris; Kroon, Frederick W.; Weinstein, Steven; Thagard, Paul (6 May 2006). "Is the Brain a Quantum Computer?". Cognitive Science. 30 (3): 593–603. doi:10.1207/s15516709cog0000_59
  37. Dreyfus 1972, Dreyfus 1979, Dreyfus & Dreyfus 1986. See also Russell & Norvig 2003, pp. 950–952, Crevier 1993, pp. 120–132 and Hearn 2007, pp. 50–51
  38. Russell & Norvig 2003, pp. 950–51
  39. Turing 1950 under "(8) The Argument from the Informality of Behavior"
  40. Russell & Norvig 2003, p. 52
  41. See Brooks 1990 and Moravec 1988
  42. Crevier 1993, p. 125
  43. Turing 1950 under "(4) The Argument from Consciousness". See also Russell & Norvig 2003, pp. 952–953, where they identify Searle's argument with Turing's "Argument from Consciousness."
  44. Russell & Norvig 2003, p. 947
  45. "[P]eople always tell me it was very hard to define consciousness, but I think if you're just looking for the kind of commonsense definition that you get at the beginning of the investigation, and not at the hard nosed scientific definition that comes at the end, it's not hard to give commonsense definition of consciousness." The Philosopher's Zone: The question of consciousness. Also see Dennett 1991
  46. Blackmore 2005, p. 2
  47. Russell & Norvig 2003, pp. 954–956
  48. For example, John Searle writes: "Can a machine think? The answer is, obviously, yes. We are precisely such machines." (Searle 1980, p. 11)
  49. Searle 1980. See also Cole 2004, Russell & Norvig 2003, pp. 958–960, Crevier 1993, pp. 269–272 and Hearn 2007, pp. 43–50
  50. Searle 1980, p. 13
  51. Searle 1984
  52. Cole 2004, 2.1, Leibniz 1714, 17
  53. Cole 2004, 2.3
  54. Searle 1980 under "1. The Systems Reply (Berkeley)", Crevier 1993, p. 269, Russell & Norvig 2003, p. 959, Cole 2004, 4.1. Among those who hold to the "system" position (according to Cole) are Ned Block, Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey. Those who have defended the "virtual mind" reply include Marvin Minsky, Alan Perlis, David Chalmers, Ned Block and J. Cole (again, according to Cole 2004)
  55. Cole 2004, 4.2 ascribes this position to Ned Block, Daniel Dennett, Tim Maudlin, David Chalmers, Steven Pinker, Patricia Churchland and others.
  56. Searle 1980 under "2. The Robot Reply (Yale)". Cole 2004, 4.3 ascribes this position to Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey
  57. Quoted in Crevier 1993, p. 272
  58. Searle 1980 under "3. The Brain Simulator Reply (Berkeley and M.I.T.)" Cole 2004 ascribes this position to Paul and Patricia Churchland and Ray Kurzweil
  59. Searle 1980 under "5. The Other Minds Reply", Cole 2004, 4.4. Turing 1950 makes this reply under "(4) The Argument from Consciousness." Cole ascribes this position to Daniel Dennett and Hans Moravec.
  60. Dreyfus 1979, p. 156, Haugeland 1985, pp. 15–44
  61. Horst 2005
  62. Harnad 2001
  63. Turing 1950 under "(5) Arguments from Various Disabilities"
  64. Quoted in Crevier 1993, p. 266
  65. Crevier 1993, p. 266
  66. Turing 1950 under "(6) Lady Lovelace's Objection"
  67. Turing 1950 under "(5) Argument from Various Disabilities"
  68. Katz, Leslie (2 April 2009). "Robo-scientist makes gene discovery-on its own | Crave - CNET". News.cnet.com. Retrieved 29 July 2010.
  69. Scientists Worry Machines May Outsmart Man, by John Markoff, NY Times, July 26, 2009.
  70. The Coming Technological Singularity: How to Survive in the Post-Human Era, by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, (c) 1993 by Vernor Vinge.
  71. Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
  72. Science New Navy-funded Report Warns of War Robots Going "Terminator", by Jason Mick (Blog), dailytech.com, February 17, 2009.
  73. Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley engadget.com, Feb 18th 2009.
  74. AAAI Presidential Panel on Long-Term AI Futures 2008-2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  75. Article at Asimovlaws.com, July 2004, accessed 7/27/09. Archived 30 June 2009 at the Wayback Machine.
  76. Turing 1950 under "(1) The Theological Objection", although it should be noted that he also writes "I am not very impressed with theological arguments whatever they may be used to support"

References

Page numbers above and diagram contents refer to the Lyceum pdf print of the article.


