Part II of “Artificial Intelligence or Mechanically Mindless?”
I was writing an extensive explanation covering the brain/computer analogy (or metaphor), why it’s wrong, where it touches upon what is right, etc. Some 20+ pages in, I realized that I wasn’t writing a blog post anymore. I was beginning a book. So, to provide the foundations needed for Part III, I’ll cheat. I’m going to simply quote various sources I can later build on, thus sparing us all the countless pages of rewriting I’d otherwise need.
We often find comparisons between the brain and the mind, or the brain and computers, or even between computers/programs and brain/mind. Among other things, an entire field within the cognitive sciences contributes to such misunderstandings: computational neuroscience. So let’s start by differentiating the views of even hardcore reductionist cognitive scientists from those who think the mind is just a program run by a biological computer:
“the computational theory of mind is very different from the computer metaphor that Professor Edelman has alluded to in his presentation. As he pointed out, there are many ways in which commercially available computers are radically different from brains. Computers are serial; brains are parallel. Computers are fast; brains are slow. Computers have deterministic components; brains have noisy components. Computers are assembled by an external agent; brains have to assemble themselves. Computers display screen-savers with flying toasters; brains do not. But the claim is not that commercially available computers are a good model for the brain. Rather, the claim is that the answer to the question ‘What makes brains intelligent?’ may overlap with the question ‘What makes computers intelligent?’ The common feature, I suggest, is information-processing, or computation. An analogy is that when we want to understand how birds fly, we invoke principles of aerodynamics that also apply to airplanes. But that doesn’t mean that we are committed to an airplane metaphor for birds and should ask whether birds have complimentary beverage service.”
Pinker, S. (1999). How the Mind Works. Annals of the New York Academy of Sciences, 882: 119–127.
What did Edelman say? Well, unfortunately, in lieu of his speech the Annals published a paper Edelman had written earlier, but it was much the same, and contains the following:
“the selectional system in the brain is capable of dynamic reconstruction of outputs under the constraints of value, selecting from a large number of degenerate possibilities. As such, this view rejects codes, representations, and explicit coded storage. Memory is nonrepresentational and is considered to reflect a dynamic capacity to recreate an act (or specifically to suppress one) under such constraints. The brain is not a computer, nor is the world an unambiguous piece of tape defining an effective procedure and constituting ‘symbolic information.’ Such a selectional brain system is endlessly more responsive and plastic than a coded system.”
Edelman, G. M. (1998). Building a picture of the brain. Daedalus, 127(2): 37–69.
“The brain is not a computer”!?? Clearly, but that doesn’t mean it’s not similar enough to make comparison valuable, right?
“The brain isn’t a computer. The differences are so vast any similarities are undercut by them.”
“no formal system is able to generate anything even remotely mind-like. The asymmetry between the brain and the computer is complete, all comparisons are flawed, and the idea of a computer-generated consciousness is nonsense.”
Torey, Z. (2009). The crucible of consciousness: An integrated theory of mind and brain. MIT press.
Ouch. Well, Torey’s probably the only nutcase arguing that a biological information processing system is completely different from an artificial one. Oh, wait.
“Why would the mind work like a computer? This book is aimed—like some other recent books (e.g., Kelso, 1995; Port & van Gelder, 1995; see also Fodor, 2000)—at responding to that question with the following answer: ‘It doesn’t.’”
Spivey, M. (2007). The Continuity of Mind. (Oxford Psychology Series Vol. 40). Oxford University Press.
Ok, let’s throw in something that is less opinion and more science:
“The biological “hardware” on which the brain is based is extremely slow. A typical interval between the spikes of an individual neuron is about 50 ms and the time needed to propagate a signal from one neuron to another is not much shorter than such an interval. This corresponds to a characteristic frequency of merely 100 Hz. Recalling that modern digital computers should operate at a frequency of 10⁹ Hz and yet are not able to reproduce its main functions, we are led to conclude that the brain should work in a way fundamentally different from digital information processing.
Simple estimates indicate that spiking in populations of neurons must be synchronized in order to yield the known brain operations. ‘Humans can recognize and classify complex (visual) scenes within 400-500 ms. In a simple reaction time experiment, responses are given by pressing or releasing a button. Since movement of the finger alone takes about 200-300 ms, this leaves less than 200 ms to make the decision and classify the visual scene’ [Gerstner (2001)]. This means that, within the time during which the decision has been made, a single neuron could have fired only 4 or 5 times! The perception of a visual scene involves a concerted action of a population of neurons. We see that exchange of information between them should take place within such a short time that only a few spikes are generated by each neuron. Therefore, information cannot be encoded only in the rates of firing and the phases (that is, the precise moments of firing) are important. In other words, phase relationships in the spikes of individual neurons in a population are essential and the firing moments of neurons should be correlated.”
Manrubia, S. C., Mikhailov, A. S., & Zanette, D. H. (2004). Emergence of Dynamical Order: Synchronization Phenomena in Complex Systems. World Scientific.
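To make the quoted timing argument concrete, here’s a minimal back-of-envelope sketch using only the numbers from the passage above. The midpoint values for the two quoted ranges are my own assumption, purely for illustration:

```python
# Back-of-envelope arithmetic for the spike-timing argument quoted above.
# All figures come from the Manrubia et al. passage; the midpoints of the
# quoted ranges (450 ms, 250 ms) are assumptions for illustration only.

interspike_interval_ms = 50   # typical gap between spikes of a single neuron
scene_recognition_ms = 450    # midpoint of the quoted 400-500 ms
motor_response_ms = 250       # midpoint of the quoted 200-300 ms (finger movement)

# Time actually available to classify the scene and make the decision
decision_window_ms = scene_recognition_ms - motor_response_ms

# How many times a single neuron could have fired within that window
max_spikes = decision_window_ms // interspike_interval_ms

print(decision_window_ms)  # 200
print(max_spikes)          # 4
```

With roughly 200 ms available and a ~50 ms interval between spikes, each neuron gets only about 4 spikes in, which is why the authors conclude that firing rates alone can’t carry the information and precise spike timing must matter.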
We can find even more extreme views within biology and related fields that hold living (biological) systems are, in general, fundamentally different from computers. That is, it’s not just that the brain is different from a computer, it’s that single cells are too complex to simulate: “A Living System Must Have Noncomputable Models.”
Maybe I should have called this one “Part II-A” and call the next installment “Part II-B”. We’ll see.