Is The Human Mind Algorithmic?

By Stuart Kauffman

My purpose in this blog is to argue that the human mind is not algorithmic. This is a contentious issue, for one of the received views in Cognitive Science and Neuroscience is that the mind is, and must be, algorithmic.

To make clear what I shall be arguing against, I will first present what might be called "the Standard View" of the mind as algorithmic, although calling it "the standard view" may be an overstatement.

First, we need a clear statement of what an algorithm actually is. This definition can be made in terms of the famous Turing machine, the basis of all contemporary, physically classical, computers.

Turing wished to make a "formal machine" that was at once a formalization of a Cartesian machine and a formalization of human calculation. Indeed, his language is full of "doing", "moving" and so forth.

A general Turing machine consists of an infinite tape with squares on it. Each square is either blank or holds one of a finite set of discrete, definite and distinct symbols. In the familiar case, everything can be done with the two symbols "1" and "0".

These symbols can be placed on the tape in specific positions initially. In addition, the machine has a "head" with a finite number of distinct, definitely different states, encoded by an alphabet of symbols that constitute its "internal states". At each moment of the computation, the head reads the square of the tape below it, blank, 1, or 0, and carries out two distinct, definite operations. First, given its internal state and the symbol read from the tape, the machine writes a symbol, blank, 0, or 1, on the square below it, and either stays where it is or moves one square to the left or right. Second, the head uses both its internal state and the symbol it read from the tape to move to a definite, discretely different, internal state.

This is an informal definition of the universal Turing machine. The initial distribution of symbols on the tape can be taken as the input data to the algorithm carried out by the Turing machine, and further symbols on the tape can constitute the program that the Turing machine uses to carry out its operations.

All contemporary computers are based on the Turing machine, which can be universal in the precise sense that all computable functions can be computed by the universal Turing machine, given input data and a program on the tape.

The triumph of this conception is easy to see in the computerized world we now live in.

I want to pause to note the idea of "definiteness" emphasized by Turing. There is no ambiguity at all in the Turing machine. Given the internal state of the machine and a discrete, definite symbol read from the tape, the machine can do only and exactly one definite thing, as described above.
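
To make this definiteness concrete, here is a minimal sketch of a Turing machine in Python. The transition table is my own illustrative invention, not anything from Turing's paper: a little machine that adds one to a binary number written on the tape. Note that every (state, symbol) pair maps to exactly one definite action.

```python
# A minimal Turing machine sketch (illustrative transition table, my
# own invention): this machine increments a binary number on the tape.
from collections import defaultdict

# transitions: (state, read_symbol) -> (write_symbol, move, next_state)
# Everything is definite: exactly one action per (state, symbol) pair.
TRANSITIONS = {
    ("scan_right", "1"): ("1", +1, "scan_right"),
    ("scan_right", "0"): ("0", +1, "scan_right"),
    ("scan_right", " "): (" ", -1, "carry"),  # passed the last digit
    ("carry", "1"): ("0", -1, "carry"),       # 1 + carry -> 0, carry on
    ("carry", "0"): ("1", 0, "halt"),         # 0 + carry -> 1, done
    ("carry", " "): ("1", 0, "halt"),         # overflow into a new digit
}

def run(tape_string):
    tape = defaultdict(lambda: " ", enumerate(tape_string))  # "infinite" blank tape
    head, state = 0, "scan_right"
    while state != "halt":
        write, move, state = TRANSITIONS[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("1011"))  # -> "1100" (binary 11 + 1 = 12)
```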

Now, what is the presumed connection to the human mind? It comes in three steps. First, early in the 20th Century, philosophers such as Bertrand Russell and the young Wittgenstein sought to place knowledge of the world on the firmest possible empirical foundation. We could be wrong that a chair is in the room, they reasoned, but we could hardly be wrong that we experienced what seemed to be a chair. This was formally simplified to "atomic propositions" about "sense data", such as "red here", "A flat now", or "hard here". There were two fundamental ideas in this effort. First, one could not be wrong about one's own sense data. Second, and critically, all empirical statements about the external world were to be built up by logical connections among such sense data statements. Thus "there is a chair in the room" was to be a formal shorthand, true if and only if a set of sense data statements linked together with AND, OR, and other logical connectives, along with "quantifiers" such as "There exists" and "For all", was true as the logical combination of sense data statements. In short, the idea was to reduce statements about the external world to a finite, necessary and sufficient set of logically interconnected sense data statements. Note again the definiteness: a logical AND, as in A AND B, is definite and is true if and only if both the A and B statements are true.
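
To see the ambition concretely, here is a toy sketch, entirely my own construction and not Russell's actual system: a world-statement rendered as a definite Boolean combination of atomic sense-data propositions.

```python
# A toy rendering (my own invention, not Russell's system) of the
# logical-atomist program: atomic sense-data propositions are plain
# booleans, and a world-statement is a definite Boolean combination.
sense_data = {
    "brown_patch_here": True,
    "hard_here": True,
    "flat_surface_at_seat_height": True,
    "four_vertical_extents_below": True,
}

# The hope: some finite formula like this is true if and only if
# "There is a chair in the room" is true. (The argument below is
# that no such formula exists.)
chair_in_room = (
    sense_data["brown_patch_here"]
    and sense_data["hard_here"]
    and sense_data["flat_surface_at_seat_height"]
    and sense_data["four_vertical_extents_below"]
)
print(chair_in_room)  # True
```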

But did this hopeful attempt work? No.

In the ensuing two decades it was realized that it was impossible to put the statement "There is a chair in the room" into one-to-one correspondence with a finite set of logically connected sense data statements. The technical arguments are long. But they drove Wittgenstein, who had written the culmination of this effort in his famous "Tractatus" as a young man, to reflect deeply and return, older and puzzled, to work out why the whole effort was so much bunk. To get the flavor of what Wittgenstein said, we need his notion of a "language game". It will be critical to the issue of whether the mind is algorithmic.

Consider, said Wittgenstein, a jury foreman who rises and says the words, "We find Jones guilty of murder." Now, said Wittgenstein, notice that "found guilty" assumes that we know the meanings of an interdefined circle of words such as "evidence", "admissible evidence", "legally responsible", "guilty under the law", "innocent", and so forth. Now, he argues, can we "reduce" such language to language about ordinary human actions? Suppose you do not know the meanings of the legal terms I used above. You enter the courtroom and hear and see the following: a man or woman stands up and says the words, "We find Jones guilty of murder." You do not understand what has been said. More, argued Wittgenstein, there is no way to reduce the statement "We find Jones guilty of murder" to a logically necessary and sufficient set of statements about ordinary human actions, free of legal terms. We cannot replace what the foreman says by a combination of statements about ordinary human actions! The legal set of words constitutes what Wittgenstein came to call a "language game."

Similarly, suppose we monitored sound vibrations in the courtroom and made a pixel movie of the events. Could we find a necessary and sufficient set of sound vibrations and pixel configurations that would be logically true if and only if "We find Jones guilty" is true? No. Human action statements are a different language game from those reporting pixels and sound vibrations.

Bear language games in mind as we go forward, for we naively think we learn some fundamental set of concepts as children, and that all other concepts are somehow logical constructions, again by definite logical rules, from the initial set of concepts that constitutes a "basement language." We don't. We learn legal language on its own level; it cannot be stated in human action language alone. Philosophers have come to agree that there is no basement language.

Now, setting aside Wittgenstein, whose towering "Philosophical Investigations" came after Turing, who in turn came after the "Tractatus", let us return to the steps toward the human mind.

In 1943, Warren McCulloch and Walter Pitts published a seminal paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity". What these authors showed is that any logical statement of the kind Russell wanted could be computed by an appropriate acyclic, feedforward network of "formal neurons". A formal neuron is a device capable of only two states, 1 and 0, which receives inputs from other "upstream" neurons in the feedforward network and computes a logical, or Boolean, function on its inputs. So an infinite formal neural network was as powerful as a universal Turing machine.
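
Here is a sketch of such a formal neuron, assuming the common threshold formulation, with weights and thresholds of my own choosing: a single unit computes AND, OR, or NOT, and a small feedforward network composes them into XOR, which no single threshold unit can compute.

```python
# A McCulloch-Pitts-style formal neuron, in the common threshold
# formulation (weights and thresholds are my illustrative choices):
# the neuron fires (1) if the weighted sum of its 0/1 inputs reaches
# its threshold, else it stays silent (0).
def formal_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Boolean functions as single neurons:
AND = lambda a, b: formal_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: formal_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    formal_neuron([a], [-1], threshold=0)

# A two-layer feedforward net computing XOR, which is not linearly
# separable and so is beyond any single threshold unit:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```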

The connection to the mind came by identifying the 1 or 0 states of neurons with single sense data statements, such as "A flat now" or "hard here", or logical combinations of them. Then the network could calculate any definite logical combination of the initial set of sense data statements, and if these represented the physical "firing" or "non-firing" states of specific real neurons, the identification was complete. The McCulloch-Pitts paper was enormously influential, and a founding document of early cybernetics.

It is but a step from here to the mind and a hoped-for neuroscience. McCulloch and Pitts restricted themselves to feedforward networks, but one can consider networks with feedback loops. In general, such networks are a bit like a mountainous region with lakes in its valleys and streams draining into their basins. The "state" of a formal neural network is the current 1 or 0 activities of all its N neurons. In the simplest case, a central imaginary clock ticks off discrete time moments. At each moment, every neuron looks at the activities, 1 or 0, of its own inputs, consults the logical, Boolean, function governing its behavior, and assumes the definite next 1 or 0 value. So the network passes from state to state to state along what is called a trajectory over time steps. But the system has a finite number of states, 2 to the Nth power, so eventually the network hits a state it was in before. Since the network is deterministic, it thereafter cycles repeatedly around a "state cycle", called mathematically "an attractor". The attractor is the analogue of the lake. The different sequences of states, that is, different trajectories, that flow to the same state cycle attractor are like the streams flowing into the same lake. The lake and streams together constitute the "basin of attraction" of the attractor.
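
Here is a minimal sketch of such a network with feedback, under the synchronous-update scheme just described; the three-neuron wiring and its Boolean rules are my own illustrative choices. Because the state space is finite and the dynamics deterministic, every trajectory must fall onto a state cycle, which the code detects.

```python
# A tiny formal neural network with feedback loops (wiring and Boolean
# rules are illustrative choices). Synchronous update: all neurons
# read the current state and switch at once.
import itertools

RULES = [
    lambda s: s[1] & s[2],  # neuron 0: AND of neurons 1 and 2
    lambda s: s[0] | s[2],  # neuron 1: OR of neurons 0 and 2
    lambda s: 1 - s[0],     # neuron 2: NOT of neuron 0
]

def step(state):
    return tuple(rule(state) for rule in RULES)

def attractor(state):
    """Follow the trajectory until a state repeats; return the state cycle."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    return trajectory[seen[state]:]  # the cycle is the attractor

for start in itertools.product((0, 1), repeat=3):
    print(start, "->", attractor(start))
```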

Now, a network of formal neurons with feedback loops can have many attractors. The next step has been to identify attractors with things like "memories", or "classifications" that the network carries out. All states in one basin of attraction are co-classified as the same. This is the heart of Connectionism in computer science, cognitive science and neuroscience, where in the last case we identify a neuron's firing or not firing with 1 or 0.
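
Extending the sketch above, the Connectionist move can be made explicit in a few lines: enumerate all 2 to the Nth states, follow each to its attractor, and co-classify states by the basin they fall into. The labeling convention is my own.

```python
# Group every state by the attractor its trajectory reaches: each
# basin of attraction becomes one "memory" or classification.
def basin_label(state):
    # Canonical label for an attractor: its lexicographically smallest state.
    return min(attractor(state))

basins = {}
for state in itertools.product((0, 1), repeat=3):
    basins.setdefault(basin_label(state), []).append(state)

for label, members in basins.items():
    print("attractor", label, "co-classifies", members)
```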

Then, if the mind is algorithmic, most workers would say it is something like the sketch above, with variations for asynchronous updating of the formal neurons, or stochastic noise in the behavior of the formal neurons so they sometimes do the "wrong" thing, given their Boolean function and the activities of their inputs.

This is not far from one dominant view. Will it work for the human mind? I think not. We should already be deeply suspicious, given Wittgenstein's language games. All is definite in the above computations, and there is no way to get from a formal network categorizing ordinary human behavior, if that could be done, to categorizing the outcomes of legal proceedings. One language game, the legal one, cannot be reduced to the human action level. Language games are not reducible, and no one has found a means to implement diverse language games in a network of formal neurons.

Another view of the non-algorithmic character of the human mind comes from trying to make it algorithmic. For example, computer scientists have invented the idea of "affordances" for object-oriented programming. Here a computer object representing a real carburetor is characterized by a finite, definite set of affordances: "Is a", "Has a", "Does a", "Needs a". This move is wonderful, and much has been done with it.
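
Here is a sketch of the idea, with invented affordance lists for the carburetor; the class and its entries are hypothetical, for illustration only. The object can answer queries about its uses solely from its finite, prestated set.

```python
# An object-oriented carburetor with a finite, definite set of
# affordances (all entries invented for illustration).
from dataclasses import dataclass, field

@dataclass
class CarburetorModel:
    is_a:    list = field(default_factory=lambda: ["engine_component"])
    has_a:   list = field(default_factory=lambda: ["float_chamber", "throttle_valve"])
    does_a:  list = field(default_factory=lambda: ["mix_fuel_and_air"])
    needs_a: list = field(default_factory=lambda: ["fuel_supply", "air_intake"])

    def affords(self, use):
        # Queries are answered only from the finite, prestated lists.
        return use in self.is_a + self.has_a + self.does_a + self.needs_a

carb = CarburetorModel()
print(carb.affords("mix_fuel_and_air"))  # True
print(carb.affords("paperweight"))       # False: never prestated, so invisible
```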

But do formal affordances suffice? I am convinced that the answer is "No".

Consider the humble screwdriver. Can you finitely list all the uses of screwdrivers in all contexts? Great for screwing in screws, of course, but it can also be used to open a paint can, prop open a door, wedge shut a door, scrape putty, be tied to the end of a stick and used to spear fish, serve as an objet d'art, a paperweight, a roller, a prop for a piece of cardboard... Note that this list includes many relational features, such as propping up a piece of cardboard against a wall. We cannot list all the relational features and purposes to which a screwdriver might be put. Think not? First consider James Bond in a pinch, or MacGyver, then see if you can list all the uses. You cannot. Yet in any concrete case, you'd race to tie the screwdriver to a bamboo stick to spear dinner on a South Pacific island. That is, we do these things, often easily, sometimes with effort; sometimes you don't think of it, but Jim invents a really novel use of the screwdriver.

Now consider our finite list of affordances. If the affordances do not contain in them, deducibly, "can be tied to the end of a stick to spear things like fish", you will never deduce such a novel functionality for the screwdriver.
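
The point can be made sharp with a toy deduction engine, again with invented facts and rules: forward chaining over a finite affordance set is closed, so a use that is neither prestated nor deducible from what is prestated can never appear.

```python
# Forward-chaining deduction over a finite affordance set (facts and
# rules invented for illustration). The closure never contains a use
# that was not prestated or derivable from prestated terms.
FACTS = {"screwdriver_is_rigid", "screwdriver_has_flat_blade"}
RULES = [
    ({"screwdriver_has_flat_blade"}, "can_turn_screws"),
    ({"screwdriver_has_flat_blade"}, "can_scrape_putty"),
    ({"screwdriver_is_rigid"},       "can_pry_paint_can_lids"),
]

def closure(facts, rules):
    derived, changed = set(facts), True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The novel use never appears, no matter how long we deduce:
print("can_spear_fish_tied_to_stick" in closure(FACTS, RULES))  # False
```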

I'll give a second example, one I have given before. A group of engineers wants to invent the tractor, and has a huge engine block. They mount it on a chassis, which promptly breaks. They mount the block on successively larger chassis, all of which break. At last, an engineer says, "You know, the engine block is so big and rigid, we can use the engine block itself as the chassis and hang everything off the engine block!" That counted as an invention: a new use for the engine block, using the rigidity of the block for a new functionality in the context of inventing the tractor. This is, in fact, how tractors are made. And the invention is the technological analogue of a Darwinian preadaptation, or exaptation, like the emergence of the swim bladder from the lungs of lungfish, where again a novel function, neutral buoyancy in the water column, emerged and changed evolution. We cannot prestate all possible human exaptations, as I have argued in seeking to break the Galilean Spell.

Neither the evolution of the biosphere nor the human mind is algorithmic, although the human mind can, of course, perform algorithmically. All this will bear on our philosophy of mind.