It is now conventional wisdom that the brain is the seat of the mind; it is through the brain's workings alone that we think and feel and know.
But what is a brain, anyway?
My thoughts turned to this question while reading a recent New York Times piece about Sebastian Seung's project to map the brain by tracing out each of the trillions of links between individual neurons. This undertaking to map the system of connections that makes us what we are — to map what Seung called "the connectome" in his 2011 book — can seem, from a certain point of view, like a glorious and heroic step backward.
Trying to understand how the brain works by looking at the behavior of individual cells — so observed David Marr, one of modern cognitive science's foundational figures, writing in the late 1970s — would be like trying to understand how a bird flies by examining the behavior of individual feathers. To understand flight, you need to understand aerodynamics; only once you get a handle on that can you ask how a structure of feathers, or any other physical system — such as a manufactured airship — can harness aerodynamics in the service of flight.
And so it is with the brain: Before we can understand how it works, it would seem that we need to understand what it's doing. But you can't read that off the action of individual cells. Just try!
Imagine you were to stumble one day upon a computer on the beach, and imagine (very unrealistically) that you have never seen or heard of a computer before. How would you go about figuring out how it works? Well, one thing you could do would be to make a map of how all the detachable parts of the machine are connected. This piece of metal is soldered to this piece, which is stapled to this piece of plastic. And so on. Suppose you finished the job. Would you know what the thing before you is? Or how it works? Would your complicated, Rube-Goldberg-esque map of the connections between the parts even count as a model of the computer? Keep in mind that there are lots of different kinds of computers, made of lots of different materials, with lots of different types of parts and networks of connections. In fact, if Alan Turing was right (and Turing was right), the basic and essential job of a computer — the computing of computable functions — can be specified in entirely formal terms. The physical stuff of the computing machine is irrelevant to the question of what is being computed, and so it is also irrelevant to the question of how this — or any other — computer works.
The big upshot of Turing's insight (actually, the insight was first formulated in these terms by the philosopher Hilary Putnam, who was my teacher) is that the sorts of functional organization that make a computer what it is don't ever reduce to facts about the material composition of computing machines. And that's true even if, as a matter of fact, physical systems can and do perform computations. The question we confront is: What is it we are trying to understand when we try to understand the thing we find on the beach? What is the computer anyway?
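The point about functional organization can be made concrete with a toy sketch of my own (not from the essay): two procedures that are built entirely differently — one loops and accumulates, the other does a single piece of arithmetic — yet compute exactly the same function. At the level that matters for computation, they are the same "machine," even though a parts-map of each would look nothing alike.

```python
# Two structurally different "machines" that compute the same function:
# the sum of the first n positive integers.

def sum_by_iteration(n):
    # Machine 1: a loop that accumulates the total one step at a time.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    # Machine 2: Gauss's closed-form arithmetic, with no loop at all.
    return n * (n + 1) // 2

# Described functionally, the two are indistinguishable:
for n in range(100):
    assert sum_by_iteration(n) == sum_by_formula(n)
```

A wiring diagram of either procedure's inner workings would not, by itself, tell you that both are "adders of the first n integers"; that description lives at the functional level.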
And so we can ask: What is the brain?
Is the brain a data processor? A biologically evolved computer? If so, then isn't it obvious that mapping brain cells is as wrong-headed as mapping out the atoms in a smartphone to understand how it works? And if it isn't a computer, then don't we need to frame some conception of what it is doing in order to understand how it manages to do it?
I'm raising both a practical point and a point of principle. The practical point is that we need some conception of what the whole is for before we have a ghost of a chance of figuring out how it works. This is Marr's point about feathers and flight. But there is also a matter of principle: When it comes to complex functional systems — like computers, for sure, and, probably, like brains — the laws and regularities and connections that matter are themselves higher-level; they don't bottom out in laws framed in terms of neuronal units any more than they do in laws framed in terms of quantum mechanical processes. The point is not just that it is hard to understand the brain's holistic operation in terms of what cells are doing but, instead, that it might be impossible — like trying to understand the stock market in terms of quantum mechanics. Surely naturalism doesn't commit us to the view that it ought to be possible to frame a theory of the stock market in terms of physics?
Gareth Cook, who wrote the recent New York Times Magazine article on Seung's quest, was wise to refer to the Argentine writer Jorge Luis Borges's cautionary tale "On Exactitude in Science," about a map built as an exact, to-size replica of the territory being mapped. Such a map can't serve any explanatory purpose whatsoever; it won't be a useful map. My worry is that we already know that exactly the same thing is true of Seung's connectome.
Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe