Sometimes we have to work very hard just to realize that we don’t know what we think we know. This is one of philosophy’s first lessons. And it is a bitter one.
The point is relevant to our considerations of the brain basis of consciousness and cognition. We think we know that mental events happen in the brain, and so we seem to find confirmation for this everywhere we look.
We’ve known for centuries that injury to the brain produces psychological effects, and in the last century, first using single-cell recording techniques on animals, and later using brand new “imaging” technologies — actually, it’s somewhat misleading to call them this, but that’s a topic for another day — we’ve been able to establish significant and apparently robust correlations between neural phenomena (both localized and global) and psychological ones.
Given the existence of these kinds of robust correspondences between the neural and the mental, it is not surprising or controversial that we can tell what the brain is doing if we know what the mind is doing. For example, if you are seeing, there is activity in your visual cortex. And likewise, in principle at least, if we had methods for gathering neural data in real time — the sorts of methods that John-Dylan Haynes and his colleagues have begun to develop with great success — then we ought to be able to tell, on that basis, what a person is thinking, or feeling, or deciding.
Yes. But be careful. At the moment we are very far away from being able to do anything like this. Moreover, there are reasons of principle why this sort of “brain reading” must remain limited. Let me explain these points in turn.
1. Take the lie-detector as an object of comparison. Standard lie detectors measure a change in galvanic skin response (GSR) that is believed to correlate with lying. To obtain valid results, the tester needs first to establish a baseline GSR. So he or she begins by asking you simple questions — what is your name? and the like — for which there is a presumption that you will answer truthfully. Once the tester has figured out what your GSR profile looks like when you’re telling the truth, he or she can see whether the hard questions — questions where your veracity is in doubt — bring about any deviation from the baseline.
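The baseline-then-deviation logic the tester relies on can be sketched in a few lines of code. This is a toy illustration only, not real polygraph methodology: the readings, units, and the two-standard-deviation threshold are all my own assumptions.

```python
# Toy sketch of baseline-then-deviation detection. All numbers and the
# 2-sigma threshold are illustrative assumptions, not polygraph practice.
from statistics import mean, stdev

def flag_deviations(baseline, probes, n_sigmas=2.0):
    """Return indices of probe readings that deviate from the baseline
    by more than n_sigmas sample standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [i for i, x in enumerate(probes)
            if abs(x - mu) > n_sigmas * sigma]

# GSR readings (arbitrary units) while answering the easy questions...
baseline = [5.1, 4.9, 5.0, 5.2, 4.8]
# ...and while answering the hard ones.
probes = [5.0, 7.4, 5.1]

print(flag_deviations(baseline, probes))  # → [1]: the second hard answer stands out
```

The point to notice is how much the verdict depends on the frame we supply: the choice of baseline questions, the threshold, and the assumption that a deviation means what we think it means are all imposed from outside the data.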
“Brain reading,” of the sort that we are envisioning thanks to the advances of Haynes and others, is much more complicated, but the basic structure of the task — first establishing baselines, and then making sense of deviations — is the same. But now we are dealing with a problem that is astronomically more complicated. After all, there is an infinite number of possible objects of thought, and at least a very large number of ways in which one might think about things (are you thinking, worrying, wondering, hoping, expecting, intending, fearing, worshiping, doubting, wanting…?). And thought is itself just one of our very many mental conditions — sensation (feeling), emotion (affect), and mood are some of the others.
This means that before we can learn to read off, for example, your mind-states from your brain-states, we need to understand the basic, baseline patterns of regularity characteristic of you. For the reasons just stated, this seems like an enormous, maybe infinite, task. What is called for is a model of how you respond to everything. But this is like trying to make a map that shows everything, and so ends up merely reduplicating the reality we were trying to get a grip on in the first place.
Perhaps we will be able to develop rough-and-ready ways to train up computers to make sense of individual cases, although, for the reasons given, I am skeptical. But let us now confront the fact that each of us changes over time, so it isn’t even clear that the imagined ability to “decode” your neural states today would allow us reliably to do so tomorrow. And even more daunting: every brain is different. It isn’t at all clear that a Brain Reader trained up on you will work on me. And would a Brain Reader trained up to work on me also work on a Siberian shepherd, or a cook in New Delhi, or a young child in Helsinki?
We are a very long way off from knowing how to answer these questions.
2. We see that the practical questions trail off into questions of principle. But the matters of principle ramify. Consider this: when we speak of establishing baselines and “training up the detector,” what we are acknowledging is that neural states and processes only have meaning or significance in context. This is very important. It shows not that, absent information about context, we can’t make sense of what the brain is telling us; it shows, rather, that in the absence of context, the brain isn’t telling us anything at all. I will explain.
I mentioned “visual cortex” before. But what is this? “Visual cortex” does not refer to an anatomical structure (a bit of body, like the heart or the feet). It refers to a functional (that is, a neurophysiological) structure. Now, primary visual function is in fact anatomically realized, in normal humans, in a particular anatomical structure (at the back of the brain, in the occipital lobe). But it needn’t be. In development the brain is plastic, as mentioned in earlier discussions on this blog, and it is possible for visual function to migrate to other cortical systems. Crucially, it is a nontrivial matter — in my view, an impossibility — to specify what the functions of vision are without considering the wider behavioral, environmental, and social context of the perceiver. Seeing is something people and other animals do. It isn’t something the brain does.
Now granted, if we already know what seeing is, and already have a theory of the way genetic and environmental factors sculpt the brain, then yes, practical limitations aside, we can tell, by looking at what is going on in the brain, whether there is seeing going on. But we could not do this if we really confined ourselves to neural information alone.
But the in-principle limitations on brain-reading run deeper. We make sense of what the brain is doing by looking at its behavior in relation to our lives, and to the meanings, facts, situations and interests that define our lives. Remember, the brain is a piece of meat. And it doesn’t come packaged and labelled. We give brain structures and processes labels, and we do so by thinking about ways in which we can link, correlate, and associate what interests us — in the current case, our thinking, feeling lives — with the meat. In the absence of this brain-mind two-step, there is no visual cortex, as we’ve seen. There’s just stuff.
The point is a general one. What is a brain state, anyway? The brain doesn’t tell us. Meaning does not simply reveal itself in the brain. Remember, no two brains are alike, just as no two faces or fingerprints are alike. And no one brain stays the same over time. We can talk about brain states, and we can make meaningful judgments about whether two brains are alike in respect of this or that feature. But we can do this only when we’ve carefully framed what interests us. The brain itself doesn’t give us the frame! And what interests us is the brain in relation to our lives. It is our lives — our thoughts, feelings, desires, interests, etc — that lend us the vocabulary we need to describe what the brain is doing (as Daniel Dennett argued twenty-five years ago).
The idea that the brain is not only part of the story, but the whole story, is, well, unfounded. It is religious in its scope and reach. Mental phenomena are not neural phenomena. We have no better reason to think that mental lives happen in our brains than we do that speech happens in our mouths.