I've heard Chalmers talk about this with loads of different people (e.g. Tom Stoppard discussed his play about it with him), but the best conversation I've come across was with the physicist Sean Carroll on his podcast Mindscape - Episode 25. The first 50 minutes of the podcast are particularly relevant, so here are the most important lines from that stretch:
- [Sean Carroll] David describes himself as a naturalist, someone who believes in just the natural world, not a supernatural one. Not a dualist who thinks there’s a disembodied mind or anything like that. But he’s not a physicalist. He thinks that the natural world not only has physical properties, but mental properties as well. He’s convinced of the problem, but he’s not wedded to any solutions yet.
- [David Chalmers] The hard problem of consciousness is the problem of explaining how physical processes in the brain somehow give rise to subjective experience. ... When it comes to explaining behaviour, we have a pretty good bead on how to explain that. In principle, you find a circuit in the brain, maybe a complex neural system, which maybe performs some computations, produces some outputs, generates the behaviour. Then, in principle, you’ve got an explanation. It may take a century or two to work out the details, but that’s roughly the standard model in cognitive science. This is what, 20 odd years ago, I called the easy problems. Nobody thinks they are easy in the ordinary sense. The sense in which they are easy is that we’ve got a paradigm for explaining them.
- [DC] The really distinctive problem of consciousness is posed not by the behavioural parts but by the subjective experience. By how it feels from the inside to be a conscious being. I’m seeing you right now. I have a visual image of colours and shapes that are sort of present to me as an element of the inner movie of the mind. I’m hearing my voice, I’m feeling my body, I’ve got a stream of thoughts running through my head. This is what philosophers call consciousness or subjective experience. I take it to be one of the fundamental facts about ourselves, that we have this kind of subjective experience.
- [SC] Sometimes I hear it glossed as "what it is like" to be a subjective agent.
- [DC] That’s a good definition of consciousness actually put forward by my colleague Thomas Nagel in an article back in 1974 called “What is it like to be a bat?” His thought was that we don’t know what it is like. We don’t know what a bat’s subjective experience is like. It’s got this weird sonar capacity that doesn’t correspond directly to anything we humans have. But presumably there is something it is like to be a bat. A bat is conscious. On the other hand, people would say there is nothing it is like to be a glass of water. If that’s right, the glass of water is not conscious. So, this “what it’s like” way of speaking is a good way of serving as an initial intuition pump for the difference we’re getting at between systems that are conscious and systems which are not.
- [SC] The other word that is sometimes invoked in this context is the “qualia” of the experiences we have. It is one thing to see the colour red, and a separate thing to have the experience of the redness of red.
- [DC] This word qualia may have gone a little out of favour over the last 20 years, but you used to have a lot of people speaking of qualia as a word for the sensory qualities that you come across in experience. The paradigmatic one would be the experience of red vs. the experience of green. There are many familiar questions about this. How do I know that my experience of the thing we call red is the same as the experience you have? Maybe our internal experiences are swapped. That would be inverted qualia, if my red were your green. ... We know that some people are colour blind. They can’t make a distinction between red and green. ... I have friends that have this and I’m often asking them, what is it like to be you? Is it all just shades of blue and yellow? We know that what it is like to be them can’t be what it is like to be us.
- [DC] When it comes to consciousness, we’re dealing with something subjective. I know I’m conscious not because I’ve measured my behaviour or anybody else’s behaviour, but because it’s something I’ve experienced directly from the first-person point of view. You’re probably conscious, but it’s not like I can give a straight up operational definition of it. We could come up with an AI that says it’s conscious. That would be very interesting. But would that settle the question of whether it’s having subjective experience? Probably not.
- [SC] Alan Turing noted a “consciousness objection” [to his Turing test], but said he can’t possibly test for that so it’s not meaningful.
- [DC] Yes. But it turns out consciousness is one of the central things that we value. A) It’s one of the central properties of our minds. B) Many people think it’s what actually gives lives meaning and value. If we weren’t conscious, if we didn’t have subjective experience, then we’d basically just be automata for whom nothing has any meaning or value. So I think when it comes to the question of whether sophisticated AIs are conscious or not, it’s going to be absolutely central to how we treat them, to whether they have moral status, whether we should care if they continue to live or die, whether they get rights, and so on.
- [SC] To get our cards fully on the table, neither of us are coming at this from a strictly dualist position. Neither of us are resorting to a Cartesian disembodied mind that is a separate substance. Right? As a first hypothesis, we both want to say that we are composed of atoms and obeying the laws of physics. Consciousness is somehow related to that but not an entirely separate category interacting with us. Is that fair to say?
- [DC] Yes, although there are different kinds and degrees of dualism. My background is in mathematics, computer science, and physics, so my first instincts are materialist. To try to explain everything in terms of the processes of physics: e.g. biology in terms of chemistry and chemistry in terms of physics. This is a wonderful great chain of explanation, but when it comes to consciousness, this is the one place where that great chain of explanation seems to break down. That doesn’t mean these are the properties of a soul or some religious thing which has existed since the beginning of time and will go on after our death. People call that substance dualism. Maybe there’s a whole separate substance that’s the mental substance and somehow that interacts and connects up with our physical bodies. That view, however, is much harder to connect to a scientific view of the world.
- [DC] The version I end up with is sometimes called property dualism. This is the idea that there are some extra properties of things in the universe. This is something we already have in physics. During Maxwell’s era, space and time and mass were seen as fundamental. Then Maxwell wanted to explain electromagnetism and there was a project that tried to explain it in terms of mass and space and time. That didn’t work. Eventually, we ended up positing charge as a fundamental property with some new laws of physics governing these electromagnetic phenomena and that became just an extra property in our scientific picture of the world. I’m inclined to think that something slightly analogous to this is what we have to do with consciousness.
- [SC] You think that even if neuroscientists got to the point where, every time a person was doing something we would all recognise as having a conscious experience, even a silent one—for example, experiencing the redness of red—they could point to exactly the same neural activity going on in the brain, you would say this still doesn’t explain my subjective experience?
- [DC] Yes. That’s in fact a very important research program going on right now. People call it the program of finding the neural correlates of consciousness (the NCC). We’re trying to find the NCC or neural systems that act precisely when you are conscious. This is a very important research program, but it’s one for correlation, not explanation. We could know that when a special kind of neuron fires in a certain pattern, that always goes along with consciousness. But the next question is why. Why is that? As it stands, nothing we get out of the neural correlates of consciousness comes close to explaining why that is.
- [DC] We need another fundamental principle that connects the neural correlates of consciousness with consciousness itself. Giulio Tononi, for example, has developed his Integrated Information Theory, where he says consciousness goes along with a mathematical measure of the integration of information, which he calls phi. The more phi you have, the more consciousness you have. Phi is a mathematically and physically respectable quantity that is very hard to measure, but in principle you could find it and measure it. There are questions of whether this is actually well defined in terms of the details of physics and physical systems, but it’s at least halfway to something definable. But even if he’s right that phi—this informational property—correlates perfectly with consciousness, there’s still this question of why. [A toy numerical illustration of what “integration of information” means follows these excerpts.]
- [DC] Prima facie, it looks like you could have had a universe where the integration of information is going on, but no consciousness at all. And yet, in our universe, there’s consciousness. How do we explain that fact? What I regard as the scientific thing to do at this point is to say that in science, we boil everything down into fundamental principles and laws, and we need to postulate a fundamental law that connects, say phi, with consciousness. Then that would be great, maybe that’s going to be the best we can do. In physics, there’s a fundamental law of gravitation, or a grand unified theory that unifies all these different forces. You end up with some fundamental principles and you don’t take them further. Something has to be taken as basic. Of course, you want to minimise our fundamental principles and properties as far as we can. Occam’s razor says don’t multiply entities without necessity. Every now and then, however, we have necessity. Maxwell was right about this with electromagnetism. Maybe I’m right about the necessity in the case of consciousness too.
- [SC] You’ve hinted at one of your most famous thought experiments there by saying you can imagine a system with whatever phi you want, but we wouldn’t call it conscious. You take that idea to the extreme and say there could be something that looks and acts just like a person but doesn’t have consciousness.
- [DC] Yes. This is the philosopher’s thought experiment of the zombie. ... The philosopher’s zombie is a creature that is exactly like us functionally, behaviourally, and maybe physically, but it’s not conscious. It’s very important to say that nobody, certainly not me, is arguing that such zombies actually exist. ... I’m very confident there isn’t such a case now, but the point is that it at least seems logically possible. There’s no contradiction in the idea of there being an entity just like you without consciousness. That’s just one way of getting at the idea that somehow consciousness is something extra and special that is going on. You could put the hard problem of consciousness as, why aren’t we zombies?
- [SC] How can I be sure that I’m not a zombie?
- [DC] There’s a very good argument that I can’t be sure you’re not a zombie. All I have is access to your behaviour. But the first-person case is different. In the first-person case, I’m conscious, I know that more directly than I know anything else. Descartes said in the 1640s this is the one thing I can be certain of. I can doubt everything about the external world, but I can’t doubt that I’m thinking. I think therefore I am. I think it’s natural to take consciousness as our primary epistemic datum. Whatever you say about zombies, I know that I’m not one of them because I know I’m conscious.
- [SC] What makes me worried is that the zombie would give itself all those same reasons. So, how can I be sure I’m not that zombie?
- [DC] To be fair, you’ve put your finger on the weakest spot of the zombie hypothesis and the ideas that come from it. In my first book, The Conscious Mind, I had a whole chapter about this called “The Paradox of Phenomenal Judgment”, which stems from the fact that my zombie twin would say, and do, and write all of the things I do. We shouldn’t take possible worlds too seriously, but what is going on in the zombie world is what philosophers call eliminativism, where there is no such thing as consciousness and the zombie is making a mistake. There is a respectable program in philosophy that says we’re basically in that situation in our world, and lately there has been an upsurge in people taking this seriously. It’s called illusionism.
- [DC] Illusionism is the idea that consciousness is some kind of internal introspective illusion. Think about what’s going on with the zombie. The zombie thinks it has special properties of consciousness, but it doesn’t. All is dark inside. Illusionists say, actually, that’s our situation. It seems to us we have all these special properties—those qualia, those sensory experiences—but in a way, all is dark inside for us as well. There is just a very strong introspective mechanism that makes us think we have those special properties. That’s illusionism.
- [DC] I’ve been thinking about this a lot and wrote an article called “The Meta-Problem of Consciousness” that just came out. The hard problem of consciousness is why are we conscious, why do these physical processes give rise to consciousness. The meta-problem of consciousness is: why do we think we’re conscious? Why do we think there’s a problem of consciousness? Remember, the hard problem says the easy problems are about behaviour, and the hard problem is about experience. Well, the meta-problem is ultimately about behaviour. It’s about the things we do and the things we say. Why do people go around writing books about this? Why do they say, “I’m conscious”, “I’m feeling pain”? Why do they say, “I have these properties that are hard to explain in functional terms”? That’s a behavioural problem. That’s an easy problem.
- [SC] Aside from eliminativism and illusionism, which are fairly hard core on one side, or forms of dualism on the other side, there is this kind of “emergent” position one can take that is physicalist and materialist at the bottom, but doesn’t say that therefore things like consciousness and subjective experiences don’t exist or are illusions. They are higher order phenomena like tables or chairs. They are categories that we invent to help us organise our experience of the world.
- [DC] My view is that emergence is sometimes used as a magic word to make us feel good about things we don’t understand. How do you get from this to this? It’s emergent! But what do you really mean by emergent? I wrote an article about this once where I distinguished weak emergence from strong emergence. Weak emergence is basically the kind you get from lower-level structural dynamics explaining higher-level structural dynamics: the behaviour of a complex system, the way traffic flows in a city, the dynamics of a hurricane, etc. You get all sorts of strange and surprising and cool phenomena emerging at the higher level. But still, once you understand the lower-level mechanisms well enough, the higher-level ones just follow transparently. It’s just lower-level structure giving you higher-level structure according to simple rules. When it comes to consciousness, it looks like the easy problems may be emergent in this way. Those may turn out to be low-level structural and functional mechanisms that produce these reports and these behaviours that lead to us being awake, and no one would be surprised if these were weakly emergent in that way. But none of that seems to add up to an explanation of subjective experience, which just looks like something new. Philosophers sometimes talk about emergence in a different way. Strong emergence involves something fundamentally new emerging via new fundamental laws. Maybe there’s a fundamental law that says when you get this information being integrated then you get consciousness. I think consciousness may be emergent in that sense, but that’s not a sense that helps the materialist. If you want consciousness to be emergent in a sense that helps the materialist, you have to go for weak emergence, and that is ultimately going to require reducing the hard problem to an easy problem.
- [DC] Everyone has to make hard choices here and I don’t want to let you off the hook by just saying, “Ah, it’s all ultimately going to be the brain and a bunch of emergence.” There’s a respectable materialist research program here, but that involves ultimately turning the hard problem into an easy one. All you are going to get from physics is more and more structure and dynamics and functioning and so on. For that to turn into an explanation of consciousness, you need to find some way to deflate what needs explaining in the case of consciousness to a matter of behaviour and functioning. And maybe say the extra thing that needs explaining, that’s an illusion. People like Dan Dennett, who I respect greatly, have tried to do this for years, for decades. At the end of the day, most people look at what Dennett’s come up with and they say, “Nope, not good enough. You haven’t explained consciousness.” If you can do better, then great.
- [DC] I’ve explored a number of different positive views on consciousness. What I haven’t done is commit to any of them. I see various different interesting possibilities, each of which has big problems. Big attractions, but also big problems to overcome.
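An aside before my comments, on that phi discussion above. Tononi's actual phi is mathematically involved, but the basic intuition of "integrated information" is that a whole can carry information its parts do not carry separately. Below is a minimal sketch of that intuition. To be clear, this is emphatically not IIT's phi; it just computes the mutual information between the two halves of a two-bit system, which is zero when the halves are independent and positive when they are coupled.

```python
# Toy illustration of "integration of information" -- NOT Tononi's phi.
# We measure the mutual information (in bits) between two halves of a
# two-bit system: zero when the halves are independent, positive when
# the whole carries structure the parts alone do not.
import numpy as np

def mutual_information(joint):
    """Mutual information of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)  # marginal of part A (column)
    py = joint.sum(axis=0, keepdims=True)  # marginal of part B (row)
    indep = px @ py                        # the joint if A and B were independent
    nz = joint > 0                         # skip zero cells to avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / indep[nz])))

# Two fair, independent bits: the whole is exactly the sum of its parts.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

# Two perfectly coupled bits: knowing one half tells you the other.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

print(mutual_information(independent))  # 0.0 bits: no "integration"
print(mutual_information(coupled))      # 1.0 bit: fully "integrated"
```

Real phi goes much further than this: among other things, it minimises over all possible ways of partitioning the system rather than fixing one cut as I did here, and it works with cause-effect structure rather than a static joint distribution. That is part of why it is so hard to compute in practice.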
Brief Comments
I've never given much weight to Chalmers' zombie problem. Relying on "conceivable worlds" strikes me as a reformulated ontological argument for the existence of God—i.e. if you can imagine it, it must be so. But our imaginations can be wrong in all sorts of ways; possibly even in ways we can't imagine. That's why Descartes was wrong too. Cogito ergo sum should have been "I think, therefore I think I think."
In this interview, however, Chalmers has convinced me there is a "hard" problem, but I think it is misnamed. Hard implies that it could be cracked. But what Chalmers keeps retreating to is ultimately an unanswerable question. After every new explanation of consciousness that could ever come along—from believing that consciousness is in our bodies, all the way to defining theoretically perfect neural correlates of consciousness—Chalmers continually just asks, "Why?" Why is there consciousness rather than none? I think this is perfectly analogous to asking "why is there something rather than nothing?" But as Arne Naess pointed out, all worldviews have to start with some hypotheses. You can never get outside of everything in order to see everything. To claim that you can is like trying to blow a balloon up from the inside. And Chalmers' infinite regress of "why" sure seems like a balloon we can never get outside of.
So, I'd like to make a distinction for Chalmers' hard problem between the how and the why. How do physical processes lead to subjective experience? Why do physical processes lead to subjective experience? The ultimate why is an impossible problem. The hows along the way to it may be difficult, but we can make progress with them. And they can tell us important things about life. Maybe it will turn out that consciousness—whatever we mean by that—will be fundamental to the universe in the way that electromagnetism is right now. Or maybe we'll find something else. But let's spend our time studying those hows, rather than getting caught up debating impossible whys.
Of course, there are other problems with objectively studying these "easy" problems of subjective consciousness. And that's what we'll look at next time.
What do you think? Is the hard problem of consciousness hard? Impossible? Easy? Or something else?
--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery