
Consciousness 3 — The Hard Problem

3/19/2020

In the last post in this series, I shared a couple of podcasts that knocked down the common / religious / folk views of consciousness, which see it as something separate from our bodies, unchanging, or immortal. Close observations of the world—whether scientific or meditative—just don't find any evidence for that kind of consciousness. And yet, we seem confident that we, ourselves, have it. So where does consciousness come from? That has been the subject of the mind-body problem in philosophy for centuries. Most modern people (especially of non-religious persuasions) now see the mind as embedded in the body. But since bodies are made up of the same physical stuff as the rest of the observable universe, it's unclear how minds could ever arise from such stuff. In 1996, in his book The Conscious Mind, the philosopher David Chalmers called this "the hard problem of consciousness" and it remains a deep sticking point for philosophers and scientists today.

I've heard Chalmers talk about this with loads of different people (e.g. Tom Stoppard discussed his play about it with him), but the best conversation I've come across was with the physicist Sean Carroll on his podcast Mindscape - Episode 25. The first 50 minutes of the podcast are particularly relevant, so here are the most important lines from that:
  • [Sean Carroll] David describes himself as a naturalist, someone who believes in just the natural world, not a supernatural one. Not a dualist who thinks there’s a disembodied mind or anything like that. But he’s not a physicalist. He thinks that the natural world not only has physical properties, but mental properties as well. He’s convinced of the problem, but he’s not wedded to any solutions yet.
  • [David Chalmers] The hard problem of consciousness is the problem of explaining how physical processes in the brain somehow give rise to subjective experience. ... When it comes to explaining behaviour, we have a pretty good bead on how to explain that. In principle, you find a circuit in the brain, maybe a complex neural system, which maybe performs some computations, produces some outputs, generates the behaviour. Then, in principle, you’ve got an explanation. It may take a century or two to work out the details, but that’s roughly the standard model in cognitive science. This is what, 20-odd years ago, I called the easy problems. Nobody thinks they are easy in the ordinary sense. The sense in which they are easy is that we’ve got a paradigm for explaining them.
  • [DC] The really distinctive problem of consciousness is posed not by the behavioural parts but by the subjective experience. By how it feels from the inside to be a conscious being. I’m seeing you right now. I have a visual image of colours and shapes that are sort of present to me as an element of the inner movie of the mind. I’m hearing my voice, I’m feeling my body, I’ve got a stream of thoughts running through my head. This is what philosophers call consciousness or subjective experience. I take it to be one of the fundamental facts about ourselves, that we have this kind of subjective experience.
  • [SC] Sometimes I hear it glossed as "what it is like" to be a subjective agent.
  • [DC] That’s a good definition of consciousness actually put forward by my colleague Thomas Nagel in an article back in 1974 called “What is it like to be a bat?” His thought was that we don’t know what it is like. We don’t know what a bat’s subjective experience is like. It’s got this weird sonar capacity that doesn’t correspond directly to anything we humans have. But presumably there is something it is like to be a bat. A bat is conscious. On the other hand, people would say there is nothing it is like to be a glass of water. If that’s right, the glass of water is not conscious. So, this “what it’s like” way of speaking is a good way of serving as an initial intuition pump for the difference we’re getting at between systems that are conscious and systems which are not.
  • [SC] The other word that is sometimes invoked in this context is the “qualia” of the experiences we have. It is one thing to see the colour red, and a separate thing to have the experience of the redness of red.
  • [DC] This word qualia may have gone a little out of favour over the last 20 years, but you used to have a lot of people speaking of qualia as a word for the sensory qualities that you come across in experience. The paradigmatic one would be the experience of red vs. the experience of green. There are many familiar questions about this. How do I know that my experience of the thing we call red is the same as the experience you have? Maybe our internal experiences are swapped. That would be inverted qualia, if my red were your green. ... We know that some people are colour blind. They can’t make a distinction between red and green. ... I have friends that have this and I’m often asking them, what is it like to be you? Is it all just shades of blue and yellow? We know that what it is like to be them can’t be what it is like to be us.
  • [DC] When it comes to consciousness, we’re dealing with something subjective. I know I’m conscious not because I’ve measured my behaviour or anybody else’s behaviour, but because it’s something I’ve experienced directly from the first-person point of view. You’re probably conscious, but it’s not like I can give a straight up operational definition of it. We could come up with an AI that says it’s conscious. That would be very interesting. But would that settle the question of whether it’s having subjective experience? Probably not.
  • [SC] Alan Turing noted a “consciousness objection” [to his Turing test], but said he couldn’t possibly test for that, so it wasn’t meaningful.
  • [DC] Yes. But it turns out consciousness is one of the central things that we value. A) It’s one of the central properties of our minds. B) Many people think it’s what actually gives lives meaning and value. If we weren’t conscious, if we didn’t have subjective experience, then we’d basically just be automata for whom nothing has any meaning or value. So I think when it comes to the question of whether sophisticated AIs are conscious or not, it’s going to be absolutely central to how we treat them, to whether they have moral status, whether we should care if they continue to live or die, whether they get rights, and so on.
  • [SC] To get our cards fully on the table, neither of us are coming at this from a strictly dualist position. Neither of us are resorting to a Cartesian disembodied mind that is a separate substance. Right? As a first hypothesis, we both want to say that we are composed of atoms and obeying the laws of physics. Consciousness is somehow related to that but not an entirely separate category interacting with us. Is that fair to say?
  • [DC] Yes, although there are different kinds and degrees of dualism. My background is in mathematics, computer science, and physics, so my first instincts are materialist. To try to explain everything in terms of the processes of physics: e.g. biology in terms of chemistry and chemistry in terms of physics. This is a wonderful great chain of explanation, but when it comes to consciousness, this is the one place where that great chain of explanation seems to break down. That doesn’t mean these are the properties of a soul or some religious thing which has existed since the beginning of time and will go on after our death. People call that substance dualism. Maybe there’s a whole separate substance that’s the mental substance and somehow that interacts and connects up with our physical bodies. That view, however, is much harder to connect to a scientific view of the world.
  • [DC] The version I end up with is sometimes called property dualism. This is the idea that there are some extra properties of things in the universe. This is something we already have in physics. During Maxwell’s era, space and time and mass were seen as fundamental. Then Maxwell wanted to explain electromagnetism and there was a project that tried to explain it in terms of mass and space and time. That didn’t work. Eventually, we ended up positing charge as a fundamental property with some new laws of physics governing these electromagnetic phenomena and that became just an extra property in our scientific picture of the world. I’m inclined to think that something slightly analogous to this is what we have to do with consciousness.
  • [SC] You think that even if neuroscientists got to the point where, every time a person was doing something we would all recognise as having a conscious experience, even a silent one—for example, experiencing the redness of red—they could point to exactly the same neural activity going on in the brain, you would say this still doesn’t explain my subjective experience?
  • [DC] Yes. That’s in fact a very important research program going on right now. People call it the program of finding the neural correlates of consciousness (the NCC). We’re trying to find the NCC or neural systems that act precisely when you are conscious. This is a very important research program, but it’s one for correlation, not explanation. We could know that when a special kind of neuron fires in a certain pattern, that always goes along with consciousness. But the next question is why. Why is that? As it stands, nothing we get out of the neural correlates of consciousness comes close to explaining that.
  • [DC] We need another fundamental principle that connects the neural correlates of consciousness with consciousness itself. Giulio Tononi, for example, has developed his Integrated Information Theory where he says consciousness goes along with a mathematical measure of the integration of information, which he calls phi. The more phi you have, the more consciousness you have. Phi is a mathematically and physically respectable quantity that is very hard to measure, but in principle you could find it and measure it. There are questions of whether this is actually well defined in terms of the details of physics and physical systems, but it’s at least halfway to something definable. But even if he’s right that phi—this informational property—correlates perfectly with consciousness, there’s still this question of why. [A toy numerical sketch of one such integration measure follows this transcript.]
  • [DC] Prima facie, it looks like you could have had a universe where the integration of information is going on, but no consciousness at all. And yet, in our universe, there’s consciousness. How do we explain that fact? What I regard as the scientific thing to do at this point is to say that in science, we boil everything down into fundamental principles and laws, and we need to postulate a fundamental law that connects, say, phi with consciousness. Then that would be great, maybe that’s going to be the best we can do. In physics, there’s a fundamental law of gravitation, or a grand unified theory that unifies all these different forces. You end up with some fundamental principles and you don’t take them further. Something has to be taken as basic. Of course, we want to minimise our fundamental principles and properties as far as we can. Occam’s razor says don’t multiply entities without necessity. Every now and then, however, we have necessity. Maxwell was right about this with electromagnetism. Maybe I’m right about the necessity in the case of consciousness too.
  • [SC] You’ve hinted at one of your most famous thought experiments there by saying you can imagine a system with whatever phi you want, but we wouldn’t call it conscious. You take that idea to the extreme and say there could be something that looks and acts just like a person but doesn’t have consciousness.
  • [DC] Yes. This is the philosopher’s thought experiment of the zombie. ... The philosopher’s zombie is a creature that is exactly like us functionally, behaviourally, and maybe physically, but it’s not conscious. It’s very important to say that nobody, certainly not me, is arguing that such zombies actually exist. ... I’m very confident there isn’t such a case now, but the point is that it at least seems logically possible. There’s no contradiction in the idea of there being an entity just like you without consciousness. That’s just one way of getting at the idea that somehow consciousness is something extra and special that is going on. You could put the hard problem of consciousness as, why aren’t we zombies?
  • [SC] How can I be sure that I’m not a zombie?
  • [DC] There’s a very good argument that I can’t be sure you’re not a zombie. All I have is access to your behaviour. But the first-person case is different. In the first-person case, I’m conscious, I know that more directly than I know anything else. Descartes said in the 1640s that this is the one thing I can be certain of. I can doubt everything about the external world, but I can’t doubt that I’m thinking. I think therefore I am. I think it’s natural to take consciousness as our primary epistemic datum. Whatever you say about zombies, I know that I’m not one of them because I know I’m conscious.
  • [SC] What makes me worried is that the zombie would give itself all those same reasons. So, how can I be sure I’m not that zombie?
  • [DC] To be fair, you’ve put your finger on the weakest spot of the zombie hypothesis and of the ideas that come from it. In my first book, The Conscious Mind, I had a whole chapter about this called "The Paradox of Phenomenal Judgment" that stems from the fact that my zombie twin would say and do and write all of the same things I do. We shouldn’t take possible worlds too seriously, but what is going on in the zombie world is what philosophers call eliminativism, where there is no such thing as consciousness and the zombie is making a mistake. There is a respectable program in philosophy that says we’re basically in that situation in our world, and lately there has been an upsurge in people taking this seriously. It’s called illusionism.
  • [DC] Illusionism is the idea that consciousness is some kind of internal introspective illusion. Think about what’s going on with the zombie. The zombie thinks it has special properties of consciousness, but it doesn’t. All is dark inside. Illusionists say, actually, that’s our situation. It seems to us we have all these special properties—those qualia, those sensory experiences—but in a way, all is dark inside for us as well. There is just a very strong introspective mechanism that makes us think we have those special properties. That’s illusionism.
  • [DC] I’ve been thinking about this a lot and wrote an article called “The Meta Problem of Consciousness” that just came out. The hard problem of consciousness is why we are conscious, why these physical processes give rise to consciousness. The meta problem of consciousness is: why do we think we’re conscious? Why do we think there’s a problem of consciousness? Remember, the hard problem says the easy problems are about behaviour, and the hard problem is about experience. Well, the meta problem is ultimately about behaviour. It’s about the things we do and the things we say. Why do people go around writing books about this? Why do they say, "I’m conscious", "I’m feeling pain"? Why do they say, "I have these properties that are hard to explain in functional terms"? That’s a behavioural problem. That’s an easy problem.
  • [SC] Aside from eliminativism and illusionism, which are fairly hard core on one side, or forms of dualism on the other side, there is this kind of “emergent” position one can take that is physicalist and materialist at the bottom, but doesn’t say that therefore things like consciousness and subjective experiences don’t exist or are illusions. They are higher order phenomena like tables or chairs. They are categories that we invent to help us organise our experience of the world.
  • [DC] My view is that emergence is sometimes used as a magic word to make us feel good about things we don’t understand. How do you get from this to this? It’s emergent! But what do you really mean by emergent? I wrote an article about this once where I distinguished weak emergence from strong emergence. Weak emergence is basically the kind you get from lower level structural dynamics explaining higher level structural dynamics: the behaviour of a complex system, the way traffic flows in a city, the dynamics of a hurricane, etc. You get all sorts of strange and surprising and cool phenomena emerging at the higher level. But still, once you understand the lower level mechanisms well enough, the higher-level ones just follow transparently. It’s just lower level structure giving you higher level structure according to simple rules. When it comes to consciousness, it looks like the easy problems may be emergent in this way. Those may turn out to be low-level structural and functional mechanisms that produce these reports and these behaviours that go along with being awake, and no one would be surprised if these were weakly emergent in that way. But none of that seems to add up to an explanation of subjective experience, which just looks like something new. Philosophers sometimes talk about emergence in a different way. Strong emergence involves something fundamentally new emerging via new fundamental laws. Maybe there’s a fundamental law that says when you get this information being integrated then you get consciousness. I think consciousness may be emergent in that sense, but that’s not a sense that helps the materialist. If you want consciousness to be emergent in a sense that helps the materialist, you have to go for weak emergence and that is ultimately going to require reducing the hard problem to an easy problem.
  • [DC] Everyone has to make hard choices here and I don’t want to let you off the hook by just saying, “Ah it’s all ultimately going to be the brain and a bunch of emergence.” There’s a respectable materialist research program here, but that involves ultimately turning the hard problem into an easy one. All you are going to get from physics is more and more structure and dynamics and functioning and so on. For that to turn into an explanation of consciousness, you need to find some way to deflate what needs explaining in the case of consciousness to a matter of behaviour and functioning. And maybe say the extra thing that needs explaining, that’s an illusion. People like Dan Dennett, whom I respect greatly, have tried to do this for years, for decades. At the end of the day, most people look at what Dennett’s come up with and they say “Nope, not good enough. You haven’t explained consciousness.” If you can do better, then great.
  • [DC] I’ve explored a number of different positive views on consciousness. What I haven’t done is commit to any of them. I see various different interesting possibilities, each of which has big problems. Big attractions, but also big problems to overcome.
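Since Tononi's phi comes up in that exchange, here is a toy numerical sketch of what an "integration of information" measure can look like. To be clear, this is my own illustrative stand-in, not Tononi's actual phi (his measure involves partitioning a system and examining its cause-effect structure, and is far harder to compute). The sketch simply scores a two-part binary system by the mutual information between its halves: zero bits when the parts are independent, more as they constrain each other.

```python
# A toy "integration" score: the mutual information (in bits) between
# the two halves of a system's observed states. This is NOT Tononi's
# phi -- his measure involves partitions and causal structure -- but it
# shows the flavour of an information-theoretic quantity that is zero
# for independent parts and grows as the parts constrain one another.
from collections import Counter
from math import log2

def mutual_information(states, split):
    """states: list of 0/1 tuples; split: index dividing the two halves."""
    n = len(states)
    joint = Counter(states)
    left = Counter(s[:split] for s in states)
    right = Counter(s[split:] for s in states)
    mi = 0.0
    for s, c in joint.items():
        p_joint = c / n
        p_left = left[s[:split]] / n
        p_right = right[s[split:]] / n
        mi += p_joint * log2(p_joint / (p_left * p_right))
    return mi

# Two independent coin-flip units: no integration between the halves.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
# Two perfectly coupled units: fully integrated halves.
coupled = [(0, 0), (1, 1)] * 50

print(mutual_information(independent, 1))  # ~0.0 bits
print(mutual_information(coupled, 1))      # 1.0 bit
```

Even a perfect correlate like this, of course, would still leave Chalmers asking his "why."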

Brief Comments
I've never given much weight to Chalmers' zombie problem. Relying on "conceivable worlds" strikes me as a reformulated ontological argument for the existence of God—i.e. if you can imagine it, it must be so. But our imaginations can be wrong in all sorts of ways; possibly even in ways we can't imagine. That's why Descartes was wrong too. Cogito ergo sum should have been "I think, therefore I think I think."

In this interview, however, Chalmers has convinced me there is a "hard" problem, but I think it is misnamed. Hard implies that it could be cracked. But what Chalmers keeps retreating to is ultimately an unanswerable question. After every new explanation of consciousness that could ever come along—from believing that consciousness is in our bodies, all the way to defining a theoretically perfect set of neural correlates of consciousness—Chalmers continually just asks, "Why?" Why is there consciousness rather than none? I think this is perfectly analogous to asking "why is there something rather than nothing?" But as Arne Naess pointed out, all worldviews have to start with some hypotheses. You can never get outside of everything in order to see everything. To claim that you can is like trying to blow up a balloon from the inside. And Chalmers' infinite regress of "why" sure seems like a balloon we can never get outside of.

So, I'd like to make a distinction for Chalmers' hard problem between the how and the why. How do physical processes lead to subjective experience? Why do physical processes lead to subjective experience? The ultimate why is an impossible problem. The hows along the way to that ultimate why may be difficult, but we can make progress with them. And they can tell us important things about life. Maybe it will turn out that consciousness—whatever we mean by that—will be fundamental to the universe in the way that electromagnetism is right now. Or maybe we'll find something else. But let's spend our time studying those hows, rather than getting caught up debating impossible whys.

Of course, there are other problems with objectively studying these "easy" problems of subjective consciousness. And that's what we'll look at next time.

What do you think? Is the hard problem of consciousness hard? Impossible? Easy? Or something else?


--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
44 Comments
SelfAwarePatterns
3/19/2020 10:29:51 pm

Similar to the recursive "why" you discuss, I think the hard problem is a category mistake. Chalmers makes the mistake just about every time he introduces the concept, by discussing the "easy" problems, that is, the scientifically tractable ones, and then dismissing them as not what he's talking about.

Gilbert Ryle actually addressed this several decades before Chalmers. His answer could be called "the hard problem of Oxford." A visitor, being given a tour of the various lecture halls, dormitories, and offices, maybe interviewing some of the faculty and students, then asks, "But where is the university?" (Oxford is a particularly striking example here since its "campus" is dispersed throughout the city of Oxford.)

No matter what account the visitor is given for how "the university" emerges from everything they've been shown, they can always say, "Yes, but what about universityness?"

If we're not permitted to break a problem down into its constituent parts, then it does indeed become hard, perhaps impossible, but only because of the artificial constraints that have been imposed.

Put another way, there was once the hard problem of life, the mystery of the elan vital that somehow separated living things from non-living things. No one talks about that anymore, because microbiology has progressed to the point that we can see how life emerges from chemistry. I suspect consciousness, at least in the way many people discuss it today, has a similar future.

Philosopher Eric
3/20/2020 06:16:33 am

Looking a bit harder and more charitably through this, even if we someday get very specific about which causal processes create phenomenal existence (and are perhaps able to mass-produce it just as nature does), I guess technically Chalmers is merely saying that we humans probably won’t ever grasp why these processes function as they do. For example, we’re quite aware of the circumstances which create gravity (unlike phenomenal consciousness), but will probably never grasp why such circumstances do so. From this perspective gravity might be considered a “hard problem”. If that’s his point then I will admit that phenomenal consciousness may effectively be considered such a problem as well. (And let me be clear that I’m merely speaking epistemically rather than ontologically here. Going ontological should put a person into the “substance dualist” club. His “property dualism” term does seem suspicious however!)

Chalmers does disappoint me when he says that the philosopher’s zombie “seems logically possible”. Not to me it doesn’t! Clearly consciousness evolved as a trait of the human for functional purposes. So something without it should lack such virtues. (I theorize that the associated teleology permits an organism to better succeed under more “open” environments, which thus tend to be more difficult to program for.)

Mike,
I believe that this “hard problem of Oxford” can be solved through my first principle of epistemology. This is to say that there are no true or false definitions for any term, such as “Oxford University”. Observe that it was practically defined for the visitor, and yet apparently he was instead looking for a “true” definition. According to my EP1, no such definition exists for any of our terms. I believe that academia suffers tremendously today without such a formal understanding. Apparently we’ll need a community of respected professionals that’s able to lay down some sensible principles of metaphysics, epistemology, and axiology to straighten this stuff out.

Ed Gibney
3/20/2020 09:56:47 am

Thanks guys! I do think there's a difference here between the category mistake (which I love) and the hard / impossible question. The category mistake is an ontological one. There literally is no "university", whereas there may be an underlying "why" behind consciousness (or everything for that matter), but there's an epistemological barrier that stops us from ever being sure we know it.

I wrote about the category error in this blog post:

http://www.evphil.com/blog/response-to-experiment-49-the-hole-in-the-sum-of-the-parts

And I often quote this article about how "conscious-ness" would be "conscious-ing" in Chinese where verbs are more prevalent than nouns. Thus, we may be making the category mistake of reifying consciousness partly because of our language.

https://www.nybooks.com/daily/2016/06/30/the-mind-less-puzzling-in-chinese/

The discussions with neuroscientists later in this series who have been exploring what we do consciously vs. non-consciously definitely shed some light on the evolved purpose of higher levels of consciousness. I'm not happy with the words being used in that last sentence, but I hope to define them better later.

Philosopher Eric
3/20/2020 02:56:44 pm

On category errors, I like to keep “What color are triangles” in my back pocket. It doesn’t surprise me that an “ordinary language” philosopher like Gilbert Ryle made up that “Oxford University” scenario. They think of terms as having “true” definitions. I think of them as having only more and less useful definitions, or my EP1. So I don’t consider the cab passengers to have made a category error, but rather that they didn’t grasp, or perhaps even try to grasp, the cabbie’s definition. It’s not like, “Show us what color triangles are”. There’s nothing to show there because it’s inconsistent with how we all perceive triangles. But if one wanted to define the “triangle” term such that they’re blue, well fine. It just wouldn’t generally be considered very useful.

I’m quite agreed that “nouns” give English speakers problems. And as an English speaker, there I’ve just displayed it. I should actually say, “I’m quite agreed that ‘nouning’ gives English speakers problems.” My EP1 could help fix this I think.

SelfAwarePatterns
3/20/2020 03:16:51 pm

Eric,
"I believe that this “hard problem of Oxford” can be solved through my first principle of epistemology."

Why doesn't the same analysis dissolve the other hard problem? Might it be because we have no single agreed upon definition of consciousness? Because we're dealing with an inconsistent and hazy collection of concepts?


Ed,
"Thus, we may be making the category mistake of reifying consciousness partly because of our language."

I think there's something to be said for this. We often affix "ness" to verbs or adjectives and turn them into nouns, but that doesn't mean in reality that there's anything more than the action or property.

Many writers make the point that we, often unwittingly, use "consciousness" as a code word for "soul". It's interesting that the Greek word for "soul" is "psyche", which means "to breathe", the action that appears to separate animals from other things.

Ed Gibney
3/20/2020 06:44:54 pm

Eric,

We're getting a little sidetracked with epistemology, but I'm with you on your EP1 that no truth can be known. I recently published an article / book review on this in "The Philosopher" (a peer-reviewed magazine of public philosophy that I think is worth the subscription). I was asked by the editor there to review Naomi Oreskes' book "Why Trust Science?" which basically said there is no truth. I agreed with this and offered up some positions from evolutionary epistemology (some from Donald Campbell and some from me) which help make that position workable. In other words, when you say definitions can only be more or less useful, Naomi Oreskes' position plus some evolutionary epistemology shows this is correct and then offers some criteria on how to judge things as more or less useful. Reading it now will scoop me a little bit on some things I intend to write later in this series, but that article is here:

https://www.thephilosopher1923.org/oreskes-review

I still think there's somewhat of a difference between this and the category error though. It doesn't have to be the trueness of the definition of university that trips people up. One could (theoretically) still hold a fuzzy definition that changes with the construction and demolition of specific campus buildings, but still look too literally for something representing that fuzzy definition. That's how I think the category error could be separate from your (and my) epistemology position(s) about truth. It's probably related though. It's hard for me to imagine someone taking "university" that literally without also thinking definitions must have literal truth value.

And excellent comment about all the nouning! Ha ha ha.

Philosopher Eric
3/21/2020 03:53:26 pm

Mike,
Though ordinary language philosophers in general may think they can classify a cab ride displaying “Oxford University” as a category error, and then use this same reasoning in order to assert that David Chalmers’ hard problem of consciousness displays a category error as well, I dispute the merits of their epistemology. If we associate Oxford University with various public places where students enrolled in the school study, then claiming that visiting some of these places provides a tour of the school is no category error. It’s a reasonable definition that’s quite possible to provide with a car ride — no “hardness” here. This can be contrasted with trying to show someone what color “triangles” are. As the term is universally used, this shape can be colored or colorless.

When we ask ourselves why mass attracts mass, this starts to give us the flavor of a hard problem. Presumably we’ll never grasp this very well, though in an ontological sense the naturalist will believe that there are causal dynamics which account for it in the end.

But when I compare the hardish problem of gravity to the problem of why existence can be horrible or wonderful to me, this one, unlike gravity, doesn’t even seem to make sense. What the hell is going on in my head to produce phenomenal experience? You propose information processing. I propose that, and also the electromagnetic radiation associated with neuron firing. Regardless, I consider this reality’s most amazing stuff.

I agree with you that the institution of science will need to develop an agreed upon definition for “consciousness”. Indeed, my single principle of metaphysics, two principles of epistemology, and single principle of axiology, should all need to come into play to help straighten science out in this regard.

Ed,
Yes I love where our ideas remain consistent, since they do differ ultimately. Hopefully the evolution of science will sort the differing bits out in favor of at least one of us in the end, whether your “life based” position, or my own “sentience based” position.

SelfAwarePatterns
3/21/2020 05:03:34 pm

Eric,
This is the perennial debate between us. I see a tour of functionality as a tour of consciousness, including the affective aspects you mention. You don't. The difference between a functionalist and a mysterian.

Ed Gibney
3/21/2020 05:31:44 pm

Either of you guys ever read any Raymond Tallis? He's the guy I'm planning to debate next February that I consider a mysterian too. (Although I've only read a review of his latest book, not the full 450-page thing yet.)

SelfAwarePatterns
3/21/2020 06:02:31 pm

The name is familiar, but skimming his Wikipedia, I can't recall ever reading any of his stuff, although if he's written articles I imagine I've read at least some of them. I see Aeon has a video interview of him.

Philosopher Eric
3/21/2020 09:12:02 pm

I doubt that I’ve read anything from Tallis, but yes the wiki on him does seem extremely Mysterian. I guess it’s standard for all who find it difficult to make associations between the human brain and the computers that we build. Massimo Pigliucci from the next post has traditionally been quite this way, though I think I read something more recently which didn’t seem quite as adamant as his old “Brains ain’t computers!” refrain.

Unfortunately my EP1 won’t help you in your February debate Ed. Ordinary language philosophy has “categorically” failed, though things seem business as usual in the field of philosophy. It’s often considered more of a gratuitous art to potentially appreciate for those who can afford such an education than something which needs to develop accepted understandings. I consider this a shame — not just for it, but also for the parts of science which fail given this void.

On mysterianism, I’d rather go that way than be associated with beliefs that could be interpreted as arrogant. Will the human figure out “everything” before it dies off? Of course not. Will it gain much of an explanation for why mass attracts mass? Maybe, though I doubt it. If it does reach the point regarding sentience that it now has with gravity, and so the human is able to build conscious entities from the materials which nature provides, will it grasp why those processes mandate the existence of sentient entities? I’d rather remain humble than predict such an understanding.

Ed Gibney
3/22/2020 02:54:11 pm

Well I don't think brains are computers either. To make this point, I like to say you can upload my consciousness when you can give a computer an orgasm. Biochemistry has a lot more to it than zeroes and ones. In fact, this might be the difference between Mary reading all about red, and her actually seeing red, i.e. having the information about red vs. having all of the chemical reactions that go along with seeing red.

I also don't think it's arrogant to not be a mysterian. Quite the opposite. Mysterians are too confidently predicting that knowledge will never be achieved. That's another kind of arrogance. Positive truth claims are arrogant in the other direction, but evolutionary epistemologists accept the provisional nature of all knowledge and humbly wait for searches to turn up those changes.

Philosopher Eric
3/22/2020 05:36:00 pm

Ed,
The point of my EP1 is that there are no true or false definitions for any term, including “computer”. So if mental and behavioral professionals find the brain-to-computer analogy useful, who has the authority to tell them that they’re mistaken? Don’t brains accept information through nerves, process it by means of the AND, OR, and NOT function of neurons, and so produce output? That’s certainly in line with what I consider the computers that we build to do. Without formal acceptance of my EP1, however, inconsistent terms (which unfortunately tend to be considered “true” by each side) stop people from effectively grasping the ideas of others.

On the “uploading consciousness” thing, John Searle and I are with you on that. There should be more than just information here, but also mechanisms which need to be preserved as well. So if all of the information from your nervous system were uploaded to a computer, as such it should be functionless. But if that information were downloaded to some machine across space that was similar enough to your body to effectively use it, then fine, there would be both you, as well as that other thing which thinks it’s you. Perhaps he’d celebrate with a good orgasm. :-)

I agree that all positions can potentially be arrogant, mysterian or not. But after it makes sense to the human why it is that each of the four forces do what they do given even more basic physics, and after we’re building conscious machines, only then will I entertain the notion that we might also begin to grasp why phenomenal existence emerges from the stuff we do to create these entities. So I’m certainly skeptical.

James of Seattle
3/24/2020 03:04:04 am

To answer Ed’s question, I think the hard problem is in fact an easy problem. I’m still working up the details for an explanation because it involves some things I only fairly recently learned about (representation and unitrackers and mutual information, Oh my!), but I’m willing to field questions as to what I have so far. If I can get my ...stuff... together, I’ll summarize it in a blog post, and the longer version would be a book.

That said,

Mike, you said we should be allowed to break a problem into its constituent parts, but what are the constituent parts of “the qualia of seeing red”?

Eric, you mention the problem of “mind uploading” as uploading information, but apparently without uploading the mechanisms. At least some of us who see Consciousness as information processing would say that you have to upload both the information and the AND’s, OR’s, and NOT’s, but that is still doable. A computer simulating a neuronal AND is still doing an AND. There is no simulated water/wetness issue. Simulated thumb pain, being just processing of AND’s, OR’s, and NOT’s, is still thumb pain, as long as there is a thumb attached appropriately causing the inputs. If there is no thumb, then it’s just phantom limb pain.
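To make that gate talk concrete, here is a minimal sketch of a toy threshold "neuron" (the weights and thresholds are hand-picked assumptions, nothing measured from real neurons) showing that a simulated gate computes exactly the same truth table as any other implementation:

```python
# A crude threshold "neuron": fire (1) iff the weighted sum of the
# inputs reaches the threshold. With hand-picked weights it realizes
# the standard logic gates -- same inputs, same outputs as any other
# AND/OR/NOT implementation, which is the multiple-realizability point.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b): return neuron((a, b), (1, 1), 2)
def OR(a, b):  return neuron((a, b), (1, 1), 1)
def NOT(a):    return neuron((a,), (-1,), 0)

# Print the truth tables; they match the standard gates exactly.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```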

*

Ed Gibney
3/25/2020 09:38:01 am

James — I'd like to press you on this uploading of mechanisms. I see the mechanisms (in the only example we have) as biochemical reactions. You can tell a computer to feel pleasure or pain about something. All the logical circuits can be there. But where is the endorphin rush or the secretions of glutamate and substance P? As I said in an earlier comment, uploading these things sounds akin to Mary reading about red rather than having the actual experience of seeing it. Information processing alone sounds entirely 2D in a 3D world. My uploaded consciousness would mimic my function and behaviour, but without any of the actual feelings behind it.

SelfAwarePatterns
3/25/2020 12:04:45 pm

Ed,
Just jumping in here on this upload question. The question is whether those biochemical reactions are functionally more than information processing. If you think they are information processing, then the idea that that processing could happen in another substrate seems to follow. If you think they are more than information processing, my question is then, what beyond information processing are they doing?

It is true that currently our only examples of conscious systems are electrochemical ones. But not that long ago, our only examples of computers were the women who calculated ballistic, actuarial, and similar types of tables, and the only examples of chess players, face recognizers, or navigators were human.

Of course, it's always possible that consciousness is different. But after reading dozens of books on consciousness and neuroscience, I see no real evidence for it, except for our strong inclination to see it that way.

Not that I'm one of those who think uploading is right around the corner. I personally think it's likely centuries away.

SelfAwarePatterns
3/24/2020 03:50:36 pm

James,
The qualia of red is subjectively irreducible. Many philosophers stop there and declare it a mystery. But subjective irreducibility is not objective irreducibility.

The experience of red involves the sensory discrimination that happens to identify an object reflecting light that triggers the retinal L receptor cones. But it also involves everything that information means to us, all the associations, the things that make red vivid. (Probably related to the importance of ripe fruit for primates.)

These associations happen below the level of consciousness. What makes it into consciousness is the overall irreducible content, that is, content whose constituents we have no introspective access to. But that lack of access doesn't mean it's not there.

Does that help?

Ed Gibney
3/24/2020 09:07:12 pm

Mike — I guess "the qualia of red" or "the redness of red" have a lot of different meanings to a lot of different philosophers, but in what sense do you think they are "subjectively irreducible"? I agree with your breakdown of the objective components. I'd say some of them are 1) visually seeing red; 2) chemically responding innately to red; and 3) chemically responding to personal lifetime experiences with red. But aren't some of those associations open to subjective interrogation too? Could I say, in a poetic mood, that the redness of this particular apple (for example) is redolent of ... my favourite toy fire engine truck as a child? Is that a subjective "piece" of the redness of that red? Doesn't "redness of red" imply a comparison to other reds that I can subjectively be aware of? Or are you maybe saying that that precise shade, luminosity, etc. of red is always a singular experience at that moment? Or something else altogether? I've not read a good description of what a philosopher really means by that redness of red phrase. It strikes me as a handwaving exercise whenever I hear it.

SelfAwarePatterns
3/24/2020 10:06:19 pm

Ed,
The general idea is that our access to the components of conscious content has limits. No matter how hard we try to introspect, we can't get past those limits.

That's not to say that, in the case of seeing red, we don't have access to some explicit learned associations. But many (most?) of the associations are below the level of consciousness. What makes it into consciousness is a combination of the sensory information and innate and conditioned affective reactions.

It's also not to say that philosophers don't do a lot of hand waving about this. Much is made of the fact that we can't connect our experiential content to brain mechanisms, the infamous explanatory gap. But given that we don't have access to the lower level processing that produces it, it shouldn't surprise us.

For me, it's roughly equivalent to the fact that the browser software you're using to read this probably doesn't have access to the transport layer details of your device's network access. It only gets pre-processed data from that network connection, and then does its own processing.

Ed Gibney
3/24/2020 10:24:26 pm

Yeah, I’m with you on all that about the explanatory gap. Even if the “redness of red” isn’t subjectively irreducible, I suppose some other element of qualia could be defined that we can’t see below.

Ed Gibney
3/25/2020 12:28:24 pm

Mike,

I'll need more on "information processing" to see it that way. To me, a blueprint has all the information of a house. But it's not a house. You can't live in a blueprint. Your question of "what beyond information processing are they doing?" is answered by "existing". The information is abstract. The biochemical reactions are concrete. What do you think I'm missing? Do you have a different understanding of what information is?

SelfAwarePatterns
3/25/2020 01:24:44 pm

Ed,
To make the blueprint comparison valid, you have to compare it to an equivalent entity. Comparing it to a stored computer program that is not currently running would be valid. Both are, by themselves, causally inert patterns.

But if you compare it to a program running on a hardware engine, then the comparison is no longer valid. One is the inert pattern, the other is a dynamic physical system.

Put another way, the blueprint is not information *processing*, it's just information. Information processing is a physical process. So now the comparison is between one type of physical system and another.

It is possible that neural processing is so close to the thermodynamic and information theoretic limits that no alternate substrate can ever implement what it does. But it doesn't seem to have been true for the tasks computers have been able to take over so far.

Ed Gibney
3/25/2020 02:52:20 pm

I guess, to use your words, I was comparing a static blueprint to a static house. Or I could "walk" through my 3D CAD model of the house to get some processing going. I really don't see how information processing could be the same as a physical process though. Can you build a world out of information alone? I don't see how that makes sense. I don't see information as existing in this way and that seems to be a fundamental difference.

To put it another way, you say "information processing is a physical process" and "the comparison is between one type of physical system and another." But one of those systems (the information-only realm) is a physical system with most of the physics and chemistry stripped away from it. I would think that would make a big difference.

SelfAwarePatterns
3/25/2020 03:35:12 pm

The comparison of the walkthrough CAD model (virtual reality) with the physical version of the house is a little better. However, we can identify what functionality is missing from the VR version: it can't be physically entered, offers no protection from the elements, etc. Can you identify what would be missing from a computational model of a mind (assuming a virtual or replacement body is provided)?

All information processing is physical, 100% physical. I think the relevant question is, is there something in the physics of a central nervous system whose functionality can't be replicated in a computer system? (Aside from support mechanisms tied to that particular substrate.) We have a lot of evidence for information processing in nervous systems. What evidence do we have for something else?

Ed Gibney
3/25/2020 04:13:50 pm

--> "Can you identify what would be missing from a computational model of a mind (assuming a virtual or replacement body is provided)?"

Consciousness! : ))))

--> "What evidence do we have for something else?"

The entire science of biochemistry.

I was just kidding about begging the question with consciousness, but I do see a big gap here between computing (as I understand it) and living. I just don't see how "a computational model of a mind" can provide the pleasure, pain, and other chemical emotions that drive our behaviour. My case is Hume's — i.e. reason (computation) is the slave of the passions (chemicals). If you want to say that a virtual or replacement body can actually replace all that extra physics and chemistry, then maybe our minds can live in other substrates, but mere informational representations of that physics and chemistry doesn't seem like it will get us there. Take away our current senses one by one until you get to the consciousness of Helen Keller. What is it like to be her? It sure seems like a very different thing.

Did you see my short story about uploading a consciousness to a thermostat? This isn't a definitive argument, but it's a fun way to think about it.

http://www.evphil.com/blog/thermostat-2ba

SelfAwarePatterns
3/25/2020 06:19:26 pm

"Consciousnss" is similar to a lot of the typical answers I get. Others are "feeling", "pain", or specific examples like the feeling of viewing a sunset. Thanks for acknowledging that they beg the question.

Unfortunately, I think saying "biochemistry" or "passions" is also question begging. It assumes that the electrical and chemical processing in the nervous system, or the high level descriptions of them such as "feeling", aren't information processing. But that's what needs to be established.

I totally get that the idea of feelings being information processing is not intuitive. But many things in science, such as heliocentrism, natural selection, relativity, or quantum mechanics, don't accord with our initial intuitions. What I'm waiting for is an identification of some non-information processing element.

Cool story! It seems like it would have been kinder for the scientist to keep uploading the same person until he got it right. Granted, it wouldn't have had the same emotional punch as what he did. (Although, see Westworld season 2.) Also, that must have been a very sophisticated thermostat!

Ed Gibney
3/26/2020 12:28:50 pm

Kinder Russian scientists. As if! Sure, *we* would do the kind thing to these uploaded consciousnesses. But I was inspired by Russian space race efforts that were willing to waste animals and people if necessary to win. Thanks for reading, though! Really nice to hear your feedback on it.

When you say you are waiting for "some non-information processing element", I think that's an impossible ask. Everything that exists can be described, and so therefore has information. But I'll say again that the description, aka the information, is not the thing itself. It strikes me as a category error to think these can be completely separable and still able to do the same sorts of things. It seems to me you are trying to separate the information being processed from any and every information processor without any consequences to these systems. I don't see how that could be possible. Isn't at least some of the information being processed embodied in the physical makeup of the information processor itself?

You say that what needs to be established is "that the electrical and chemical processing in the nervous system, or the high level descriptions of them such as "feeling", aren't information processing." Well, they involve information processing of course, but they aren't just information processing that can be abstracted to any information processor with different chemicals. Some things are going to change. No information about chlorophyll in a silicon computer is going to absorb energy from light. You need organic chemistry for that.

(I think that example holds. I'm not an organic chemist.)

Are we still making progress here? Or do you maybe have a good book to immerse me in this idea and help me see things differently?

SelfAwarePatterns
3/26/2020 01:41:06 pm

On progress, I think it's been productive but we're probably at the point where we'd just be reiterating.

On reading, Chalmers writes about this in some of his papers (including the one where he introduces the hard problem). http://consc.net/ai-and-computation/
Dehaene and Graziano also address it, although it's not central to either of their theses.

For a wide variety of philosophical viewpoints on uploading in particular, there's the book: 'Intelligence Unbound'.

But if you really want to get hard core, check out 'Principles of Neural Design' or 'Principles of Neural Information Theory'. (Warning: these last two are very technical.)

Philosopher Eric
3/27/2020 04:49:48 am

“At least some of us who see Consciousness as information processing would say that you have to upload both the information and the AND’s, OR’s, and NOT’s, but that is still doable.”

Right James, beyond just uploading information associated with a central nervous system, I should have mentioned that the computer receiving the upload then processes this information by means of AND, OR, and NOT gates, essentially as a human brain might. You believe that such processing itself could constitute phenomenal experience. I believe that such processing would need to animate phenomenal experience mechanisms in order to have such an effect, and that these mechanisms might exist as the electromagnetic radiation produced through neuron firing. So we disagree. You support the majority view in science today, while I consider this position to violate causal dynamics (and demonstrate this through a thought experiment that you’ve heard me repeat several times).

Philosopher Eric
3/27/2020 05:36:25 am

Ed,
It sounds to me like each of us are aligned with John Searle’s “Chinese room” thought experiment. But given 40 years of prominence, if we’re right then why does the majority view in science remain in opposition? Could it be that a more focused iteration of Searle’s logic would help right science here? Perhaps. And to me you seem like a great candidate to help popularize my own such version. Consider this:

When my thumb gets whacked, it’s presumed that information about this event is transferred to my brain through nerves, and that my brain then does various things which cause me to feel what I know of as “thumb pain”. If my thumb pain exists by means of information processing alone, however, then it stands to reason that symbols on paper which correlate with the information provided to my brain could, with a sufficient database, be converted into other symbols on paper to thus produce something which feels what I do when my thumb gets whacked! (Wow, the whole thing in two sentences!)

Is this implication of what many prominent scientists and philosophers today believe not spooky? Symbols on paper processed into other symbols on paper thus create “thumb pain”? And observe that in order for this standard position to remain consistent with a natural variety of metaphysics, they’d simply need to assert that such information processing could only produce thumb pain if it were to animate the proper sort of mechanisms. Such mechanisms surely exist in the brain, just as processed information from my computer animates its screen — without a screen there can be no screen images regardless of how much processing is done. (As I’ve said, I’m currently intrigued by the possibility that the electromagnetic radiation associated with neuron firing exists as phenomenal experience.)

Ed Gibney
3/27/2020 10:19:47 am

Mike — Thanks. I think we did make good progress. At least I know I did with thinking through some things for myself. You present a challenging position well and I don't for a minute pretend that it might not be right. I only brought up the progress thing because I felt I just wasn't getting at the much deeper consideration of information that you have, and didn't think this exchange of short comments was the best way to get that. The book recommendations are exactly what I would probably need to take this further. I'll add it to the list and get back to you...

Eric — It seems to me your thought experiment is the same as what I first said to Mike about blueprints and houses, but I think he addressed that by saying blueprints aren't processing anything. So, neither are the symbols on your paper. The real question is whether those mechanisms in the brain that do the processing are portable to other substrates. Now, I personally think computer substrates with AND's, OR's, and NOT's are only digital approximations of an analog world. And I also think the chemical makeup of the information *processor* may be an irreplaceable component of the way the information is *processed*. But I admit it's an open question as to whether these make a difference to conscious subjective experience. Another problem is that in a material universe, there's an apparent epistemic barrier to ever "knowing what it's like" in another substrate to actually answer that question. So, we may just have to err on the side of treating functionally equivalent passers of the Turing test as if they actually do have the lights on inside, interests, and therefore moral standing. That doesn't mean I'd upload myself voluntarily. But I suppose I'd consider it when the alternative is imminent physical death.

Reply
Astronomer Eric
3/28/2020 01:29:10 pm

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3644539/

This discussion led me to this article. I’ve been struggling through it, looking up seemingly every other word! Haha. Does anyone think it has valuable information on the neurological mechanisms of how we feel?

Reply
SelfAwarePatterns link
3/28/2020 02:16:14 pm

AE,
Based on the abstract and a quick scan, this paper looks pretty good. I like the distinction they make between the cause and the representation of the affect, with the cause generally being subcortical and the representation being cortical.

One of the debates in affective neuroscience is where the affect itself, the actual experience of the feeling, takes place. A lot of basic emotion advocates situate it in those subcortical regions. But most neuroscientists situate it in the cortex, particularly driven by the orbitofrontal and anterior cingulate regions.

I think this paper strikes the right balance. The causal origins of affects are subcortical but not themselves conscious. That only happens once the representation of that subcortical processing is formed in the cortex and widely shared.

I will just note that using the word "cause" can be problematic, since an affect can be invoked "in reverse". For example, if you think about something that once made you angry, it forms the representation in the cortex, which can then propagate downward and lead to the subcortical circuits firing and you actually becoming angry.

Astronomer Eric
4/4/2020 06:42:01 am

Thanks for commenting on this, Mike! Sorry I've been AWOL; I get distracted easily, as I'm trying to absorb information like I'm in the Matrix movie. I've been going back to try to actually understand quantum mechanics this time around, instead of how poorly I grasped it in my last year of college. Eric Weinstein just released his Geometric Unity theory (more about this coming on your website, on the post about Space Travel!), and even though I'm not sure I have enough time to get to the point where I can even come close to understanding it, I want to at least get as close as I can.

Your point about how an "angry" representation is formed in the cortex makes me think of Maslow's motivation theory (doesn't everything these days...haha). I think one of the more important parts of his theory was his attempt, as a psychologist, to explain all the neuroses and pathologies that people can form in their lifetimes, which motivate them to act in ways that can be harmful to themselves and others. My understanding is that he felt traumatic needs-deprivation episodes were the primary cause: people form traumatic memories from these experiences, and those memories influence their behavior going forward, much like what you mentioned about the feeling of anger arising when thinking about something in the past, which could very well cause one to act out in anger in a situation that didn't actually warrant it. Maybe as we come to better understand the neurological side of this (such as the interplay between the cortex and subcortex you mentioned here), we will be better able to help people heal from the various stressful "survival events gone wrong".

Philosopher Eric link
3/28/2020 09:00:37 pm

Actually Ed, unlike blueprints, my thought experiment does get into “information processing”. This is proposed to occur when a database is used to convert input information on paper into output information on paper. Mike has even vouched for my scenario by asserting that he suspects that if the information which goes to my brain when my thumb gets whacked were effectively represented on paper, and processed into other information on paper by means of a sufficient database of instructions for how to handle it, then this would cause something to feel the sensations that I do when my thumb gets whacked. (Though please do correct me if I’m wrong about this, Mike.) And why would he assert such a thing? Because he has integrity. When faced with the uncomfortable implications of various beliefs, many seem to look for excuses. Mike instead asks for evidence to the contrary. While I don’t have that, I do consider my argument itself to be a reasonable demonstration that the majority explanation in science today regarding “the hard problem of consciousness” (or whatever) violates the metaphysics of causality.

What I’ll need in order for my position here to potentially be taken seriously in the academic community is for people beyond me to repeat it themselves as an interesting contrary observation (and once it hits home, I see you as a prime candidate for this). It’s Dennett’s “meme” thing. I’m also having a great conversation with professor Eric Schwitzgebel in this regard right now. You might find it helpful, and remember that we’re essentially on the same page, though I have some modern nuances:
https://schwitzsplinters.blogspot.com/2020/03/snail-and-slug-consciousness-and-semi.html?showComment=1584816606837&m=1#c7044329176468716132

Astronomer Eric,
It’s nice to meet you, and especially in light of my desire for more “meme power”. I know exactly what you mean about needing to look up so many terms. That was me when I started blogging in 2014. In fact I still look things up frequently. But in the early stages the real bitch here is that definitions are commonly provided through words that are themselves quite esoteric. Why not attempt to use standard English rather than run people through a maze of other terms that are in turn esoteric and in need of explanation? Or if that’s how it must be, why not add an ordinary-language explanation in the side notes as well? And in truth, who’s going to skip the ordinary-language account? Standard intellectualism makes it more difficult for people in general to participate in their own education, and it seems to me that many soft scientists use their personal mastery of language to insulate themselves from criticism.

You’ve mentioned “epistemology”, a term which tripped me up for quite a while. I currently think of this as conscious understandings which are developed to approximate what actually exists. None of our senses, for example, provide us with information about what actually exists, though what they do provide does generally seem useful anyway. It’s “ontology” which is instead meant to represent what actually exists. There’s only one element of reality which I’m able to assert exists ontologically rather than epistemologically: that I currently exist in some form or another.

If you haven’t already, make sure that you also check out Mike’s Wordpress site some time.
https://selfawarepatterns.com/

Reply
Astronomer Eric
4/4/2020 07:10:24 am

Hi Philosopher Eric!

I don't know if you saw my comment on another post, but I chose my name here modeled after yours, since both our names are Eric, and I'm guessing you ran into another Eric at some point in the past and added "Philosopher" to yours to differentiate yourself.

Yes! I am currently a "meme vacuum", if that term even makes any sense. Evolution rates seem to increase during periods of environmental instability, and we seem to be finding ourselves in a very unstable environment currently. I think that meme evolution is going to be very active for a while in the current state of affairs. I'd like to have as many good ones swirling around in my neural network as I can. :)

Thank you for giving me your definitions of epistemology and ontology. Can I give you an example to evaluate to see if I understood you?

Example:
Epistemology: A flower appears yellow to our eye, and thus we may suppose that bees are attracted to the color yellow.

Ontology: Looking at the same flower in different wavelengths of light reveals different features. In particular, under ultraviolet light one can see extra patterns near the pollen-bearing parts of the flower. Realizing that there are many aspects of the flower's nature invisible to our normal senses forces us to accept that we may never know "exactly" the underlying relationship between the bee and the flower.

Thanks!

Reply

Philosopher Eric link
4/4/2020 09:46:50 pm

Astronomer Eric,
It sounds to me like you’ve got the hang of “epistemology” and “ontology”. The more that you see these terms used, the more comfortable you should feel using them yourself. You do seem quite interested in this stuff.

I didn’t choose my name because I worried about duplicate Erics. It was more a worry that if I used my legal name, angry people might track me down! I figured that some would be quite unhappy with my various positions. I’ve been critical of philosophy because I believe the field needs to develop various generally accepted understandings. So I guess I chose this name as a respectful gesture, certainly given that some would naturally consider me “anti-philosophy”.

Earlier today I left a comment over at The Splintered Mind that gets into this a bit.
http://schwitzsplinters.blogspot.com/2020/04/wisdom-and-chaos.html?showComment=1586013978884&m=0#c3438945923529408081

Actually the philosophy professor who runs it is also named Eric, and he originally mistook me for another philosophy professor! So for your own pseudonym, why did you choose “Astronomer”?

SelfAwarePatterns link
3/28/2020 10:01:05 pm

P Eric,
Thanks for the compliment, and the site plug!

On the thumb pain scenario, as you know, my stipulation on that (which hopefully you won't see as an excuse) is that any process which produces actual feeling will require hundreds of billions of instructions. Brains do it because they're massively parallel, and it's conceivable with computers because they're fast, and they'll probably be massively parallel by the time they can do it. A person doing it manually with pencil and paper would take thousands of years (at least), a seasoning that I think makes that bullet easier to bite. A rough arithmetic check follows.
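As a rough check on that timescale (a minimal sketch; the instruction count and the one-step-per-second pace are purely illustrative assumptions):

    # 100 billion pencil-and-paper steps at one step per second,
    # working around the clock with no breaks:
    instructions = 100_000_000_000
    seconds_per_year = 60 * 60 * 24 * 365
    print(instructions / seconds_per_year)  # roughly 3,171 years

So even the low end of "hundreds of billions" puts the manual version into the millennia.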

Reply
Philosopher Eric link
3/28/2020 11:24:34 pm

No Mike, by “seasoning” such a jagged pill, I don’t consider you to be weaseling out. You’re simply providing a different opinion given your own understandings. For rational people this leaves things debatable.

Actually, in my most recent version of this thought experiment over at The Splintered Mind, I’ve made an adjustment given the professor’s “symbol” observation that I think helps in general. Furthermore, as an afterthought I added something that you might appreciate given your just-mentioned concerns.

Note that to me any practical virtues of a given thought experiment will remain extraneous features of it. To me these things are simply about exploring conceptual and definitional boundaries, not experimental purposes. For this version, however, I did remove the stipulation of “human with a pencil and paper”. It simply doesn’t matter. So let’s go with the presented scenario by means of a “super duper” computer. And let me know what you think of this particular iteration:
https://schwitzsplinters.blogspot.com/2020/03/snail-and-slug-consciousness-and-semi.html?showComment=1585410668583&m=0#c4359793176434215030

Reply
Ed Gibney link
3/29/2020 05:04:23 pm

More random observations since I can't keep up at the moment:

1. Here's something I wrote about the Chinese Room:

[A standard criticism of the Chinese Room is the "Systems Reply."] This reply concedes that the man in the room doesn't understand Chinese, but the output of the room as a whole reflects an understanding of Chinese, so you could think of the room as its own system. As Baggini pointed out in his discussion of this, the Systems Reply "isn't quite as crazy as it sounds. After all, I understand English, but I'm not sure it makes sense to say that my neurons, tongue, or ears understand English. But the booth, John, and the computer do not form the same kind of closely integrated whole as a person, and so the idea that by putting the three together you get understanding seems unpersuasive." This line of thinking agrees with Searle, who argued that no reasonable person should be satisfied with the Systems Reply without making an effort to explain how this pile of objects has become a conscious, thinking being.

From an evolutionary philosophy perspective, this criticism of the "Systems Reply" also chimes with what theoretical evolutionary biologists John Maynard Smith and Eors Szathmary said in The Origins of Life in their analysis of ecosystems:

"Consider a present-day ecosystem—for example, a forest or a lake. The individual organisms of each species are replicators; each reproduces its kind. There are interactions between individuals, both within and between species, affecting their chances of survival and reproduction. There is a massive amount of information in the system, but it is information specific to individuals. There is **no additional information concerned with regulating the system as a whole**. It is therefore misleading to think of an ecosystem as a super-organism."
----

2. I'm glad Mike was here to comment on AstroE's neuroscience paper so I didn't have to!

3. PhilE — I don't think I'm as good a candidate as you think I am. I despair all the time that I'm shouting into the wind without a good relationship to the academy. I've long considered a PhD to remedy this. Where I have gained traction is with articles published in peer-reviewed journals; maybe consider that route for your thought experiment. As it stands now, though, thought experiments are supposed to be intuition pumps, but yours just isn't hitting home with me or priming any a-ha moments. Maybe this is because we're "on the same side", or because we're just not tilting at the same windmills, or because it's too complicated for me and I can't spend enough time with it. Maybe those observations alone will tell you something useful for now. Sorry I can't give you anything more.

Reply
Ed Gibney link
3/29/2020 05:34:37 pm

Okay PhilE, I can give just a little more.

Thanks, by the way, for linking to the Splintered Mind's review of The Evolution of the Sensitive Soul. I thought I was going to have to wait for Mike's review of this book to learn more about it, but I really enjoyed Schwitzgebel's review. I'm with him that there are no joints in nature. And his love of snails is hilarious!

Upon re-reading your thought experiment in the comments there, I realised that here is precisely where you've lost me:

<Paper symbol 1>
"could, with a sufficient database, be converted"
<Paper symbol 2>

I have no idea how that middle phrase is defined. And since that's where the consciousness is supposedly occurring, that's the really important part. The fact that it's paper symbols on either end doesn't matter at all. They could be the actual bodily thumb pains, computer representations of them, or stone-chiseled pictograms. This renders the thought experiment less ridiculous, because it's not obvious on the face of it what exactly is occurring in the middle.

Reply
Philosopher Eric link
3/30/2020 05:44:54 am

Ed,
You’re going to love Eric Schwitzgebel; he’s about as nice a guy as you’re likely to meet. I have no idea what his thoughts would be on your own themes, though you might explore that over there as I do mine.

On “…with a sufficient database be converted…”: I was merely referencing what computers do to convert incoming information into outgoing information. In John Searle’s version, he had a giant filing cabinet of instructions from which to personally convert incoming Chinese characters into appropriate outgoing Chinese characters, hypothetically by performing countless pencil-and-paper operations.

Some may object to Searle’s conclusion because it would take thousands or millions of years for John to run through enough operations to reflect the processing that a computer would need to do in order to convincingly pass a Turing test. I don’t consider this a valid criticism, however, since people shouldn’t hold the slowness of John against the function of an advanced future computer. They’d be doing the same thing at different speeds.

Regardless, I’ve changed my own version so that the information on paper which correlates with the information that my brain receives when my thumb gets whacked is not manually processed into output information by a human with instructions and writing instruments. Instead it’s processed by an advanced future computer that has instructions and writing instruments. It’s all the same as I see it.

A tl;dr assessment: the majority position in science today is that it doesn’t matter what the information medium happens to be: neuron firings, conventional electrical current, paper, or anything else. When certain information is properly converted into other information, it’s thought that what results will be the very thing that I experience when my thumb gets whacked. Why? Because from this perspective, whatever processing is done to convert the first set of information into the second will itself exist as a given phenomenal experience. Do we know of anything else that exists as information processing sans output mechanisms? No; phenomenal experience would be the first example, if ever validated. And why do so many naturalist scientists, such as Patricia Churchland, continue to support this premise? I suspect they do so as a convenience which hasn’t yet been sufficiently assessed. Like Searle, I aim to help people evaluate the premise upon which their current work is based. (Furthermore, it’s not like they’d inherently need to discard ALL that they’ve built to date. It’s just that an additional step would theoretically be needed as well. So come on people, let’s get natural!) A sketch of the medium-and-speed-independence claim follows.
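To pin down the premise being disputed, here is a minimal Python sketch (the rulebook and all names are invented for illustration) of one input-to-output mapping realized at two very different speeds. On the majority view as described above, if one realization produces the experience, both do:

    import time

    # One invented rulebook, two realizations of the same mapping.
    RULE = {"THUMB_WHACK": "PAIN_REPORT"}

    def fast_computer(symbol: str) -> str:
        return RULE.get(symbol, "NO_RULE")

    def slow_clerk(symbol: str) -> str:
        time.sleep(1.0)  # stands in for eons of pencil-and-paper work
        return RULE.get(symbol, "NO_RULE")

    # Functionally identical outputs; the disputed question is whether
    # either run thereby feels anything at all.
    assert fast_computer("THUMB_WHACK") == slow_clerk("THUMB_WHACK")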

Reply
Ed Gibney link
4/4/2020 02:07:46 pm

AstroE,

A good essay about epistemology vs. metaphysics was recently posted by Massimo Pigliucci. He and I had a bit of a Twitter debate about some of the details in it, but I think it would be helpful for you here:

https://medium.com/the-philosophers-stone/the-crucial-difference-between-metaphysics-and-epistemology-7943158aba52

Reply
