
Consciousness 7 — More On Evolution

3/27/2020

In the last post, I introduced Dan Dennett's evolutionary perspective on consciousness. I mentioned that he's been working on this for decades, and during that time he has been a ...productive... philosopher to say the least. That sometimes makes him challenging to keep up with, but I personally think his quality is very high, so I wanted to spend one more post with him before making the transition to hearing from neuroscientists.

In this post, I'll be relying on another podcast with Sean Carroll — Episode 78: Dan Dennett on Minds, Patterns, and the Scientific Image. In a recent January 2020 tweet, Dan Dennett himself said this was "Another excellent interview, this time with Sean Carroll. If you haven't overdosed on Dennett in the last few days, this will clarify key points." Here, then, are some of those key clarifying points:
  • [Do you have a simple definition of consciousness?] No. But that’s okay. That’s the way science works too. There’s no perfect definition of time or energy, but scientists get on with it.
  • Consciousness emerges (in the innocent sense, not the woo one), and the idea that consciousness is one thing, that everything in the universe is either conscious or not, that the light is either on or off—that is a fundamental error. But it is very widespread.
  • The search for the simplest form of consciousness, therefore, is a snipe hunt. Starfish have some elements of consciousness, so do trees, and bacteria. (But not electrons.) We can argue about motor proteins. The question of “where do you draw the line?” is an ill-motivated question. Where do you draw the line between night and day?
  • Electrons can’t accrue memories. They do not change over billions of years. They do not participate in the arrow of time, so there is no way for them to be said to have intentions, feelings, purposes, or goals.
  • Human consciousness is much different from the consciousness of other species. This is an embattled view, but I’m pretty sure of it. It’s hard to see this because consciousness has a moral dimension and we want to be kind to animals. But don’t worry. The conscious properties we share with mammals and birds, and to some degree with reptiles and fish, are significant. Moral significance itself is also a graded notion.
  • UK law says it is now illegal to throw a live octopus onto a hot grill. This one species is an honorary vertebrate. It’s not all cephalopods, although maybe it should be. Lobsters can be boiled. Squid can be grilled live. Vertebrates must all be treated humanely. The law has to draw a line and these need to be reasonable to a vast majority of the people.
  • Human minds are profoundly different from other minds, because they are obliged to articulate reasons. This is why I’m interested in the history and evolution of language.
  • If I ask you to picture a rope and imagine climbing up it, you can do it. I specifically chose those objects and actions because they are exactly what a chimp in a zoo is familiar with. If I asked a chimp to do the same thing, could it? We don’t know, but I suspect not, because you can’t do it wordlessly. You need to be able to interact using language. Without language, I don’t think you have the cognitive systems for self-simulation and self-probing that we have. ...  Language allows us to be conscious of things we otherwise wouldn’t be able to be conscious of. If you believe that recursion and self-representation are crucial to consciousness, then language is a huge part of that as a useful tool.
  • Degrees of freedom is a concept I’m using more lately. A degree of freedom is an opportunity for control. Degrees of freedom can be clamped or locked down to be removed. How many degrees of freedom do humans have? We can think of millions and millions of things, orders of magnitude more than a bear can, even with roughly the same number of cells. So, our complexity is higher. The options a bear has are a vanishing subset of the options that we have. Learning to control these options is not now a science. It is an art.
  • Many theories of consciousness only have half of the theory. The upward stream. But what then? What does consciousness enable or take away from? The answer is that almost anything can happen [with consciousness]. But we need a neuroscientific theory as to how that happens.

Brief Comments
I can't say that Dennett puts a foot wrong here. His commitment to evolutionary thinking and to following the evidence leads him to some conclusions that are out of step with much of society, but I find myself pretty much right there with him. I would question his point about electrons not having any elements of consciousness, but that's probably just a matter of terminology, plus my own speculation that we may someday trace a path from physics to chemistry to biology (where Dennett does find conscious elements). Without a good theory of abiogenesis (i.e. the origin of life), Dennett seems happy to pragmatically confine himself to studying consciousness as if it were a material phenomenon. I agree that's a useful hypothesis to hold until something better comes along.

I also really liked Dennett's use of the engineering terminology "degrees of freedom". This reminds me of "the parable of the immune system" that the evolutionary scientist David Sloan Wilson often uses to make a point. For example, on The Psychology Podcast (Episode 167: Evolution and Contextual Behavioral Science), Wilson said:


"The human immune system is immensely modular. We inherit it, and it does not change during our lifetime. It is something that evolved by genetic evolution, but it is triggered by environmental circumstances just as the evolutionary psychologists like to point out. The adaptive component of the immune system is highly evolutionary. That’s the ability of antibodies to vary and for the successful antigens to be ramped up. So that’s an evolutionary process that takes place during the lifetime of the organism. The whole thing is densely modular but also amazingly open-ended. Why can’t we say the same thing about the human behavioral system?"

It seems obvious (to me anyway) that we can say the same thing about our behavior—that it adapts during our lifetimes to successful and unsuccessful interactions with the environment. And it seems that more and more consciousness might give life more and more degrees of freedom as it helps an organism make better sense of its environment. But to really consider that, we'll need to consider Dennett's questions: "But what then? What does consciousness enable or take away from?" And to do that, it's time to turn to the neuroscientific theories of consciousness being developed and explored by scientists.

What do you think? Does Dennett's evolutionary perspective continue to make sense? Are there any gaps in the story that need more explanation? Let's discuss that in the comments below.

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
12 Comments
SelfAwarePatterns
3/27/2020 05:58:09 pm

I'm mostly on board with Dennett. In particular, the point about the fallacy of things either being or not being conscious is a crucial one. That fallacy encourages people to think there's some magical point where the lights suddenly come on.

I've often said there was no first conscious creature. There were only gradually accumulated capabilities until a point was reached where we might be tempted to use the label "conscious". But the first one to reach that level wouldn't have been very different from its parents.

On his point about language, I like the way you describe it, basing it in recursive metacognition and symbolic thought, the foundations that language is built on. (Granted, those capabilities likely co-evolved with and in support of language, but it's the underlying capabilities, not the language itself, that are causal.)

But it's important to realize that a human can lose the ability to understand language from damage to Wernicke's Area in their brain, and we'd still consider them to be conscious, albeit in a disabled manner. So this capability definitely sets humans apart, but most people's intuitions of consciousness, at least primary consciousness, don't require it.

James of Seattle
3/27/2020 07:07:47 pm

I, of course, am also mostly with Dennett, but I guess we can talk about some tweaks here. Like Mike, I also think that language is not so much a cause of conscious capability as a result of the underlying mechanisms.

I also agree with Ed that Dennett’s dismissal of electrons having any feature of consciousness is not necessarily correct, especially with regards to the statement that they do not participate in the arrow of time.

Finally, I’m not sure I agree with Mike that there is no delineation between conscious and unconscious. There certainly is no consensus delineation, but I think for any given theory there will be a delineation, even if it is arbitrary. From his comment on “no first conscious creature” he is putting the term “Consciousness” in the “heap”-type-term heap, as it were. But I think there is a series of discernible stepwise advances from existence to function to representation to computation to pattern recognition (unitrackers) to conceptualization to ...

[I’m gonna blog a summary soon, I promise].

SelfAwarePatterns
3/27/2020 10:34:05 pm

James,
My point was indeed primarily about the lack of consensus on which capabilities are necessary and sufficient for consciousness. But even if we look at those capabilities individually, they are complex mechanisms, and unlikely to have just popped into existence.

For example, how much information do we need before we have a representation? An individual light sensor can indicate the presence or absence of light, and perhaps its intensity. If the light sensor has two receptors, then the direction of the light can be ascertained. Four provides more information, 16 even more. But at what point do we have enough of an image for a visual representation? And what makes that point an objective fact?

James of Seattle
3/28/2020 01:08:43 am

How much information do we need? We need a vehicle that has some mutual information with, i.e., some correlation with, some other physical system. An individual light sensor is enough to produce a representation of light or no light. More sensors can provide correlations with more refined patterns, such as light above vs. light below. What is actually represented depends on whatever created/arranged the sensing mechanism.
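
If it helps, here is a toy sketch of that point in code (purely my own illustration, with made-up reliability numbers, not a claim about real sensors or neurons): a single noisy sensor ends up sharing mutual information with the light, and that correlation is all the "representation" amounts to at this level.

    import math
    import random

    # Toy sketch: one noisy light sensor. Its reading shares mutual
    # information with the actual light state, and that shared information
    # is what lets the reading stand in for ("represent") light vs. no light.

    def mutual_information(pairs):
        # Mutual information in bits between two discrete variables,
        # estimated from a list of (x, y) samples.
        n = len(pairs)
        px, py, pxy = {}, {}, {}
        for x, y in pairs:
            px[x] = px.get(x, 0) + 1 / n
            py[y] = py.get(y, 0) + 1 / n
            pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
        return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

    random.seed(0)
    samples = []
    for _ in range(10000):
        light = random.random() < 0.5      # the world: light on or off
        misfire = random.random() < 0.1    # sensor is wrong 10% of the time
        reading = light if not misfire else not light
        samples.append((light, reading))

    print(round(mutual_information(samples), 2))  # about 0.53 bits of the 1.0 possible;
    # a perfect sensor would give 1.0, a disconnected one 0.0.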

James of Seattle
3/28/2020 01:16:02 am

Actually, need to change that last line of my other response. What is represented is determined by whatever creates the responding mechanism. The representing vehicle is an affordance of representation for anything with which it shares mutual information. Any given representing vehicle could be used as representation for more than one thing. A sign that reads “Get great food here” could represent a restaurant, but it could also represent someone who understands English well enough to create the sign.

SelfAwarePatterns
3/28/2020 12:42:37 pm

James,
That conception of representation seems pretty liberal. When I use "representation" in terms of mental phenomena, I generally mean mental imagery. What would you say distinguishes a representation from a symbol, or just plain information?

James of Seattle
3/29/2020 02:48:24 am

I guess I have a preference for “liberalizing” a concept over having to create new words, like proto-representation.

As for your question, exactly what do you mean by “symbol”?

SelfAwarePatterns
3/29/2020 12:54:29 pm

I think of a symbol as something that stands in for something else, but in a manner that is somewhat arbitrary. But language is the issue here, because it's tempting to say the symbol "represents" something else, which invites us to see it as synonymous with "representation".

Maybe I just need to sharpen my language. I've been using "representation" to mean either mental imagery or mental models of some type or another, which have a more isomorphic relation to their subject matter than straight out symbols do.

James of Seattle
3/29/2020 07:08:14 pm

I guess from your point of view I am using symbol and representation as synonyms, except when referring to representation as a process, in which case representation is the functional response to a symbol.

So my question now is what do you think “mental imagery” is? I’m going to suggest it’s nothing other than the activation of pattern recognition mechanisms, unitrackers, which activation creates symbols representing the target of the (Unitracker) Mechanisms, which symbols are used in a representation process for some purpose. One purpose might be to combine the symbols and create a new, temporary mechanism (unitracker, working memory) whose target is the combined patterns/targets of the original unitrackers.

Perfectly clear, yes?

SelfAwarePatterns
3/29/2020 08:16:24 pm

James,
I think of mental images as prediction frameworks, which could be seen as simply another way of saying a pattern recognition mechanism.

The rest seems plausible to me. I often wonder if unitracker isn't just a functional label of a neuron.

Ed Gibney
3/29/2020 04:20:24 pm

Just some random observations since I can't keep up with you all right now.

When I watch my dog tilt his head wildly to understand what I'm saying to him (and he does understand lots of words, and even some basic grammar with his responses to novel noun+verb combinations), I too wonder about Dennett's emphasis on language in minds. Maybe my dog developed his mind by being around my language his whole life, but he must have some innate abilities to grasp this that many nonhuman animals seemingly don't. Dogs are such good social communicators among themselves (e.g. reading tail positions, facial expressions, play bounce body movements) without needing words to communicate.

I've also wondered about my own ability to hear in my head noises I can't possibly recreate. (Listen to a birdsong or steam whistle in your head right now.) Does that mean other nonhuman animals might be able to do the same? Could my dog "hear" me saying "treat" to him when he's on his own? Or what about while he's sleeping and his tail starts thumping?

Regarding words such as "consciousness," "heaps," "symbols," etc., I do think there can be a fundamental problem here of trying to map an ever-evolving, always-changing universe onto any fixed and immutable word. This is another example of what I have labeled "the static-dynamic problem" that infects all of philosophy, including logic in a non-obvious way. Give me a static picture; I can arbitrarily label it with something useful. Give me a dynamic universe; those labels might become less useful or completely fall apart. These are acknowledgements that have to be made, and they help avoid dogmatism and semantic squabbles.

James of Seattle
3/30/2020 01:40:15 am

Ed, I see your point (I hope) about definitions. But that’s essentially what I’m trying to get at with Mike here. In order to progress in a dialog, you need to determine the other person’s definitions and/or make clear your own.

I’ve gotten to the point where I can translate Mike’s “predictive processing” and “mental models” to my pattern recognition systems (and unitrackers), but I hadn’t nailed down his use of “mental image” or “symbol”. Based on the above, mental image = mental model = pattern recog. mech.

Now the term “symbol” has a lot of philosophical baggage. See C. S. Peirce and semiotics. I think Mike is using “symbol” as Peirce would use “sign vehicle”, but I’m trying to figure that out. I think it’s important because I think Consciousness is best explained by leveraging the work of Peirce and semioticians. That’s what “representation” is all about. That’s what neurotransmitters are: sign vehicles.

Quick explanation of why “mental image” threw me off. Peirce, at the most basic level (which is enough, trust me), talks about three kinds of “signs”. (In quotes because “sign” = “whole pattern recognition process” = “representation process”). The three kinds are iconic, indexical, and symbolic. Iconic means isomorphic (I think), so a photo of a person can be a sign vehicle for representing that person. That’s why “mental image” suggests iconic to me. I think very little sign usage in the brain is iconic, although Mike mentioned he thought some part might involve isomorphism, so I would like to know how that might work.

For indexical signs, the paradigms are smoke representing fire, or someone pointing toward the exit representing where the person should go, or at least look. I haven’t worked this out thoroughly, but my first instinct was to say that this is what single neurons are doing. A neuron firing creates a sign vehicle (a neurotransmitter) that represents that the input requirements for that neuron, whatever they are, have been met. On further thought, I think a neuron may be creating symbols.

Traditionally, a symbol is an arbitrary sign vehicle that has a meaning which is coordinated between the generator and the interpreter. I think this applies to the single neuron, as the neurotransmitter used is somewhat arbitrary, and the meaning is coordinated between the generating neuron and the receiving neuron by selection, either genetic or “learned”. [Haven’t spent a whole lot of time working this out. Can you tell?]

Things get interesting when you have one mechanism that could potentially represent multiple targets. A set of a few hundred neurons could represent, depending on their firing patterns, a multitude of different things. But I think these representations may be indexical, which are essentially pointers to specific unitrackers located elsewhere. Thus, semantic pointers.
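
To show very crudely what I mean by symbols as pointers, here is a toy sketch in code. It is entirely my own illustration with invented names and patterns, not anyone's actual neural model: a firing pattern acts as an arbitrary symbol, and it only means anything because it points to a unitracker stored elsewhere.

    # Crude toy picture: a firing pattern over a small population is an
    # arbitrary symbol; it only "means" anything because it points to a
    # unitracker (pattern-recognition mechanism) stored elsewhere.

    unitrackers = {
        "fire":        lambda scene: "smoke" in scene,      # indexical flavor: smoke -> fire
        "exit":        lambda scene: "exit sign" in scene,
        "grandmother": lambda scene: "grandmother" in scene,
    }

    # The pattern-to-target mapping is arbitrary (symbolic); it works only
    # because sender and receiver were coordinated by selection or learning.
    codebook = {
        (0, 1, 1, 0): "fire",
        (1, 1, 0, 0): "exit",
        (1, 0, 0, 1): "grandmother",
    }

    def interpret(firing_pattern, scene):
        # Resolve the semantic pointer, then run the unitracker it points to.
        target = codebook.get(tuple(firing_pattern))
        if target is None:
            return None                      # no coordinated meaning for this pattern
        return target, unitrackers[target](scene)

    print(interpret((0, 1, 1, 0), {"smoke", "trees"}))   # ('fire', True)
    print(interpret((1, 0, 0, 1), {"smoke", "trees"}))   # ('grandmother', False)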
