
Consciousness 15 — What is a Theory?

4/16/2020


 
In the last post, I finished my series of reviews about what I consider to be the best theories and data about consciousness that are currently available from philosophers and scientists. I was planning to start laying out my own thoughts about this subject in today's post, but as luck would have it, I happened to come across an amazing lecture last night that I thought would be helpful as a transition and setup before I continue.

A few days into this coronavirus lockdown, I stumbled across an app called Kanopy that lets you log into it using your local library account, and then watch stuff online that you could normally check out of your library. All for free! It's such a great idea. As it happens, my wife's university library account also gave us free access to The Great Courses, which is a real treasure trove of university-level lectures. For reasons I don't need to go into now, I started watching a class called An Introduction to Formal Logic by Professor Steven Gimbel of Gettysburg College. Last night, I made it through lesson 7 on inductive reasoning. (Quick recap: deductive reasoning narrows down from a big rule to small facts, while inductive reasoning grows out from small observations to general rules. Of course, the problem of induction is well known as "the glory of science and the scandal of philosophy.")

Towards the end of this lecture, Gimbel went over the difference between using inductive reasoning for a theory versus using it for a hypothesis. This ended up being one of the best passages I've seen for explaining why Darwin's great idea is called the theory of evolution rather than the fact of evolution. This will also come in handy for anyone who wants to put together a theory of consciousness. Enjoy.


--------------------------------------------
Take Newton’s theory of motion, which comprises three laws: 1) the law of inertia; 2) the force law; and 3) the action-reaction law. Put them together, and you have a full theory of motion. But what we have here are three general propositions, not specific observable claims. These general laws are then combined to form a system from which we can derive specific cases by plugging in the conditions of the world.

These proposed laws of nature, which function as the axioms of the theory, should not be confused with hypotheses. Hypotheses are proposed individual statements of possible truth. They are more specific than the axioms, and we get evidence for them individually. The axioms work together as a group. We may be able to derive hypotheses when working within the theory, but the parts of the theory themselves are not hypotheses.

For example, a hypothesis would be, “If I drop a 10-pound bowling ball and a 16-pound bowling ball off the roof of my house, they will land at the same time.” I could test this with a ladder and two bowling balls. Hypotheses are open to such direct testing. The purported laws of nature in Newton's theory, however, are different. Consider Newton’s First Law. If I have an object, and there’s no external force applied to it, then it will move in a straight line at a constant speed. At first glance, this seems like it should be just as testable as the hypothesis about the bowling balls. But the problem is that there can be no such object without an external force applied to it! As soon as there’s any other object in the universe, the object we're examining would feel the pull of gravity, which is an external force. So, Newton’s law of inertia, a vital part of his theory of motion, holds for no actual object. If we treat it like we do hypotheses, it would be kind of like having a biological law about unicorns. So, we have to have different inductive processes for hypotheses and for theories.
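[A quick worked version of that bowling-ball hypothesis, ignoring air resistance (the numbers are mine, just for illustration): for an object of mass $m$ dropped from rest at height $h$, Newton's second law gives $ma = mg$, so the mass cancels and the fall time is $t = \sqrt{2h/g}$. For a roof 5 metres up, $t = \sqrt{2(5)/9.8} \approx 1.0$ seconds for both the 10-pound and the 16-pound ball. That is the kind of specific, directly testable claim a hypothesis makes, and the kind the laws themselves never make.]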

[Karl Popper gave us the idea that hypotheses must be falsifiable. Hypotheses are tested using independent and dependent variables, i.e. the things we adjust and the things we measure.]

What about theories? Here, the philosopher Hans Reichenbach drew a distinction between discovery and justification. What this distinction has come to mean is that there is a difference between the context in which scientists come up with their theories, and the context in which they provide good reasons to believe those theories are true. The context of discovery is generally thought to be free. There’s no specific logic of discovery, no turn-the-crank method for coming up with scientific theories. The great revolutionaries are considered geniuses because they were able to not only think rigorously, but also creatively in envisioning a different way the world could work. There’s no logic that tells scientists what to consider when coming up with new theories.

While there’s no set method, surely there is induction in there somewhere. Scientists are working from their experiences and their data. They have a question about how a system works, they consider what they know, and they make inductive leaps. They look for models and analogies where the system could be thought to work like a different system that is better understood. So, while there’s no set means of using induction in the context of discovery, it usually plays some kind of role.

The most important place we find induction in scientific reasoning is in the context of justification. Once a theory has been proposed, why should we believe it? Theories are testable. They have effects, results, and predictions that come from them. These observable results of a theory are determined deductively. That is, if a theory is true, then, in some given situation, let's say that observable consequence O should result. We go to the lab, set up the situation, and see if we observe O as expected. If not, then the theory has failed, and, as it stands, it is not acceptable. It will either have to be rejected or fixed. But, if the theory says to expect O, and we actually do observe O, now we have evidence in favour of the theory. That evidence is inductive. It may be that theory T1 predicts O, but there will also be other theories, say a T2 that differs from T1, which are also supported by O. As such, neither T1 nor T2 is certain. (To the degree that inductive inferences could be anyway.)
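[One common way to formalise this inductive support, using Bayes' theorem (my gloss, not Gimbel's): if theory $T_1$ deductively entails observation $O$, then $P(O \mid T_1) = 1$, so $P(T_1 \mid O) = P(O \mid T_1)\,P(T_1)/P(O) = P(T_1)/P(O) \geq P(T_1)$. Observing $O$ raises the probability of $T_1$, but it can never force it to 1, because a rival $T_2$ that also entails $O$ gets boosted by exactly the same evidence.]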

How then do we go from supporting evidence (which makes a theory more likely) to conclusive evidence (which makes a theory probably true)? We need lots of evidence. We also need evidence of different types. It’s good for a theory if it can account for everything we already know. We call this retrodiction. This is particularly powerful when something we already knew had previously gone unexplained. For example, before Einstein’s theory of general relativity, we knew not only that Mercury orbits the Sun, but that each time Mercury made it around, the nearest point of its orbit to the Sun (its perihelion) ended up in a slightly different place. In other words, Mercury did not make the same exact trip around the Sun every time. But we had no idea why! Once Einstein gave us a new theory of gravitation, this effect was naturally explained. The fact that it solved the mystery was taken as strong inductive evidence.
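[An aside from me, not from the lecture: general relativity predicts an extra advance of a planet's perihelion of $\Delta\phi \approx 6\pi G M / (c^{2} a (1 - e^{2}))$ radians per orbit, where $M$ is the Sun's mass, $a$ the orbit's semi-major axis, and $e$ its eccentricity. For Mercury this works out to roughly 43 arcseconds per century, which is almost exactly the leftover precession that Newtonian mechanics could not account for.]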

Even better than explaining what we already know, prediction is also taken as strong evidence. Newton’s theory was used to predict that a comet would return around Christmastime in 1758. When this unusual sight duly appeared in the sky on Christmas Day, the comet (named for Newton’s close friend Edmond Halley, who had made the calculation) was taken as very strong evidence for the theory.

Beyond even prediction, the best evidence for a theory can bring forth what William Whewell termed consilience. Whewell was a philosopher of science, an historian of science, and also a scientist. In fact, he was the person who coined the term scientist. Consilience is when a theory that is designed to account for phenomena of type A turns out to also account for phenomena of type B. If you set out to explain one thing, and are also able to explain something completely different, then that is extremely strong evidence that your theory is probably true.

The reigning champ in this realm is Darwin’s theory of evolution. It accounts for biodiversity. It accounts for fossil evidence. It accounts for geographical population distribution. There’s just a huge range of all sorts of observations that evolution makes sense of. This is stunning, and stands as extremely strong evidence for its likely truth.

This consilience is no accident. In his college days, Darwin was a student of Whewell’s. When he later began to develop his ideas, Darwin was extremely nervous about them. He knew how explosive his view was, so he spent many, many years accumulating a broad array of different sources of evidence in order to demonstrate his theory’s consilience. Some people today contend that evolution is not proven. Well of course it isn’t! The only things that are proven are the results of deductive logic. Darwin’s theory—like everything else in science—is confirmed by inductive logic, which never gives proof, but which offers high probability, and thereby firm grounds, for rational belief.

--------------------------------------------

What do you think? Does this understanding of a theory help you see how science can actually posit ideas that cannot be tested on their own, yet still help us make sense of the world? Are we ready for a theory of consciousness that uses analogies from things we understand to explain everything we know, make some predictions, and offer a consilient view of a wide variety of observations? And might it fit in with the theory of evolution too? Maybe not 100% ready, but I'm going to sketch out a new theory next time and give this all a go.

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
Consciousness 10 — Mind + Self
Consciousness 11 — Neurobiological Naturalism
Consciousness 12 — The Deep History of Ourselves
Consciousness 13 — (Rethinking) The Attention Schema
Consciousness 14 — Integrated Information Theory

Consciousness 14 — Integrated Information Theory

4/11/2020


 
IIT. Simple summary. Devil in the details.

We're finally here! The end of my literature review on consciousness. In the last post, we heard Michael Graziano lump the work of all of the other neuroscientists I've profiled into one "growing standard model." This is by no means comprehensive for the entire field, so there are still people working outside of this model, but there was one particularly glaring omission that Graziano went out of his way to exclude — Integrated Information Theory (IIT). In the final interview in her four-part series on consciousness, Dr. Ginger Campbell spoke with one of the leading proponents of IIT, Christof Koch, about his latest book The Feeling of Life Itself: Why Consciousness is Widespread but Can't Be Computed. There's a lot to consider here so let's get to the highlights:
  • My background is in physics and philosophy. I worked with Francis Crick after his Nobel Prize. We looked for “the neural correlates of consciousness,” i.e. what are the minimal physical / biophysical neuronal mechanisms that are jointly necessary for any one conscious perception? What is necessary for me to “hear” that voice inside my head? Not necessarily to sense it, or process it, but to have that experience.
  • We now know it’s really the cortex—the outermost shell of the brain, about the size and thickness of a pizza, highly convoluted, with left and right hemispheres, the most complex and highly organised piece of matter in the known universe—which gives rise to consciousness.
  • This study of the neural correlates of consciousness is fantastic. For example, whenever you activate such and such neurons, you see your mom’s face or hear her voice. And if you artificially stimulate them, you will also have some vague feeling of these things. There is no doubt that scientists have established this close one-to-one relationship between a particular experience and a particular part of the brain.
  • Correlates don’t, however, answer why we have this experience. Or how. Or whether something like a bee can be conscious. For mammals, it's easy to see the similarity to ourselves. But what about creatures further and further away from us? Or what about artificial intelligence? Or how low does it go? Panpsychism has said it is everywhere. Maybe it is a fundamental part of the universe.
  • To answer these questions, we need a fundamental theory of consciousness.
  • I’ve been working on this theory with Giulio Tononi, which is called the Integrated Information Theory.
  • IIT goes back to Aristotle and Plato. In science, something exists to the extent that it exerts causal power over other things. Gravity exists because it exerts power over mass. Electricity exists because it exerts power over charged particles. I exist because I can push a book around. If something exerts no causal power over anything in the universe, why postulate that it exists?
  • IIT says fundamentally what consciousness is, is the ability of any physical system to exert causal power over itself. This is an Aristotelian notion of causality. The present state of my brain can determine one of the trillion future states of my brain. One of the trillion past states of my brain can have determined my current state so it has causal power. The more power the past can exert over the present and future, the more conscious the thing that we are talking about is.
  • In principle, you can measure this for any system. The exact causal power, a number we call phi, is a measure of how much things exist for themselves, and not for others. My consciousness exists for itself; it doesn’t depend on you, it doesn’t depend on my parents, it doesn’t depend on anybody else but me.
  • Phi characterises the degree to which a system exists for itself. If it is zero, the system doesn’t exist. The bigger the number, the more the system exists for itself and is conscious in this sense. Also the type and quality of this conscious experience (e.g. red feels different from blue) is determined by the extent and the quality of the causal power that the system has upon itself.
  • Look for the structure within the brain, or the CPU, that has the maximal causal power, and that is the structure that ultimately constitutes the physical basis of consciousness for that particular creature.
  • How does this relate to panpsychism? They share some intuitions, but also differ. One of the great philosophical problems with panpsychism is the superposition problem. I’m conscious. You are conscious. Panpsychism says there should be an uber-consciousness that is you and me. But neither of us has any experience of that. Also, every particle of my body has its own consciousness, and there is the consciousness of me and the microphone, or my wife and whatever, or even me and America. But there isn’t anything it is like to be America. This is the big weakness of panpsychism.
  • IIT solves the superposition problem by saying that only the maximum of this measure (phi) exists. Locally, there is a maximum within my brain or your brain. But the amount of causal interaction between me and you is minute compared to the massive causality within. Therefore, there is you and there is me.
  • If we ran wires between two mice or two humans, IIT predicts some things. For example, my left and right hemispheres are connected by a bundle of fibres called the corpus callosum. If you cut it, you get split-brain syndrome—two conscious entities. Now imagine doing the opposite: building an artificial corpus callosum between my brain and your brain. If you added just a few wires, I would slowly start to see some things that you see, but there would be no confusion as to who is who. As more wires are added, though, IIT says there is a precise point in time when the phi across this combined system will exceed the integrated information within either single brain, and at that point, the individuals will disappear and a new conscious entity will arise.
  • What is right about this as opposed to the Global Neuronal Workspace Theory or other approaches? GNWT only claims to talk about those aspects of consciousness that you can actually speak about. This is called "access consciousness." Once information reaches the level of consciousness, all areas of the brain can use it. If it remains non-conscious, only certain parts of the brain use it.
  • There is an "adversarial collaboration" just beginning where IIT and GNWT proponents have agreed on a large set of experiments to see which theory is supported by fMRI, EEG, subjective reporting, etc. In principle this will be great, but practically, we will see.
  • Where the theories really disagree is the fundamental nature of consciousness. GNWT embodies the dominant zeitgeist (Anglo-Saxon philosophy, scientists, Silicon Valley, sci-fi, etc), which says if you build enough intelligence into a machine, if you add feedback, self-monitoring, speaking, etc, sooner or later you will get to a system that is not only intelligent, but also conscious. Ultimately, consciousness is all about behaviour. It’s a descendant of behaviourism saying behaviour is all we can talk about.
  • The other view says no, consciousness is not magical, it’s a natural property of certain systems, but it’s about causal power. To the extent you can build something with causal power, that will be conscious, but you cannot simulate it. E.g. weather simulations don’t cause your computer to get wet. The same thing holds for perfect simulations of the human brain. The simulation will say it is conscious, but it will all be a deep behavioural fake. What you have to do is build a computer in the image of a brain with massive overlapping connectivity and inputs. In principle, this could give rise to consciousness.
  • Could a single cell or an atom be conscious? In the limit, it may well feel like something to be a bacterium. It doesn’t have a psychology, and it doesn’t feel fragile or hungry. But there are already a few billion molecules and a few thousand proteins in there. We haven’t yet modelled this, but yes, most biological systems may feel like something.
  • Has any consciousness of my mitochondria been subsumed into my own? Yes. On their own, mitochondria have phi, but IIT says that once they are put together with something else, that consciousness dissolves. If your brain is disassembled, for example when you die, there may be a few fleeting moments where each part again feels like something. In each case you have to ask what is the system that maximises the integrated information. Only that system exists for itself, is a subject, and has some experience. The other pieces can be poked and studied, but they aren’t conscious.
  • The zap and zip technique is being used to look for consciousness in patients who may be locked in or anesthetised. You zap the brain, like striking a bell, and look at the amount of information that reverberates around the brain. A response that compresses down to almost nothing, one that is "zipped up," indicates unconsciousness (or even death if there is no response at all), whereas a rich, widespread response indicates consciousness. This is progress on the mind-body problem. (Note, you don’t have to believe in IIT or GNWT to use this. A toy illustration of the "zip" step appears just after this list.)
  • Right now, we don’t have strong experimental evidence to think that quantum physics has anything to do with the function of brain systems. Classical physics is enough to model everything so far, but you still have to keep an open mind since we don’t understand all causations.
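To make the "zip" half of that technique more concrete, here is a toy sketch in Python. The real measure (the Perturbational Complexity Index, as I understand it) compresses the binarised, brain-wide EEG response to the zap; the sketch below only illustrates the counting idea behind that compression, using two made-up strings of 0s and 1s that stand in for a stereotyped response and a rich one. It is an illustration of the principle, not the published algorithm.

```python
def lz_phrase_count(bits: str) -> int:
    """Count phrases in a simple LZ78-style parse of a 0/1 string.

    Scan left to right; every time the phrase built up so far is one we have
    not seen before, record it and start a new phrase. Regular, repetitive
    strings need few phrases (they "zip up" a lot); rich, irregular strings need many.
    """
    seen = set()
    current = ""
    count = 0
    for b in bits:
        current += b
        if current not in seen:
            seen.add(current)
            count += 1
            current = ""
    if current:  # leftover partial phrase at the end
        count += 1
    return count


if __name__ == "__main__":
    import random
    random.seed(0)

    n = 400
    monotonous = "01" * (n // 2)                           # a stereotyped "echo" of the zap
    rich = "".join(random.choice("01") for _ in range(n))  # a varied, widespread "echo"

    # The monotonous response compresses to far fewer phrases than the rich one,
    # which is the intuition behind reading low complexity as low consciousness.
    print("monotonous:", lz_phrase_count(monotonous))
    print("rich:      ", lz_phrase_count(rich))
```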
Brief Comments

Although I found this interview to be a good overview, it still left me with a lot of questions about IIT. So, before I make any comments, I want to share a bit more research that I found helpful.

From the Wikipedia Entry on Integrated Information Theory:
  • If we are ever going to make the link between the subjective experience of consciousness and the physical mechanisms that cause it, IIT assumes the properties of the physical system must be constrained by the properties of the experience.
  • Therefore, IIT starts by attempting to identify the essential properties of conscious experience (called "axioms"), and then moves on to the essential properties of the physical systems underneath that consciousness (called "postulates").
  • Every axiom should apply to every possible experience. The most recent version of these axioms states that consciousness has: 1) intrinsic existence, 2) composition, 3) information, 4) integration, and 5) exclusion. These are defined below.
  • 1) Intrinsic existence — By this, IIT means that consciousness exists. Indeed, IIT claims it is the only fact I can be sure of immediately and absolutely, and this experience exists independently of external observers.
  • 2) Composition — Consciousness is structured. Each experience has multiple distinctions, both elementary and higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • 3) Information — Consciousness is specific. Each experience is the particular way that it is because it is composed of a specific set of possible experiences. The experience differs from a large number of alternative experiences I could have had but am not actually having.
  • 4) Integration — Consciousness is unified. Each experience is irreducible and cannot be subdivided. I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). Seeing a blue book is not reducible to seeing a book without the colour blue, or the colour blue without the book.
  • 5) Exclusion — Consciousness is definite. Each experience is what it is, neither less nor more, and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book. I am not having an experience with less content (say, one lacking colour), or with more content (say, with the addition of feeling blood pressure).
  • These axioms describe regularities in conscious experience, and IIT seeks to explain these regularities. What could account for the fact that every experience exists, is structured, is differentiated, is unified, and is definite? IIT argues that the existence of an underlying causal system with these same properties offers the most parsimonious explanation. The properties required of a conscious physical substrate are called the "postulates" because the existence of the physical substrate is itself only postulated. (Remember, IIT maintains that the only thing one can be sure of is the existence of one's own consciousness.)

From two articles (1,2) about the "adversarial collaboration" between IIT and Global Workspace Theory (GWT):
  • Both sides agree to make the fight as fair as possible: they’ll collaborate on the task design, pre-register their predictions on public ledgers, and if the data supports only one idea, the other acknowledges defeat.
  • Rather than unearthing how the brain brings outside stimuli into attention, the fight focuses more on where and why consciousness emerges.
  • The GWT describes an almost algorithmic view. Conscious behavior arises when we can integrate and segregate information from multiple input sources and combine it into a piece of data in a global workspace within the brain. According to Dehaene, brain imaging studies in humans suggest that the main “node” exists at the front of the brain, or the prefrontal cortex, which acts like a central processing unit in a computer.
  • IIT, in contrast, takes a more globalist view where consciousness arises from the measurable, intrinsic interconnectedness of brain networks. Under the right architecture and connective features, consciousness emerges. IIT believes this emergent process happens at the back of the brain where neurons connect in a grid-like structure that hypothetically should be able to support this capacity.
  • Koch notes, "People who have had a large fraction of the frontal lobe removed (as it used to happen in neurosurgical treatments of epilepsy) can seem remarkably normal." Tononi added, “I’m willing to bet that, by and large, the back is wired in the right way to have high Φ, and much of the front is not. We can compare the locations of brain activity in people who are conscious or have been rendered unconscious by anesthesia. If such tests were able to show that the back of the brain indeed had high Φ but was not associated with consciousness, then IIT would be very much in trouble.”
  • Another prediction of GWT is that a characteristic electrical signal in the brain, arising about 300-400 milliseconds after a stimulus, should correspond to the “broadcasting” of the information that makes us consciously aware of it. Thereafter the signal quickly subsides. In IIT, the neural correlate of a conscious experience is instead predicted to persist continuously while the experience does. Tests of this distinction, Koch says, could involve volunteers looking at some stimulus like a scene on a screen for several seconds and seeing whether the neural correlate of the experience persists as long as it remains in consciousness.
  • It may also turn out that no scientific experiment can be the sole and final arbiter of a question like this one. Even if only neuroscientists adjudicated the question, the debate would be philosophical. When interpretation gets this tricky, it makes sense to open the conversation to philosophers.

Great! So let's get on with some philosophising.

Right off the bat, the first axiom of IIT is problematic. It is trying to build upon the same bedrock that Descartes did with his cogito ("I think, therefore I am"). But the rest of Descartes' project is infamously circular: to trust anything beyond that bare starting point, he first had to establish that we were created by an all-perfect God rather than an evil demon, a God who wouldn't let him be deceived about whatever he perceived "clearly and distinctly." Now, the first axiom of IIT claims consciousness is the only fact one can be sure of "immediately and absolutely." This is the same bedrock claim, and it still doesn't hold up. The study of illusions and drug-altered states of experience shows us that consciousness is not perceived immediately and absolutely. And as Keith Frankish pointed out in my post about illusionism, once that wedge of doubt is opened up, it cannot be closed.

Regardless, let's grant that the subjective experience each of us thinks we are perceiving does actually constitute a worthwhile data point. (Even if this isn't a certain truth, it's a pretty excellent hypothesis.) Talking to one another about all of our individual data points is how IIT comes up with its five axioms. But would it follow that ALL conscious experiences have the same five characteristics? No! That would be an enormous leap of induction from a specific set of human examples to a much wider universal rule.

However, despite the universal pretensions of IIT and its definition of phi (which could in principle, though not currently, be calculated for any physical system), when Koch talks about consciousness, he is sometimes referring only to the very restricted human version of it that requires awareness and self-report. This makes him confusing at times, but that's certainly the consciousness he's talking about for the upcoming "adversarial collaboration" that will test predictions about consciousness by proponents of IIT and GWT. It's great to see such falsifiable predictions being made and tested, and of course the human report of consciousness is where we have to start our scientific studies of consciousness, but it's hard to see how these tests will actually end the debate any time soon. Why? Because as we have seen throughout this series, we just don't have a settled definition for the terms being used in this debate. One camp's proof of consciousness is another camp's proof of something else. They could all seemingly just respond to one another, "but that's not really consciousness."

So, what does IIT say consciousness really is? Koch reports:

>>> "IIT says fundamentally what consciousness is, is the ability of any physical system to exert causal power over itself."

I've heard Dan Dennett say that vigorous debates occur about whether tornadoes fit this kind of definition of consciousness. Their prior states influence their current and future states. That's a kind of causal power. They are also a physical system that acts as one thing even though none of the constituent parts act the way the system as a whole does. But does anyone really think a tornado is conscious? Koch continues:

>>> "My consciousness exists for itself; it doesn’t depend on you, it doesn’t depend on my parents, it doesn’t depend on anybody else but me."

This isn't strictly true, of course. Everything is interrelated. We have no evidence of any uncaused causes in this universe, so Koch's consciousness clearly depends on lots of outside factors. If I shouted that at him, would his consciousness be able to stop him from hearing it? I imagine that's not exactly what Koch meant, but between this and the similarity to Descartes' argument using God to see the world clearly and distinctly, IIT strikes me as practically a religious viewpoint. Tellingly enough, I found out that it is.

In an essay at Psychology Today titled, "Neuroscience's New Consciousness Theory Is Spiritual", there was this passage:
  • Most rational thinkers will agree that the idea of a personal god who gets angry when we masturbate and routinely disrupts the laws of physics upon prayer is utterly ridiculous. Integrated Information Theory doesn't give credence to anything of the sort. It simply reveals an underlying harmony in nature, and a sweeping mental presence that isn't confined to biological systems. IIT's inevitable logical conclusions and philosophical implications are both elegant and precise. What it yields is a new kind of scientific spirituality that paints a picture of a soulful existence that even the most diehard materialist or devout atheist can unashamedly get behind.

I'll let the "inevitability" of IIT's logical conclusions slide for now, but is this "sweeping mental presence" just another form of idealism, the view George Berkeley used to argue that the mind of God was everywhere and caused all things? It's not from the same source or for exactly the same reason, but it's related. As an essay at the Buddhist magazine Lion's Roar points out, "Leading neuroscientists and Buddhists agree: 'Consciousness is everywhere'." Here we find that:
  • Buddhism associates mind with sentience. The late Traleg Kyabgon Rinpoche stated that while mind, along with all objects, is empty, unlike most objects, it is also luminous. In a similar vein, IIT says consciousness is an intrinsic quality of everything yet only appears significantly in certain conditions — like how everything has mass, but only large objects have noticeable gravity.
  • In his major work, the Shobogenzo, Dogen, the founder of Soto Zen Buddhism, went so far as to say, “All is sentient being.” Grass, trees, land, sun, moon, and stars are all mind, wrote Dogen.
  • Koch, who became interested in Buddhism in college, says that his personal worldview has come to overlap with the Buddhist teachings on non-self, impermanence, atheism, and panpsychism. His interest in Buddhism, he says, represents a significant shift from his Roman Catholic upbringing. When he started studying consciousness — working with Nobel Prize winner Francis Crick — Koch believed that the only explanation for experience would have to invoke God. But, instead of affirming religion, Koch and Crick together established consciousness as a respected branch of neuroscience and invited Buddhist teachers into the discussion.
  • At Drepung Monastery, the Dalai Lama told Koch that the Buddha taught that sentience is everywhere at varying levels, and that humans should have compassion for all sentient beings. Until that point, Koch hadn’t appreciated the weight of his philosophy. "I was confronted with the Buddhist teaching that sentience is probably everywhere at varying levels, and that inspired me to take the consequences of this theory seriously," says Koch. "When I see insects in my home, I don't kill them."

These religious motivations don't necessarily mean that the motivated reasoning behind IIT is unsound. But it sure makes me skeptical. The cracks I see in IIT's logic—e.g. starting with seeing consciousness immediately and absolutely, making leaps from human experience to all experience, seeing islands of uncaused causes everywhere—are enough to give me pause. Despite all the fancy math plastered on top of these ideas, I'm still fundamentally unconvinced that consciousness is the integration of information, yet somehow "can't be computed and is the feeling of being alive." As for what I think consciousness really is, it's finally time for me to say. Hope I can get it down clearly!

What do you think? Is IIT flawed to you too? What useful concepts or calculations might it offer?

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
Consciousness 10 — Mind + Self
Consciousness 11 — Neurobiological Naturalism
Consciousness 12 — The Deep History of Ourselves
Consciousness 13 — (Rethinking) The Attention Schema

Consciousness 13 — (Rethinking) The Attention Schema

4/8/2020


 
Graziano with his ventriloquist puppet orangutan named Kevin. Consciousness studies sure do draw renegades.
In the last post, I noted that Dr. Ginger Campbell conducted one-on-one interviews with three prominent neuroscientists during the final episodes of her Brain Science podcast series on consciousness. We've already covered the first interview with Joseph LeDoux. Today, I'm going to go over the second interview with Michael Graziano about his book Rethinking Consciousness: A Scientific Theory of Subjective Experience. Graziano is currently a professor of Psychology and Neuroscience at Princeton University where he has had a lab studying consciousness since 2010. Here are the highlights from his interview:
  • In 10 years of lab work, I have worked to put my ideas into an evolutionary context (i.e. how they developed), in order to give us an idea of the components that go into this thing we call consciousness.
  • More and more, people in the science of consciousness are beginning to coalesce around a coherent set of ideas. My work fits into this growing standard model of consciousness. This core set of scientists realise that we are machines and the brain is an information processing machine that thinks it has magic inside it because it builds somewhat imperfect models of the world inside it. This includes Higher Order Thought Theory, Global Workspace Theory, and even some Illusionists who talk of consciousness as an illusion. My theory is not a rival to these. We are moving past rivalry and towards an integrating picture of it all.
  • The realisation is coming that everything you think derives from information. No claims can be put out by the brain without information upon which to base it. This is just basic logic. The question then is how and why did the brain construct a particular piece of information? The brain can construct all sorts of seemingly crazy ideas (e.g. “I have a squirrel in my head instead of a brain.”)
  • I study movement control, which requires a whole model. If the brain wants to control the arm, it needs a model of the arm. It needs an internal model, a simulation of what an arm is and where it is at any one time. This is an engineering perspective, which is useful for the study of consciousness. Similar to the moving arm, the brain is continually shifting its focus of attention. So, how do you control that? The same way as the arm. The brain needs a model or simulation of attention, of what it means to focus resources on something. (A minimal sketch of this control-by-internal-model idea appears just after this list.)
  • This is called “attention schema theory”, which follows the “body schema” developed 100 years ago. Phantom limbs are good examples of “body schema”. By analogy, there must be a schema for attention—the brain's model for seeing information and processing it deeply.
  • Like all complex traits, you can go back very, very far and see this gradual transition where it becomes impossible to draw a line and say “the trait exists after this but not before this.” For example, you couldn't draw clear lines in evolution for hands, feet, and flippers. Consciousness is the same.
  • I start with attention—a basic ability of a nervous system to focus on a few things at a time and process them deeply. Some forms of this attention go back possibly all the way to the beginnings of nervous systems. Attention is at the root of intelligence. At the heart of intelligence is a very pragmatic problem: you only have so much energy and space for a brain, but you need to use it as efficiently as possible to process deeply and intelligently. How do you do that? Don’t occupy the brain with processing all of the million and a half things going on around you. Focus on one or two things at a time. Without that level of attention, any kind of intelligence is impossible.
  • Attention comes in very early in evolution, and over time it becomes more and more complex. There’s central attention, sensory attention, more cognitive kinds of attention, and they emerge gradually over this sweep of history from about half a billion years ago up to the present. Piggybacking off of this, what people call consciousness also emerged, and also as a gradual process.
  • Attention can be separable from consciousness. At what point might it be consciousness?
  • Bodies have been involved from the beginning. Schemas only came once nervous systems were capable of building models of these bodies. A body schema stands hierarchically above the body. It isn’t the same thing, and they can be dissociated (e.g. phantom limbs). Similarly, this is the relationship between attention and consciousness. Attention is literally what the brain is focusing its resources on. The Attention Schema is what the brain thinks it is focusing its resources on, what the brain thinks focusing is, and what the brain thinks the consequences of focusing are. And those are dissociable too. Typically, they don’t come apart; they track quite well (like the body schema), but you can trick them and get them to peel off from one another.
  • Global Workspace Theory is basically a theory about attention. How do you become conscious of an apple you are looking at? GWT says you attend to the signals. They become stronger from your visual system at the expense of other signals. At some point, the signals become so strong that they reach a state called “ignition” when they can then influence wide networks around the brain. Now that attention has been reached, you can talk about it, you can move toward it, you can remember it later. The apple information reaches the global workspace and becomes available all around the brain systems. GWT says that is consciousness. The weakness of GWT is that it doesn’t explain why we claim to have a subjective experience. It doesn’t say why I have an inner experience of the apple.
  • The attention schema says great for GWT, but you need one more component—a system in the brain that says “Ah, I am attending to the apple. I have a global workspace that has taken in that apple information.” You need something in the brain that can model itself and build some kind of self-description. GWT is the attention. Attention Schema is the consciousness riding on top of that.
  • To control something, you need a model of it. But an overly complicated one is wasteful. A “cartoonish” one is good enough.
  • Why does it feel non-physical? This is one of the most successful points about the Attention Schema. The brain models itself, but it doesn’t need to include little physical details. It doesn’t need to know anything about the little implementation details. Therefore, the brain’s self-model depicts something that has no physical components. It depicts a vague non-physical thing that has a kind of location within us, but that’s the only physical property it has. Efficiency dictates the models be as stripped down as possible. This is why introspection, informed by internal models, tells us there is something inside us but it feels like a non-physical essence.
  • With this Attention Schema, we don’t need another explanation for the philosopher’s qualia because there it is. Chalmers, after the Hard Problem, now talks about the Meta Problem. The Hard Problem is how do we get qualia, or that inner subjective feeling. The Meta Problem is why do we think there is a Hard Problem? The Attention Schema solves the Meta Problem. It explains why people think there is this magical non-physical thing inside us. It does an end run around the Hard Problem.
  • The ability to attribute consciousness to others is important. In this evolutionary process, we start out evolving an ability to model and keep track of ourselves, which helps make predictions about ourselves and control our behaviour. At some point, as social interactions become more sophisticated, we develop the ability to use the same machinery to model others. This social use probably came in very early in evolution. There is a lot of sophistication in reptiles, birds, and mammals. We not only keep track of and model our own attention, but we keep track of and model others’ attention. That allows me to predict your behaviour.
  • Ventriloquist dummies are great examples of our souped-up drive to model conscious minds in the world around us.
  • We seem to model attention as if it were a fluid flowing out of people's eyes, which explains all kinds of folk beliefs about feeling eyes on the back of the neck, telekinesis, the Force in Star Wars, the evil eye, etc., etc.
  • Integrated Information Theory is kind of the opposite of this. IIT belongs to theories where you start with an axiomatic assumption. IIT starts with “consciousness exists” stating there is this non-physical feely thing inside us. The magical thing is there, so how does it emerge and under what conditions? So right from the outset there is a divergence. On my end, the starting point is that the brain cannot put out a claim unless there is information for that claim on which it is based. There is no reason to assume this information is accurate. When people feel they have magic, the job of scientists isn’t to find out how the brain produces magic; it’s to find out why the brain builds that model to describe itself. IIT is a fundamentally magical theory.
  • According to IIT, consciousness arises from information and everything in the universe has some information in it. So, you end up with panpsychism that consciousness exists in everything and everywhere. That seems like you’ve used faulty logic to paint yourself into a corner. If everything is conscious, what does consciousness even mean anymore?
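As promised in the movement-control bullet above, here is a minimal sketch of what "an internal model used for control" means in the engineering framing Graziano is borrowing from motor control. Everything in it (the class name, the gains, the one-dimensional arm) is my own illustrative invention rather than anything from Graziano's lab; the point is only the generic loop of predicting with a model, acting on the model, and correcting the model from feedback.

```python
class ArmModel:
    """A crude internal model of a one-dimensional arm: the brain's *belief*
    about where the arm is, which is not the same thing as the arm itself."""

    def __init__(self) -> None:
        self.predicted_angle = 0.0

    def predict(self, command: float) -> float:
        # Simplified belief: a motor command moves the arm by roughly that amount.
        self.predicted_angle += command
        return self.predicted_angle

    def correct(self, sensed_angle: float) -> None:
        # Nudge the belief toward what the senses actually report.
        self.predicted_angle += 0.5 * (sensed_angle - self.predicted_angle)


def control_step(model: ArmModel, target: float, sensed_angle: float) -> float:
    """One loop of model-based control: update the model from feedback,
    choose a command based on the model (not the raw arm), predict its effect."""
    model.correct(sensed_angle)
    command = 0.8 * (target - model.predicted_angle)
    model.predict(command)
    return command


if __name__ == "__main__":
    model = ArmModel()
    actual_angle = 0.0
    for step in range(6):
        command = control_step(model, target=1.0, sensed_angle=actual_angle)
        actual_angle += 0.9 * command  # the real arm responds imperfectly to commands
        print(step, round(actual_angle, 3), round(model.predicted_angle, 3))
```

The controller never touches the real arm's state directly; it acts on its internal model of the arm, which is the general relationship Graziano wants between attention (the real process) and the attention schema (the brain's model of it).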
(Not So Brief) Brief Comments

When Graziano opened his interview talking about putting consciousness into an evolutionary context, he had me hooked. When he stated the field was coalescing around a growing standard model of consciousness that brought together Higher Order Thought Theory, Global Workspace Theory, and even some Illusionists, I got excited because those were the theories I most agreed with in the prior posts in this series. When Graziano said this core set of scientists think that we are machines and the brain is an information processing machine that thinks it has magic inside it because it builds somewhat imperfect models of the world inside it, this made a lot of sense. But when Graziano tried to offer his picture to integrate all of this, he finally lost me. To see why, let me go through some of his points one by one.

>>> "No claims can be put out by the brain without information upon which to base it."

This is an excellent place to start. I'll use this later in the series when making connections between the evolution of consciousness and evolutionary epistemology, which charts the way knowledge-gathering has grown incrementally over evolutionary history.

>>> "If the brain wants to control the arm, it needs a model of the arm. It needs an internal model, a simulation of what an arm is and where it is at any one time. This is an engineering perspective, which is useful for the study of consciousness. Similar to the moving arm, the brain is continually shifting its focus of attention. So, how do you control that? The same way as the arm. The brain needs a model or simulation of attention, of what it means to focus resources on something. ... By analogy, there must be a schema for attention—the brain's model for seeing information and processing it deeply."

I believe Graziano is making a poor analogy here. When an arm moves, it moves through space and time by contracting muscles that cannot see anything. When a focus of attention shifts, no such physical movement or navigation issues occur. I think it's a mistake to think of models being required to control both of these different things in the same kind of way.

>>> "Attention is at the root of intelligence. At the heart of intelligence is a very pragmatic problem: you only have so much energy and space for a brain, but you need to use it as efficiently as possible to process deeply and intelligently. How do you do that? Don’t occupy the brain with processing all of the million and a half things going on around you. Focus on one or two things at a time. Without that level of attention, any kind of intelligence is impossible."

This isn't the way evolution works. It doesn't start with information about "a million and a half things" and then pare back from that. Early nervous systems would have begun by sensing just one or a few things, with lots of trial and error going on about which few things. The most successful senses would have been naturally selected for, and then gone on to (blindly) experiment with adding a few new bits of information to sense and process. This evolution never stops, but it only gets as far as it needs to in order to remain alive and reproduce. As Michael Ruse wrote in The Oxford Handbook of Philosophy of Biology, "Consider the much-discussed example of the frog, which snaps at anything suitably small, dark, and moving, regardless of whether it is frog food. A frog cannot discriminate between moving flies and small plastic pellets tossed in front of it no matter how many pass its way."

So, contrary to Graziano's claims, attention is NOT at the root of intelligence. And intelligence IS possible without attention. Intelligence can be very slowly built up by very narrow increments of additional information. Attention — the way that Graziano is using it — is really another word for choice, i.e. choosing which stimuli to "pay attention" to. But such choices do not need control; they can be made non-consciously by simply responding to the loudest signals, where evolutionary trials and errors shape what "loud signals" actually are. Think of the bees flying back from explorations for nectar and doing their waggle dance to "convince" others to "listen" to them. It's just the most excited dances that "get paid attention to" by the rest of the hive. That doesn't require conscious choice. So, it's not obvious to me that attention is what consciousness is or is required for.
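To make that concrete, here is a deliberately crude sketch of the kind of controller-free selection I mean. The foragers and their "excitement" numbers are made up for illustration; the point is just that picking the strongest signal requires no schema sitting on top of the process.

```python
# Made-up "dances": each forager advertises a nectar site with some level of excitement.
# Nothing here models real bees; it only shows selection without a controlling schema.
dances = {
    "clover patch": 0.4,
    "orchard": 0.9,  # the most excited dance
    "garden": 0.6,
}

# The "hive" simply follows the loudest signal. No model of attention is consulted;
# the selection is just a consequence of how the signals compare to one another.
chosen_site = max(dances, key=dances.get)
print(chosen_site)  # -> "orchard"
```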

>>> "A body schema stands hierarchically above the body. It isn’t the same thing, and they can be dissociated (e.g. phantom limbs). Similarly, this is the relationship between attention and consciousness. Attention is literally what the brain is focusing its resources on. The Attention Schema is what the brain thinks it is focusing its resources on, what the brain thinks focusing is, and what the brain thinks the consequences of focusing are."

I think there is an excellent point here about body schemas and brain schemas both being separate from the actual bodies and brains. I just don't think attention is at the heart of it.

>>> "Global Workspace Theory is basically a theory about attention. How do you become conscious of an apple you are looking at? GWT says you attend to the signals. They become stronger from your visual system at the expense of other signals. At some point, the signals become so strong that they reach a state called “ignition” when they can then influence wide networks around the brain. Now that attention has been reached, you can talk about it, you can move toward it, you can remember it later. The apple information reaches the global workspace and becomes available all around the brain systems. GWT says that is consciousness. The weakness of GWT is that it doesn’t explain why we claim to have a subjective experience. It doesn’t say why I have an inner experience of the apple."

>>> "The attention schema says great for GWT, but you need one more component—a system in the brain that says “Ah, I am attending to the apple. I have a global workspace that has taken in that apple information.” You need something in the brain that can model itself and build some kind of self-description. GWT is the attention. Attention Schema is the consciousness riding on top of that."

See? Graziano unwittingly contradicts himself here by describing GWT as the attention without the consciousness. All of the choices of attention can be made (through evolutionarily-learned ignition) without a schema sitting on top of them and controlling them. Again, I think he's right that a schema is needed, but it isn't about attention alone.

>>> To control something, you need a model of it. But an overly complicated one is wasteful. A “cartoonish” one is good enough.

I think this may be a big source of Graziano's errors on this. He is thinking like an engineer who is concerned with top-down "control" rather than thinking like an evolutionary biologist who sees bottom-up emergence. There is no top-down control or design in nature.

>>> "Why does it feel non-physical? This is one of the most successful points about the Attention Schema. The brain models itself, but it doesn’t need to include little physical details. It doesn’t need to know anything about the little implementation details. Efficiency dictates the models be as stripped down as possible."

This is more thinking like an engineer. Nature doesn't strip down; it builds up. And if more building provides an advantage, then that building up gets selected for. Why wouldn't an Attention Schema ever build up these little physical details? Graziano raises an excellent point, but I think there's a better answer just ahead.

>>> "The ability to attribute consciousness to others is important. In this evolutionary process, we start out evolving an ability to model and keep track of ourselves, which helps make predictions about ourselves and control our behaviour. At some point, as social interactions become more sophisticated, we develop the ability to use the same machinery to model others. This social use probably came in very early in evolution. There is a lot of sophistication in reptiles, birds, and mammals. We not only keep track of and model our own attention, but we keep track of and model others’ attention. That allows me to predict your behaviour."

Making models is vital, but I think Graziano has it backwards here. Life wouldn't have started with models of itself; it would have started with models of the outside world, with models of others. As we saw in my post about Antonio Damasio, "Valence / value evolved much earlier. Even bacteria can go toward food and away from danger." What is a model other than a set of if / then rules? What rules would a bacterium have in place about itself before it developed rules for going towards food and away from danger? I can't think of any.

Graziano says that "at some point, as social interactions become more sophisticated, we develop the ability to model others." But long before social interactions mattered, the predator / prey relationship would have dominated the natural selection of minds that could make models of others. And here is a big realisation. Those models... would not have had any physical inputs! To say it like a philosopher, I cannot know what it feels like to be a bat, but I may need to know how a bat might attack or elude me, so I will build a model in my head of that bat, even though I have no physical inputs into that model. In more philosophical jargon, the epistemic barrier created by living in a physical world where mental phenomena do not just leap across organisms is exactly the reason why our theories of minds have to feel non-physical.

[I feel like I hit on something big there.]

By the time our model-building of others could turn inwards, these models would have experienced a runaway arms race between predators and prey that shaped them into sophisticated, but non-physical, models. Such sophisticated external models would do just fine for understanding our internal selves, so there would be no need to develop a new model using all of the internal physical processes going on. In fact, there would likely be evolutionary harm to even try because the resources expended on such a project would be wasted with no chance to catch up to the existing model-making skill. (Note: even if the internal models were being built at the same time, the external ones would have faced much stiffer competition and developed more rapidly.)

>>> "With this Attention Schema, we don’t need another explanation for the philosopher’s qualia because there it is. Chalmers, after the Hard Problem, now talks about the Meta Problem. The Hard Problem is how do we get qualia, or that inner subjective feeling. The Meta Problem is why do we think there is a Hard Problem? The Attention Schema solves the Meta Problem. It explains why people think there is this magical non-physical thing inside us. It does an end run around the Hard Problem."

As we saw in my post about Chalmers, that's not an accurate description of the Hard and Meta problems. You can't make an "end run" around the Hard Problem, and Chalmers doesn't consider the Meta Problem to be a way past it. (He called it another "easy" problem about behaviour.) I think my explanation works better as to why this magical thing inside of us feels non-physical. And in any case, no answer will ever settle all the whys behind the Hard Problem.

>>> "We seem to model attention as if it were a fluid flowing out of people's eyes, which explains all kinds of folk beliefs about feeling eyes on the back of the neck, telekinesis, the Force in Star Wars, the evil eye, etc., etc."

I think Graziano is mixing up the possible uses of attention here. His Attention Schema is about choosing to pay attention to *some* senses rather than others. Modelling the attention of another being is about modelling *everything* that that being can see. We model the fluid as if it were on all the time, not as if it were being paid attention to only occasionally. My idea — let's call it an ExteroSchema for now — may still build its model of vision as a fluid flowing out of others' eyes. That might be the easiest way to do it and it's a cool explanation of that range of folk beliefs.

>>> "IIT is a fundamentally magical theory." 


Graziano finishes with a critique of Integrated Information Theory that sounds pretty dismissive. Our next post will be all about IIT, though, so I look forward to diving in and seeing how it is presented by a strong proponent.

What do you think? Do you agree with me that Graziano has some evolutionary ideas backwards? Does my explanation of modelling others first make more sense? I'd love to hear what you think of this in the comments below.

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
Consciousness 10 — Mind + Self
Consciousness 11 — Neurobiological Naturalism
Consciousness 12 — The Deep History of Ourselves
8 Comments

Consciousness 12 — The Deep History of Ourselves

4/6/2020

1 Comment

 
Picture
Good thing that soul patch is only used in one of his specialties.
We're in the home stretch now for this series on consciousness. In the last three posts, I went over the summaries of books that Dr. Ginger Campbell provided in one of her Brain Science podcast episodes. That one episode was particularly useful, but it was just the first of a four-part series on consciousness. The next three episodes were one-on-one interviews with three more neuroscientists about their own studies of consciousness. Those interviews will provide the last three pieces of external research for my series.

The first interview was with Joseph LeDoux about his book The Deep History of Ourselves: The Four-Billion-Year Story of How We Got Conscious Brains. What a great evolutionary title! LeDoux is a Professor of Neural Science and Psychology at NYU who has spent the last thirty years studying the brain mechanisms of fear and emotional memory. He's also the guitarist and songwriter for a funky band called The Amygdaloids, who gave us the hep-cat, jazzy, yet informative little number Fearing. (Pretty awesome.) For a more straightforward lesson about consciousness, however, here are the highlights from LeDoux's interview with Dr. Campbell:
  • Higher-order representation is the category LeDoux prefers from among the 20 different theories of consciousness.
  • How far back in evolution does the ability to detect and respond to danger go? Other nonhuman animals do this. Even bees. But it’s much older still. Protozoa like paramecia or amoeba do it. Even bacteria do. In fact, it goes all the way back to the beginning of life.
  • It's not just detecting danger either — incorporating nutrients, balancing fluids and ions, thermoregulation, reproduction for the species to survive — all of these behaviours exist in animals, but also in single-cell microbes. Value / valence / affect has also been present since the beginning of life (e.g. bacteria swim toward or away from things).
  • So, behaviour and even learning and memory do not require nervous systems.
  • When we do those things, we have subjective experiences about them, but those subjective experiences are not essential to the actions.
  • What is the relationship between behaviour and consciousness? We see behaviour in others, so we attribute to them the same thoughts and feelings that we have. This makes sense for other human brains, but the inference gets weaker the more dissimilar other brains are from ours.
  • When we detect danger, we feel fear. But that may not always be the case. Split-brain cases show one side getting a signal and the body acting, but then the other side can’t say why.
  • I hypothesised that emotional systems could generate non-conscious behaviours. I was able to trace the pathways through the amygdala to do this. Other research showed the amygdala is involved in implicit / non-conscious memories as opposed to conscious memories about detecting and responding to danger. I used this model for memories and applied it to emotions—i.e. implicit vs. explicit emotions. I thought of conscious explicit emotions as the product of cortical areas. Non-conscious emotions come out of the amygdala. The amygdala doesn’t experience fear; it just produces responses.
  • When stimuli are presented to patients, but masked so they can’t detect it consciously, the visual cortex and amygdala are activated and that’s it. When the stimulus is not masked, you get activation in the visual cortex, the amygdala, and the prefrontal cortex as well. ... In order to be conscious of an apple, it not only needs to be represented in your visual cortex, it needs to be re-represented, which involves the prefrontal cortex. ... So, the prefrontal cortex is emerging as an important area in the consolidation of our conscious experiences into what they are.
  • In other words, the ability to respond to and detect danger may be as old as life, but the feeling of fear may be a much more recent addition.
  • [Here's my 1st crazy idea.] What came first was cognition, not emotion. I’m defining cognition as the ability to form internal representations of stimuli and to perform behaviours based on those representations. Cues are enough to stimulate the behaviour independent of the presence of the stimuli themselves. The representation alone is enough to guide the behaviour. [See the toy sketch just after this list.] That capacity exists in invertebrates, and on into all vertebrates, e.g. fish and reptiles. When you get to mammals, you have a much more complex form of cognitive representation, where it begins to look deliberative, i.e. the ability to form mental models that can predict things that are not currently present. It’s a much more complicated thing than having a static memory of what was there.
  • We assume that because mammals behave in much the same way that we do, they must be experiencing the same things. But the amygdala example of fear gives us some reason to be cautious about that. The short summary is that you should actually assume behaviour is unconscious unless proven otherwise.
  • In humans, we all know that we have these conscious experiences. In an experiment, we ask, “Can the response in this experiment be explained by a conscious state?” We have to rule out that the response is coming from a non-conscious state. But we have a vast cognitive unconscious repository of information that allows us to get through the day without having to consciously evaluate everything we do (e.g. speaking grammatically, anticipating what we are looking at before we see it, completing patterns on the basis of limited information). Experiments can separate these conscious and non-conscious responses, and such experiments have indeed been done.
  • The gold standard for whether a response is conscious or not is whether you can talk about it. This doesn’t mean language and consciousness are identical, just that you have access to the experience to think about it (and we use language to discuss that access with one another). In non-human animal research, that doesn’t exist. It would be good for animals if we treated them as if they had conscious experiences, but it’s not a scientific demonstration to watch behaviour and say that they do.
  • Darwin, when faced with resistance about humans evolving from animals, responded not by saying that people have bestial qualities, but by saying that animals have human qualities. This set the debate on a track that has been difficult to get past. There was tremendous anthropomorphism in the late 19th century. That led to the radical behaviourist movement in psychology, where all cognitive experience was eliminated from research. The cognitive revolution brought back the mind, but as an information processing system with inputs being conscious and unconscious. This gave us the “cognitive unconscious”, a middle ground between the behaviourists' stark choice of conscious beings vs. reflex machines.
  • Anthropomorphism may be an important innate human quality, but that doesn’t mean it’s an accurate concept. And maybe we just can’t know either.
  • As a brief aside, the concepts of the limbic system, the triune brain, and the serial evolution of additive brain functions are all outdated now.
  • [Here's my 2nd crazy idea.] Emotions are not initially a product of natural selection. Emotions are conscious experiences constructed by cognitive processes. The possibility then exists that the cognitive abilities that are unique in the human brain might be responsible for those emotions. Maybe emotions came in with the early humans. Maybe they came in as byproducts, or what Stephen Jay Gould called exaptations. If this cognitive model is correct, then emotions are based on mental schema (bodies of memories about certain categories of experiences), for example, a fear schema. When in danger, a template is activated. This has implications for treating emotional problems with medicine. For example, people taking medicine for social anxiety find it easy to go to parties (they are less timid), but they still feel anxious once they're there. ... Drugs alone won’t be enough to treat problems. Cognitive Behavioural Therapy is required in the end.
  • A particular human experience is where you know the experience is happening to you. We can’t rule that out in other animals, but neurological evidence suggests that it’s not happening. This "autonoetic consciousness" represents the view of the self as the subject. It enables mental time-travel (i.e. you can review past experiences and possible future states). Other animals can learn from the past, but in a simple way. They can also have shifts in perspectives to those of others, but they don’t have this notion of the self that is part of these experiences. Non-conscious alternatives can always account for the behaviour in animals.
  • Every person has the same human brain. There are things in our prefrontal cortex, structures (“frontal pole”), and connections that are unique to humans. But mice have their own unique brain area. Other animals may also have their own unique ways of experience. We have to be subtle and not simply say conscious or nonconscious. Consciousness isn’t one thing. There's autonoetic consciousness. There's noetic consciousness (an awareness of facts and the world). Working memory, for example, is very similar in other primates but not other mammals. There's anoetic consciousness, which is a body awareness (i.e. Jaak Panksepp's core consciousness, which is a primitive, almost unconscious level of consciousness). Understanding brain structures and pathways might help us understand what forms of consciousness are possible, even if we can never measure it.
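Here is the toy sketch I promised above, contrasting a pure reflex with LeDoux's definition of cognition: acting on a stored internal representation even when the stimulus itself is no longer present. (This is entirely my own illustration, not anything from LeDoux; the names and values are made up.)

```python
# A toy illustration of acting on an internal representation rather than on
# a currently present stimulus. All class and variable names are placeholders.

class ReflexAgent:
    """Responds only when the stimulus is actually there."""
    def act(self, food_visible_at):
        return f"approach {food_visible_at}" if food_visible_at else "do nothing"

class RepresentationAgent:
    """Stores an internal representation and can act on it later, cue-driven."""
    def __init__(self):
        self.remembered_food_location = None

    def perceive(self, food_visible_at):
        if food_visible_at:
            self.remembered_food_location = food_visible_at  # build the representation

    def act_on_cue(self):
        # The representation alone guides behaviour; no food needs to be in view.
        if self.remembered_food_location:
            return f"travel to {self.remembered_food_location}"
        return "explore"

agent = RepresentationAgent()
agent.perceive(food_visible_at="north corner")  # stimulus present once
print(agent.act_on_cue())                       # later, acts with no stimulus present
```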

​Brief Comments
LeDoux seems to draw a pretty narrow definition around consciousness, but then shows the clear evolutionary history of aspects of consciousness along the way, and really advocates for a more subtle use of the term. I'll present my own subjective labelling system for all this at the end of the series (because we sure could use another!), but hopefully the facts contained within that system will be uncontroversial, and they will surely draw on LeDoux's work.

Like Damasio, whose strange inversion was that emotions preceded feelings, LeDoux's first crazy idea is his own inversion, where he says cognition preceded emotion. In one respect, these guys are actually saying the same thing: that the "subjective experience of moods" came last. But Damasio calls that "feelings" while LeDoux calls it "emotion". Clearly there is a split here between the chemical changes that cause behaviour and the subjective experience of those changes, but it's frustrating that the field hasn't settled on consistent terminology for what sits on each side of this divide, which makes discussing these ideas much more difficult. (It's another good example of the value that philosophers of science can offer to scientists.)

What I don't see from LeDoux in this crazy idea is any discussion of affect or value. The amygdala may be able to non-consciously produce behaviour in response to stimuli. It may even learn to do this differently throughout a lifetime. But it could only do so (successfully) by valuing some responses positively and others negatively. Since LeDoux does state that valence goes all the way back to the beginning of life, maybe he just lumps this in as part of "cognition", which then looks even more like Damasio's "emotions". In that case, the two men's claims about what came first during evolution may actually converge on the same thing.

As for LeDoux's second crazy idea, it's hard for me to see how he can advocate for the need for Cognitive Behavioural Therapy to regulate emotional feelings, but then suggest that these emotional feelings weren't initially a product of natural selection. Perhaps it comes down to how narrowly one defines "initially", but if CBT can improve one's life, then it sure seems plausible that the advent of emotional feelings would have provided an advantage that could have been selected for. Maybe I'm just being overly critical of anyone quoting Gould, though, since I'm of the opinion that he generally lost the Darwin Wars.

Finally, as an evolutionary thinker, I note that LeDoux offers a really good critique of anthropomorphism and the role that Darwin may have played in sending us down that path. Such attributions to non-human animals can obviously be taken too far. But so can anthropodenial (as Frans de Waal has coined it) by people who go in the other direction and tout human exceptionalism. I really appreciate LeDoux's openness about this and his search for hard evidence. I also like his recognition that it would be better for us to treat animals as if they had valuable internal experiences, since we are currently faced with the barrier that we may never know about that. So, one form of human exceptionalism that exists may just be that we are profoundly ignorant of life...except for what we can know about ourselves. Perhaps it would be better to pay attention sometimes to that wide ignorance rather than any narrow knowledge.


What do you think? Are LeDoux's two crazy ideas really that crazy? What else jumped out at you from his deep history of ourselves?

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
Consciousness 10 — Mind + Self
Consciousness 11 — Neurobiological Naturalism
1 Comment

Consciousness 11 — Neurobiological Naturalism

4/4/2020

5 Comments

 
Picture
Picture
Two books that look pretty applicable to this series...

In the last post, I mentioned that Dr. Ginger Campbell reviewed three books about consciousness written by neuroscientists in her magnificent Brain Science podcast. The first two were written by Stanislas Dehaene and Antonio Damasio, which I covered in the last two posts. Now, we get to a book written by Todd Feinberg and Jon Mallatt called Consciousness Demystified. This is their most recent book, published in 2018, so that's the one Campbell covered in depth. However, since it is a refined and perhaps popularised version of the book they published in 2016 called The Ancient Origins of Consciousness (which sure sounds appropriate for this series), I thought I should pull a couple of summary points from that book too. Here, then, are the most important items I found:
  • Feinberg and Mallatt use a much broader view of consciousness than Dehaene or Damasio.
  • They use the term "neurobiological naturalism" to address the hard problem, which is an elaboration of John Searle’s biological naturalism.
  • F&M's goal is to bridge the gap between what the brain does and subjective experience.
  • Neurobiological naturalism rests on three principles: 1) Life. F&M say consciousness is grounded in the unique features of life. 2) Neural features. This consciousness correlates with neural activity. 3) Naturalistic manner. Nothing supernatural is needed.
  • Primary consciousness is broken down into three elements: 1) Exteroceptive—Damasio’s mapping of the outer world. 2) Interoceptive—signals from inside the body. 3) Affective—the experience of feeling, emotion, or mood.
  • The intercommunicating axons of affective pathways branch a lot more than in the exteroceptive pathways, sending signals to many different parts of the system. Another difference is that affective circuits communicate less through short-distance neurotransmitter chemicals and more through far-diffusing neuromodulator chemicals than do exteroceptive circuits.
  • Four problems arise then: 1) Referral—we don’t experience anything inside our brain; it’s all referred to the outside world or to our bodies. 2) Mental unity—how is it all put together into a single experience? 3) Mental causation—how do thoughts cause action? 4) The perceived qualia of objects.
  • Breaking the hard problem into four smaller problems makes things more manageable.
  • E.g. mental unity is a process, not locatable to a single brain region. It requires synchronised oscillations to unify multiple networks.
  • There is evidence that all vertebrates and some invertebrates enjoy consciousness. This is from a combination of anatomical and behavioural evidence, including operant learning.
  • F&M see qualia (subjective experience) as having two unique features: 1) a unique neurobiology; and 2) the fact that they are exclusively first-person. Therefore, we need two answers. They argue that first-person subjectivity comes from 1) the life process, combined with 2) the neurobiological pathways.
  • Responding to Chalmers' famous question "Why is experience one way rather than another?" they write: "Our theory of neurobiological naturalism argues that animal experience is fundamentally and inextricably built on the foundation of life. Therefore, we must distinguish purely computational mechanisms, for example computers and any other known non-living computational device, as well as cognitive theories of consciousness that likewise centre on information processing, from the theories that invoke the biological and neural properties of a living brain. We hypothesise that experience and qualia are living processes that cannot be explained solely by non-biological computation. Our view of the hard problem begins and rests on the essential role that biology plays in making animal experience and qualia possible."
  • There are several keys to the mystery of consciousness and subjective experience. One is that consciousness is incredibly diverse, coming from a multi-factorial combination of life and various unique neurobiological structures and processes. They also argue that qualia should not be treated as a single thing and that subjective experiences emerge when a sufficient level of neural complexity evolves. They argue repeatedly that the neurobiological problems should NOT be conflated with the philosophical problem.

  • In The Ancient Origins of Consciousness, Feinberg and Mallatt contend that consciousness is about creating image maps of the environment and oneself. But systems that do it with orders of magnitude less sophistication than humans can still trigger our intuition of a fellow conscious being.
  • After assembling a list of the biological and neurobiological features that seem responsible for consciousness, and considering the fossil record of evolution, Feinberg and Mallatt argue that consciousness appeared much earlier in evolutionary history than is commonly assumed. About 520 to 560 million years ago, they explain, the great “Cambrian explosion” of animal diversity produced the first complex brains, which were accompanied by the first appearance of consciousness. Simple reflexive behaviours evolved into a unified inner world of subjective experiences. From this they deduce that all vertebrates are and have always been conscious—not just humans and other mammals, but also every fish, reptile, amphibian, and bird. Considering invertebrates, they find that arthropods (including insects and probably crustaceans) and cephalopods (including the octopus) meet many of the criteria for consciousness. The obvious and conventional wisdom–shattering implication is that consciousness evolved simultaneously but independently in the first vertebrates and possibly arthropods more than half a billion years ago.
  • To Feinberg and Mallatt, real consciousness is indicated by the optic tectum making a multi-sensory map of the world, attending to the most important object in this map, and then signalling behaviours based on the map.
  • Isomorphic maps are the cornerstone of image-based sensory consciousness. These maps evolved in early vertebrates more than 520 million years ago, and this process was the natural result of the extraordinary innovations of the camera eye, neural crest, and placodes. These events led to the mental images that mark the creation of the mysterious explanatory gaps and the subjective features of consciousness.
  • The Defining Features of Consciousness are: Level 1) General Biological Features: life, embodiment, processes, self-organising systems, emergence, teleonomy, and adaptation. Level 2) Reflexes of animals with nervous systems. Level 3) Special Neurobiological Features: complex hierarchy (of networks); nested and non-nested processes, aka recursive; isomorphic representations and mental images; affective states; attention; and memory.
  • The Ancient Origins of Consciousness does not address higher levels of consciousness: full-blown self-awareness, meta-awareness, recognition of the self in mirrors, theory of mind, access to verbal self-reporting.

​Brief Comments
These books are apparently rammed full of good details about the internal brain structures involved with lots of discretely-named aspects of consciousness, and the evolutionary history of these anatomical features. That's certainly helpful for my project. However, the philosopher in me also can't help agreeing with the top Amazon review for Consciousness Demystified, which called it a disappointing bait and switch. The reviewer said, "In other words, in spite of their stated 'main goal' to address the explanatory gap between a third-person, objective description of how the brain works and the mystery of why that gives rise to (or amounts to) subjective, conscious experience, in fact they finally conclude that this explanatory gap is only a 'philosophical problem' instead of a 'neurobiological problem' and thus not really what their book was ever intended to explain anyway."

I have already gone over how the "philosophical problem" raised by Chalmers is actually an impossible problem, so it doesn't bother me that Feinberg and Mallatt didn't tackle it. But by naming their books as they have, and promising early on to clear up the so-called hard problem, Feinberg and Mallatt have disappointed more than a few readers. Then, by merely asserting that consciousness only arises from natural living processes, they lose credibility by failing to acknowledge (as Searle did) the possibility that alternate arrangements of matter, other than biological brains, could bring forth consciousness. While I'd still put money on the uniqueness of biology leading to the uniqueness of the consciousness that we recognise (think about how that consciousness changes with tiny changes in the biology), I don't pretend that this is a sure bet.

Feinberg and Mallatt's addition of "affect" to the mix of "exteroception" (what Damasio calls mind) and "interoception" (what Damasio calls self) is interesting, but probably due to their expanded conception of consciousness. I agree with them that affect is certainly part of the full range of experiences that can get lumped into "consciousness", but the note about how the affective circuits communicate "through far-diffusing neuromodulator chemicals" reminds me of the brain being awash in an emotion, which presumably Damasio would say can occur in a non-conscious fashion. That would be why it is not part of his more limited definition of consciousness.

What do you think? Did anything else in Feinberg and Mallatt's research or hypotheses add to your thinking about consciousness? As always, let me know in the comments below.

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
Consciousness 10 — Mind + Self
5 Comments

Consciousness 10 — Mind + Self

4/2/2020

5 Comments

 
Picture
Photo by Alberto Gamazo (https://is.gd/KVwanB)
In the last post, I noted that I was going to be relying on Dr. Ginger Campbell's Brain Science podcast for summaries of the latest work on consciousness by neuroscientists. She kicked off her recent four-part series on consciousness with an episode called What is Consciousness? where she gave summaries of some of the latest and best books on this subject. Three of the five books she covered were written by neuroscientists. (The other two were by Sean Carroll and Dan Dennett, whom I've already covered.) The first of those was by Stanislas Dehaene, which I discussed in the last post. Next up is Antonio Damasio's book The Strange Order of Things: Life, Feeling, and the Making of Cultures. Here are the most important points from that:
  • Damasio defines consciousness as: mind + self.
  • A mind emerges from the brain when an animal is able to create images and to map the world and its body.
  • Consciousness requires the addition of self-awareness. This begins at the level of the brain stem, with “primordial feelings.” The self is built up in stages starting with the proto self made up of primordial feelings, affect alone, and feeling alive. Then the core self is developed when the proto self is interacting with objects and images such that they are modified and there is a narrative sequence. Finally comes the autobiographical self, which is built from the lived past and the anticipated future.
  • Mind precedes consciousness.
  • Consciousness includes wakefulness, mind, and self.
  • Consciousness is the feeling that my body exists independent of other objects.
  • Affect or feelings came first. Long before consciousness. (A la Panksepp.) Feelings evolve from homeostatic signals and so affect evolved very early. Damasio called this “the strange order of things” because it’s the opposite of what many scientists assume.
  • Damasio stresses the importance of embodiment because homeostasis is the primary mechanism driving life. Feelings are mental experiences that are conscious by definition. The emotive responses triggered by sensory stimuli are the qualia of philosophical tradition. This subjectivity is the critical enabler of consciousness.
  • Emotions are chemical reactions. Feelings are the conscious experience of emotions. (This can be slightly confusing as it is not always used consistently in Damasio's work.)
  • Early life was regulated without feelings and there was no mind or consciousness. Then, during the Cambrian explosion, vertebrates appeared and all vertebrates have feelings.
  • Valence / value evolved much earlier. Even bacteria can go toward food and away from danger.
  • Feelings are not neural events alone. They are interpretations of body signals (such as a fast heartbeat). Feelings are, through-and-through, simultaneously, and interestingly, phenomena of both bodies and nervous systems.

For just a bit more on this, Antonio Damasio gave a TED talk in 2011 called The quest to understand consciousness. Here are a few extra details from slides he used during this talk:
  • Three levels of self to consider: proto self, core self, and autobiographical self.
  • Autobiographical self has prompted: extended memory, reasoning, imagination, creativity, and language.
  • Out of these came the instruments of culture: religions, justice, trade, the arts, science, and technology.

​Brief Comments
I may be jumping the gun here, but Damasio's distinction between the mind and the self appears to me to map neatly onto the two brain networks that scientists recently showed are key to consciousness. The DAT (dorsal attention network) sounds like it produces the streaming images of the outside world, which Damasio calls mind. And the DMN (default mode network) monitors the internal states of our bodies, generating the sense of a relatively stable but historically changing identity, which Damasio calls the self. As the article I linked to says, consciousness is reported when the DAT and DMN are both activated. In other words, when both mind and self are active. This is something to consider as we go forward. (And, by the way, default mode networks have been detected in macaques, chimpanzees, and even rats.)

I also like Damasio's distinctions between emotions, feelings, and valences. This fits very well with my own system for mapping cognitive appraisals (i.e. judging if something is good, bad, or unknown, aka valenced) onto different events in the past, present, or future, in order to generate the things we typically call emotions (but which Damasio would distinguish as feelings). I can certainly get behind his distinction here. I could also adopt his labelling. And I think he's got "the strange order of things" right by saying the chemical emotional responses would have come first, before the feelings in our self became able to identify them. This would clearly be the order of things in a material universe where physics led to chemistry, biology, and then psychology. This is another thing to consider as we put together the evolutionary story of consciousness.
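For anyone who likes to see the structure of that appraisal grid laid out explicitly, here's a toy sketch. (The specific emotion words below are rough placeholders for illustration only, not my considered labels, which will come later in the series.)

```python
# A toy sketch of the appraisal grid described above: a valence (good / bad /
# unknown) applied to an event in the past, present, or future yields an
# emotion-like label. The labels are placeholder guesses for illustration only.

APPRAISAL_GRID = {
    ("good", "past"):       "satisfaction (placeholder)",
    ("good", "present"):    "joy (placeholder)",
    ("good", "future"):     "hope (placeholder)",
    ("bad", "past"):        "regret (placeholder)",
    ("bad", "present"):     "distress (placeholder)",
    ("bad", "future"):      "fear (placeholder)",
    ("unknown", "past"):    "puzzlement (placeholder)",
    ("unknown", "present"): "curiosity (placeholder)",
    ("unknown", "future"):  "uncertainty (placeholder)",
}

def appraise(valence: str, timeframe: str) -> str:
    """Map a (valence, timeframe) appraisal onto a feeling-level label."""
    return APPRAISAL_GRID[(valence, timeframe)]

print(appraise("bad", "future"))  # "fear (placeholder)"
```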

Finally, I'll just explain the brief reference Damasio made to Panksepp. In my first peer-reviewed philosophy paper about Bridging the Is-Ought Divide, I mentioned Panksepp's work when I said: "Evolutionary neuroscientist Jaak Panksepp of Bowling Green State University has identified seven emotional systems in humans that originated deeper in our evolutionary past than the Pleistocene era. The emotional systems that Panksepp terms Care (tenderness for others), Panic (from loneliness), and Play (social joy) date back to early primate evolutionary history, whereas the systems of Fear, Rage, Seeking, and Lust, which govern survival instincts for the individual, have even earlier, premammalian origins." I cited this work as potential evidence for the evolution of morality from care of the self to care for others, but of course it is also evidence of the development of the concept of the self too.

What do you think? Do Damasio's distinctions make sense to you? Do they map onto concepts you find helpful or not? Let me know what you think of this in the comments below.

--------------------------------------------

Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
5 Comments
