
Consciousness 14 — Integrated Information Theory

4/11/2020

IIT. Simple summary. Devil in the details.

We're finally here! The end of my literature review on consciousness. In the last post, we heard Michael Graziano lump the work of all of the other neuroscientists I've profiled into one "growing standard model." That model is by no means comprehensive of the entire field, so there are still people working outside of it, but there was one particularly glaring theory that Graziano went out of his way to exclude — Integrated Information Theory (IIT). In the final interview in her four-part series on consciousness, Dr. Ginger Campbell spoke with one of the leading proponents of IIT, Christof Koch, about his latest book The Feeling of Life Itself: Why Consciousness is Widespread but Can't Be Computed. There's a lot to consider here, so let's get to the highlights:
  • My background is in physics and philosophy. I worked with Francis Crick after his Nobel Prize. We looked for “the neural correlates of consciousness,” i.e. what are the minimal physical / biophysical neuronal mechanisms that are jointly necessary for any one conscious perception? What is necessary for me to “hear” that voice inside my head? Not necessarily to sense it, or process it, but to have that experience.
  • We now know it’s really the cortex—the outer-most shell of the brain, size and thickness of a pizza, highly convoluted, left and right hemispheres, the most complex and highly organised piece of matter in the known universe—which gives rise to consciousness.
  • This study of the neural correlates of consciousness is fantastic. For example, whenever you activate such and such neurons, you see your mom’s face or hear her voice. And if you artificially stimulate them, you will also have some vague feeling of these things. There is no doubt that scientists have established this close one-to-one relationship between a particular experience and a particular part of the brain.
  • Correlates don’t, however, answer why we have this experience. Or how. Or whether something like a bee can be conscious. For mammals it's easy to see the similarity to ourselves. But what about the further away you go? Or what about artificial intelligence? Or how low does it go? Panpsychism has said it is everywhere. Maybe it is a fundamental part of the universe.
  • To answer these questions, we need a fundamental theory of consciousness.
  • I’ve been working on this theory with Giulio Tononi, which is called the Integrated Information Theory.
  • IIT goes back to Aristotle and Plato. In science, something exists to the extent that it exerts causal power over other things. Gravity exists because it exerts power over mass. Electricity exists because it exerts power over charged particles. I exist because I can push a book around. If something exerts no causal power over anything in the universe, why postulate that it exists?
  • IIT says fundamentally what consciousness is, is the ability of any physical system to exert causal power over itself. This is an Aristotelian notion of causality. The present state of my brain can determine one of the trillion future states of my brain. One of the trillion past states of my brain can have determined my current state so it has causal power. The more power the past can exert over the present and future, the more conscious the thing that we are talking about is.
  • In principle, you can measure this causal power for any system. The exact amount, a number we call phi, is a measure of how much things exist for themselves, and not for others. My consciousness exists for itself; it doesn’t depend on you, it doesn’t depend on my parents, it doesn’t depend on anybody else but me. (A toy numerical sketch of this phi idea follows just after this list.)
  • Phi characterises the degree to which a system exists for itself. If it is zero, the system doesn’t exist. The bigger the number, the more the system exists for itself and is conscious in this sense. Also the type and quality of this conscious experience (e.g. red feels different from blue) is determined by the extent and the quality of the causal power that the system has upon itself.
  • Look for the structure within the brain, or the CPU, that has the maximal causal power, and that is the structure that ultimately constitutes the physical basis of consciousness for that particular creature.
  • How does this relate to panpsychism? They share some intuitions, but also differ. One of the great philosophical problems with panpsychism is the superposition problem. I’m conscious. You are conscious. Panpsychism says there should be an uber-consciousness that is you and me. But neither of us have any experience of that. Also, every particle of my body has its own consciousness, and there is the consciousness of me and the microphone, or my wife and whatever, or even me and America. But there isn’t anything of what it feels like to be America. This is the big weakness of panpsychism.
  • IIT solves the superposition problem by saying only the maximum of this measure of IIT exists. Locally, there is a maximum within my brain or your brain. But the amount of causal interaction between me and you is minute compared to the massive causality within. Therefore, there is you and there is me.
  • If we ran wires between two mice or two humans, IIT predicts some things. For example, between my left and right hemispheres there is a bundle of connections called the corpus callosum. If you cut it, you get split-brain syndrome—two conscious entities. Now imagine doing the opposite: building an artificial corpus callosum between my brain and your brain. If you added just a few wires, I would slowly start to see some things that you see, but there would be no confusion as to who is who. As more wires are added, though, IIT says there is a precise point in time when the phi across this combined system will exceed the information within either single brain, and at that point, the individuals will disappear and a new conscious entity will arise.
  • What is right about this as opposed to the Global Neuronal Workspace Theory or other approaches? GNWT only claims to talk about those aspects of consciousness that you can actually speak about. This is called "access consciousness." Once information reaches the level of consciousness, all areas of the brain can use it. If it remains non-conscious, only certain parts of the brain use it.
  • There is an "adversarial collaboration" just beginning where IIT and GNWT proponents have agreed on a large set of experiments to see which theory is supported by fMRI, EEG, subjective reporting, etc. In principle this will be great, but practically, we will see.
  • Where the theories really disagree is on the fundamental nature of consciousness. GNWT embodies the dominant zeitgeist (Anglo-Saxon philosophy, scientists, Silicon Valley, sci-fi, etc), which says if you build enough intelligence into a machine, if you add feedback, self-monitoring, speaking, etc, sooner or later you will get to a system that is not only intelligent, but also conscious. Ultimately, consciousness is all about behaviour. It’s a descendant of behaviourism saying behaviour is all we can talk about.
  • The other view says no, consciousness is not magical, it’s a natural property of certain systems, but it’s about causal power. To the extent you can build something with causal power, that will be conscious, but you cannot simulate it. E.g. weather simulations don’t cause your computer to get wet. The same thing holds for perfect simulations of the human brain. The simulation will say it is conscious, but it will all be a deep behavioural fake. What you have to do is build a computer in the image of a brain with massive overlapping connectivity and inputs. In principle, this could give rise to consciousness.
  • Could a single cell or an atom be conscious? In the limit, it may well feel like something to be a bacterium. It doesn’t have a psychology, or feel fragile or hungry, etc. But there are already a few billion molecules and a few thousand proteins in it. We haven’t yet modelled this, but yes, most biological systems may feel like something.
  • Has any consciousness of my mitochondria been subsumed into my own? Yes. On its own, a mitochondrion has phi, but IIT says that once it is put together with something else, that consciousness dissolves. If your brain is disassembled, for example when you die, there may be a few fleeting moments where each part again feels like something. In each case you have to ask what is the system that maximises the integrated information. Only that system exists for itself, is a subject, and has some experience. The other pieces can be poked and studied, but they aren’t conscious.
  • The zap and zip technique is being used to look for consciousness in patients who may be locked in or incompletely anesthetised. You zap the brain, like striking a bell, and look at how much information reverberates around the brain. A highly compressed response, one that is “zipped up” so there is almost no information in it, indicates unconsciousness (or even death if there is no response at all), whereas a widespread, varied response indicates consciousness. This is progress on the mind-body problem. (Note: you don’t have to believe in IIT or GNWT to use this. A rough back-of-the-envelope version of the idea also follows this list.)
  • Right now, we don’t have strong experimental evidence to think that quantum physics has anything to do with the function of brain systems. Classical physics is enough to model everything so far, but you still have to keep an open mind since we don’t understand all of the causation involved.
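To make the phi idea a bit more concrete, here is a toy numerical sketch. To be clear, this is my own illustration and not Tononi's actual algorithm, which involves cause-effect repertoires, minimum-information partitions, and a lot more machinery. All it does is compare how much a tiny two-node binary network's past state tells you about its present state when you treat the network as a whole versus when you cut it into its two parts, assuming a uniform prior over past states. The gap between those two numbers is the crude whole-versus-parts intuition that phi tries to formalise.

# Toy illustration of the "integration" intuition behind phi.
# NOT the real IIT calculation (no minimum-information partition,
# no cause-effect repertoires), just a crude "whole minus sum of
# parts" mutual-information gap for a two-node binary network,
# assuming every past state is equally likely.

from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits for a list of equally likely (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def step_coupled(a, b):
    # Each node's next state depends on the other node as well.
    return a ^ b, a

def step_uncoupled(a, b):
    # Each node only copies itself; nothing crosses the cut.
    return a, b

for name, step in [("coupled", step_coupled), ("uncoupled", step_uncoupled)]:
    past = list(product([0, 1], repeat=2))              # all four past states, equally likely
    present = [step(a, b) for a, b in past]
    whole = mutual_information(list(zip(past, present)))
    part_a = mutual_information([(p[0], q[0]) for p, q in zip(past, present)])
    part_b = mutual_information([(p[1], q[1]) for p, q in zip(past, present)])
    print(f"{name:9s}: whole = {whole:.1f} bits, parts = {part_a + part_b:.1f} bits, "
          f"gap = {whole - (part_a + part_b):.1f} bits")

The coupled network's past specifies its present perfectly when taken as a whole (2 bits), while each node on its own predicts nothing about its own future (0 bits); the uncoupled network shows no such gap. That gap, rather than intelligence or behaviour, is the kind of thing Koch is claiming consciousness consists of.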
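The zap and zip measure lends itself to an even rougher back-of-the-envelope sketch. The real perturbational complexity index is computed from TMS-evoked EEG recordings using Lempel-Ziv complexity and careful normalisation; the toy version below just asks zlib how compressible two made-up response patterns are, with random values standing in, very loosely, for a rich and differentiated reverberation.

# Back-of-the-envelope version of the "zap and zip" intuition:
# compress a made-up "response" and see how much structure survives.
# Only a sketch; the real measure works on TMS-evoked EEG data with
# Lempel-Ziv complexity and proper normalisation.

import random
import zlib

random.seed(0)

def zipped_ratio(values):
    """Compressed size / raw size: higher means less compressible, i.e. more differentiated."""
    raw = bytes(values)
    return len(zlib.compress(raw)) / len(raw)

n = 5000
stereotyped = [(i % 2) * 255 for i in range(n)]             # the whole "brain" just rings in unison
differentiated = [random.randrange(256) for _ in range(n)]  # widespread, varied reverberation

print("stereotyped response   :", round(zipped_ratio(stereotyped), 3))
print("differentiated response:", round(zipped_ratio(differentiated), 3))

The stereotyped pattern zips down to almost nothing, while the varied one barely compresses at all. That contrast, between a response that is "zipped up" and one that keeps its structure, is what the technique looks for in an actual brain.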
Brief Comments

Although I found this interview to be a good overview, it still left me with a lot of questions about IIT. So, before I make any comments, I want to share a bit more research that I found helpful.

From the Wikipedia Entry on Integrated Information Theory:
  • If we are ever going to make the link between the subjective experience of consciousness and the physical mechanisms that cause it, IIT assumes the properties of the physical system must be constrained by the properties of the experience.
  • Therefore, IIT starts by attempting to identify the essential properties of conscious experience (called "axioms"), and then moves on to the essential properties of the physical systems underneath that consciousness (called "postulates").
  • Every axiom should apply to every possible experience. The most recent version of these axioms states that consciousness has: 1) intrinsic existence, 2) composition, 3) information, 4) integration, and 5) exclusion. These are defined below.
  • 1) Intrinsic existence — By this, IIT means that consciousness exists. Indeed, IIT claims it is the only fact I can be sure of immediately and absolutely, and this experience exists independently of external observers.
  • 2) Composition — Consciousness is structured. Each experience has multiple distinctions, both elementary and higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • 3) Information — Consciousness is specific. Each experience is the particular way that it is because it is composed of a specific set of possible experiences. The experience differs from a large number of alternative experiences I could have had but am not actually having.
  • 4) Integration — Consciousness is unified. Each experience is irreducible and cannot be subdivided. I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). Seeing a blue book is not reducible to seeing a book without the colour blue, or the colour blue without the book.
  • 5) Exclusion — Consciousness is definite. Each experience is what it is, neither less nor more, and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book. I am not having an experience with less content (say, one lacking colour), or with more content (say, with the addition of feeling blood pressure).
  • These axioms describe regularities in conscious experience, and IIT seeks to explain these regularities. What could account for the fact that every experience exists, is structured, is differentiated, is unified, and is definite? IIT argues that the existence of an underlying causal system with these same properties offers the most parsimonious explanation. The properties required of a conscious physical substrate are called the "postulates" because the existence of the physical substrate is itself only postulated. (Remember, IIT maintains that the only thing one can be sure of is the existence of one's own consciousness.)

From two articles (1,2) about the "adversarial collaboration" between IIT and Global Workspace Theory (GWT):
  • Both sides agree to make the fight as fair as possible: they’ll collaborate on the task design, pre-register their predictions on public ledgers, and if the data supports only one idea, the other acknowledges defeat.
  • Rather than unearthing how the brain brings outside stimuli into attention, the fight focuses more on where and why consciousness emerges.
  • The GWT describes an almost algorithmic view. Conscious behavior arises when we can integrate and segregate information from multiple input sources and combine it into a piece of data in a global workspace within the brain. According to Dehaene, brain imaging studies in humans suggest that the main “node” exists at the front of the brain, or the prefrontal cortex, which acts like a central processing unit in a computer.
  • IIT, in contrast, takes a more globalist view where consciousness arises from the measurable, intrinsic interconnectedness of brain networks. Under the right architecture and connective features, consciousness emerges. IIT believes this emergent process happens at the back of the brain where neurons connect in a grid-like structure that hypothetically should be able to support this capacity.
  • Koch notes, "People who have had a large fraction of the frontal lobe removed (as it used to happen in neurosurgical treatments of epilepsy) can seem remarkably normal." Tononi added, “I’m willing to bet that, by and large, the back is wired in the right way to have high Φ, and much of the front is not. We can compare the locations of brain activity in people who are conscious or have been rendered unconscious by anesthesia. If such tests were able to show that the back of the brain indeed had high Φ but was not associated with consciousness, then IIT would be very much in trouble.”
  • Another prediction of GWT is that a characteristic electrical signal in the brain, arising about 300-400 milliseconds after a stimulus, should correspond to the “broadcasting” of the information that makes us consciously aware of it. Thereafter the signal quickly subsides. In IIT, the neural correlate of a conscious experience is instead predicted to persist continuously while the experience does. Tests of this distinction, Koch says, could involve volunteers looking at some stimulus like a scene on a screen for several seconds and seeing whether the neural correlate of the experience persists as long as it remains in consciousness.
  • It may also turn out that no scientific experiment can be the sole and final arbiter of a question like this one. Even if only neuroscientists adjudicated the question, the debate would be philosophical. When interpretation gets this tricky, it makes sense to open the conversation to philosophers.

Great! So let's get on with some philosophising.

Right off the bat, the first axiom of IIT is problematic. It is trying to build upon the same bedrock that Descartes did. But that is an infamously circular argument that rested on first establishing that we are created by an all-perfect God rather than an evil demon. Descartes said this God wouldn't let him be deceived about seeing things "clearly and distinctly," which led to his famous claim: therefore, I am. Now, the first axiom of IIT claims consciousness is the only fact one can be sure of "immediately and absolutely." This is the same argument, and it still doesn't hold up. The study of illusions and drug-altered states of experience shows us that consciousness is not perceived immediately and absolutely. And as Keith Frankish pointed out in my post about illusionism, once that wedge of doubt is opened up, it cannot be closed.

Regardless, let's grant that the subjective experience each of us thinks we are perceiving does actually constitute a worthwhile data point. (Even if this isn't a certain truth, it's a pretty excellent hypothesis.) Talking to one another about all of our individual data points is how IIT comes up with its five axioms. But would it follow that ALL conscious experiences have the same five characteristics? No! That would be an enormous leap of induction from a specific set of human examples to a much wider universal rule.

However, despite the universal pretensions of IIT and its definition of phi that could theoretically (though not currently) be calculated for any physical system, when Koch is talking about consciousness, he is occasionally referring only to the very restricted human version of it that requires awareness and self-report. This makes him confusing at times, but that's certainly the consciousness he's talking about for the upcoming "adversarial collaboration" that will test predictions from proponents of IIT and GWT. It's great to see such falsifiable predictions being made and tested, and of course human reports are where we have to start our scientific studies of consciousness, but it's hard to see how these tests will actually end the debate any time soon. Why? Because as we have seen throughout this series, we just don't have a settled definition for the terms being used in this debate. One camp's proof of consciousness is another camp's proof of something else. They could all seemingly just respond to one another, "but that's not really consciousness."

So, what does IIT say consciousness really is? Koch reports:

>>> "IIT says fundamentally what consciousness is, is the ability of any physical system to exert causal power over itself."

I've heard Dan Dennett say that vigorous debates occur about whether tornadoes fit this kind of definition of consciousness. Their prior states influence their current and future states. That's a kind of causal power. They are also a physical system that acts as one thing even though none of the constituent parts act the way the system as a whole does. But does anyone really think a tornado is conscious? Koch continues:

>>> "My consciousness exists for itself; it doesn’t depend on you, it doesn’t depend on my parents, it doesn’t depend on anybody else but me."

This isn't strictly true, of course. Everything is interrelated. We have no evidence of any uncaused causes in this universe, so Koch's consciousness clearly depends on lots of outside factors. If I shouted that at him, would his consciousness be able to stop him from hearing it? I imagine that's not exactly what Koch meant, but between this and the similarity to Descartes' argument using God to see the world clearly and distinctly, IIT strikes me as practically a religious viewpoint. Tellingly enough, I found out that it is.

In an essay at Psychology Today titled, "Neuroscience's New Consciousness Theory Is Spiritual", there was this passage:
  • Most rational thinkers will agree that the idea of a personal god who gets angry when we masturbate and routinely disrupts the laws of physics upon prayer is utterly ridiculous. Integrated Information Theory doesn't give credence to anything of the sort. It simply reveals an underlying harmony in nature, and a sweeping mental presence that isn't confined to biological systems. IIT's inevitable logical conclusions and philosophical implications are both elegant and precise. What it yields is a new kind of scientific spirituality that paints a picture of a soulful existence that even the most diehard materialist or devout atheist can unashamedly get behind.

I'll let the "inevitability" of IIT's logical conclusions slide for now, but is this "sweeping mental presence" just another form of idealism, which George Berkeley used to argue that the mind of God was everywhere and caused all things? It's not from the same source or for exactly the same reason, but it's related. As an essay at the Buddhist magazine Lion's Roar points out, "Leading neuroscientists and Buddhists agree: 'Consciousness is everywhere'." Here we find that:
  • Buddhism associates mind with sentience. The late Traleg Kyabgon Rinpoche stated that while mind, along with all objects, is empty, unlike most objects, it is also luminous. In a similar vein, IIT says consciousness is an intrinsic quality of everything yet only appears significantly in certain conditions — like how everything has mass, but only large objects have noticeable gravity.
  • In his major work, the Shobogenzo, Dogen, the founder of Soto Zen Buddhism, went so far as to say, “All is sentient being.” Grass, trees, land, sun, moon, and stars are all mind, wrote Dogen.
  • Koch, who became interested in Buddhism in college, says that his personal worldview has come to overlap with the Buddhist teachings on non-self, impermanence, atheism, and panpsychism. His interest in Buddhism, he says, represents a significant shift from his Roman Catholic upbringing. When he started studying consciousness — working with Nobel Prize winner Francis Crick — Koch believed that the only explanation for experience would have to invoke God. But, instead of affirming religion, Koch and Crick together established consciousness as a respected branch of neuroscience and invited Buddhist teachers into the discussion.
  • At Drepung Monastery, the Dalai Lama told Koch that the Buddha taught that sentience is everywhere at varying levels, and that humans should have compassion for all sentient beings. Until that point, Koch hadn’t appreciated the weight of his philosophy. "I was confronted with the Buddhist teaching that sentience is probably everywhere at varying levels, and that inspired me to take the consequences of this theory seriously," says Koch. "When I see insects in my home, I don't kill them."

These religious motivations don't necessarily mean that the motivated reasoning behind IIT is unsound. But they sure make me skeptical. The cracks I see in IIT's logic—e.g. starting with seeing consciousness immediately and absolutely, making leaps from human experience to all experience, seeing islands of uncaused causes everywhere—are enough to give me pause. Despite all the fancy math plastered on top of these ideas, I'm still fundamentally unconvinced that consciousness is the integration of information, yet somehow "can't be computed and is the feeling of being alive." As for what I think consciousness really is, it's finally time for me to say. Hope I can get it down clearly!

What do you think? Is IIT flawed to you too? What useful concepts or calculations might it offer?

--------------------------------------------
Previous Posts in This Series:
Consciousness 1 — Introduction to the Series
Consciousness 2 — The Illusory Self and a Fundamental Mystery
Consciousness 3 — The Hard Problem
Consciousness 4 — Panpsychist Problems With Consciousness
Consciousness 5 — Is It Just An Illusion?
Consciousness 6 — Introducing an Evolutionary Perspective
Consciousness 7 — More On Evolution
Consciousness 8 — Neurophilosophy
Consciousness 9 — Global Neuronal Workspace Theory
Consciousness 10 — Mind + Self
Consciousness 11 — Neurobiological Naturalism
Consciousness 12 — The Deep History of Ourselves
Consciousness 13 — (Rethinking) The Attention Schema
19 Comments
SelfAwarePatterns link
4/13/2020 09:29:35 pm

Can't say I'm a fan of IIT. I think your criticisms are spot on.

From a distance, the theory sounds like it's on the right track. Who doesn't think integration and information are essential? But IIT in practice comes with a lot of philosophical baggage. It's trying to explain the ghost in the machine, which is equivalent to a theory of astronomy that tries to explain the celestial spheres and firmament.

Despite the name of the theory, Koch insists it isn't about information, but about causal powers. But what else is information, but concentrated and streamlined causation? An information processing system is a system that has causal effects on itself. Saying that such a system can't be computed seems confused at best.

I've read dozens of books on consciousness and neuroscience. After a lot of work, I can understand what a lot of neuroscience papers are talking about. But the more I study IIT, the more confusing it seems. Honestly, it feels like intentional obfuscation.

One of the things I like about GWT, HOT, AST, and other related theories, is that they fit within neuroscience. In some cases, they re-interpret it, or make predictions about what it will find, but they relate to that science. IIT doesn't seem like that. It seems like something separate, abstract, and apart.

All of which is to say, I'll be surprised if IIT turns out to be anything other than a dead end.

Reply
James of Seattle
4/13/2020 10:47:18 pm

I think your observations are essentially correct, but I still think there is a baby or two in that bath water.

So, ... Koch:
1. neural correlates ... stimulate neuron and see mother. IIT gives zero explanation of how that might work.

2. “[W]e need a fundamental theory of consciousness”. I agree, and am working on this. GWT, AST, HOT do not provide this. IIT begins to do this, and then goes off the rails.

3. “IIT goes back to Aristotle and Plato. In science, something exists to the extent that it exerts causal power over other things. Gravity exists because it exerts power over mass.“ Aristotle! WOOT! This is where it gets fundamental, but doesn’t get it quite right. The role of “causality” is going to be very important, and is going to take a long discussion. [Getting this right is what is holding up my posting my own theory.]. Long story short, causality is a pattern in how physical things change. But Aristotle’s 4 causes (4 aspects of causation) are key.

4. “IIT says fundamentally what consciousness is, is the ability of any physical system to exert causal power over itself. ” [See those things way over there? Those are the rails.]

5. “Look for the structure within the brain, or the CPU, that has the maximal causal power, ”. I think I know where this comes from. Erik Hoel came out of Tononi’s lab, and mathematically worked up a value he terms Effective Information which is a variation on the Information Theoretic concept of Mutual Information. (Google “Hoel effective information”.) [I hereby predict that this value will play a significant role in a fundamental theory of human consciousness.] Quick version: Mut. Info. says if you have 2 variables, and measuring one tells you something about the other (via a percentage), they share mutual information. Effective information is about coarse-graining. If you have one system with lots and lots of variables, it might, as a whole, have some mutual information with an outside variable. However, if you can coarse-grain those lots and lots of variables into a few, or better yet, one, then the mutual information between that one and the outside variable can be higher, and some variation of coarse-graining will generate a maximum mutual information. (I expect this max is what motivates Phi-max of IIT, but can’t say that for sure.) Hoel suggests (I think) that this Eff. Info. *is* emergent causation. I think that’s overstating things, but I think that’s what Koch is talking about when he talks about phi as measuring causal power.

I think this Effective Information is one of the babies in the bath water. The key is the coarse-graining. How would that work? How can you get a neural system to do coarse-graining? The answer is pattern recognition. So one system might be lots and lots of variables, say, pixels. A pattern recognition system can coarse-grain that down to a single variable, maybe even a single bit: on or off. Depending on what pattern is being recognized, you could give that recognition a label, such as “cat image”. The fundamental point is that the one bit, however represented, will potentially share mutual information with a physical system “out there”. This mutual information is the fundamental basis of aboutness and intentionality.
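[If numbers help, here is a quick toy version of that coarse-graining point. It is my own made-up example, not Hoel’s actual Effective Information calculation: a hidden “cat present” variable drives lots of noisy “pixel” bits, each of which only weakly tracks it, while a single majority-vote bit (a crude pattern recognizer) shares several times more mutual information with it.]

# Toy example: coarse-graining many noisy "pixels" into one
# "cat image" bit increases mutual information with the thing
# out there.  My own sketch, not Hoel's actual maths.

import random
from collections import Counter
from math import log2

random.seed(1)

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

samples, n_pixels, flip = 20000, 9, 0.4     # each pixel misreports the cat 40% of the time
trials = []
for _ in range(samples):
    cat = random.randint(0, 1)
    pixels = [cat if random.random() > flip else 1 - cat for _ in range(n_pixels)]
    vote = int(sum(pixels) > n_pixels / 2)  # the coarse-grained "cat image" bit
    trials.append((cat, pixels[0], vote))

print("I(one pixel ; cat) ~", round(mutual_information([(p, c) for c, p, _ in trials]), 3))
print("I(vote bit ; cat)  ~", round(mutual_information([(v, c) for c, _, v in trials]), 3))

[One pixel carries only a few hundredths of a bit about the cat; the coarse-grained vote carries several times that.]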

*
[more to say, but my brain hurts]
[did you see the unitrackers?][points]
[btw, don’t wait for the “more to say”. May or may not happen]

Reply
Ed Gibney link
4/16/2020 12:38:57 pm

Your brain hurts?? How do you think I'm doing after writing all these posts and responding to dozens of tough comments from you guys! I had to take a couple of days off before I could even begin to look at these latest ones. : )

Glad to see we're all on board with seeing flaws in IIT. I had the same thought as Mike that the more I read about it, the less it made sense. I almost apologised for the delay in posting this essay precisely because it took me so long to get to the other side of that process.

Something I still don't understand yet about the positions you two seem to be expressing is the role that information plays. One quick question for both of you — are you mathematical realists?

(E.g. I found this article while googling "Hoel Effective Information"
https://medium.com/@eutelic/consciousness-and-mathematical-realism-298a61f981bc, which led to a Stanford entry as well https://plato.stanford.edu/entries/platonism-mathematics/)

Mike,
I know that we went deep on this already and you gave me a book to dive in further. I'll just say now that I was baffled by this latest statement:

>>>"But what else is information, but concentrated and streamlined causation?"

I might instead ask, "What is information, but causation stripped of its causal power?" Maybe your answer about mathematical realism, and the role that plays in your notion of where causation itself comes from, would help address this?

James,
I wonder what your interpretation of my questions for Mike is too. Outside of that, I do like the ideas of "coarse graining" and "pattern recognition" for how the brain models reality and how it can think about something like a Platonic concept.

Reply
SelfAwarePatterns link
4/16/2020 02:18:18 pm

Ed,
I would describe myself as a mathematical semi-realist. Math is a tool to model relations in reality. This is shown by the fact that when we find things it can't model, we expand it to do so, as Newton invented calculus to handle gravitation and motion.

But the tool can also be used to model things that don't actually exist. In that sense, it's like any other language we can use to describe things. So I'm not a platonist.

In what sense do you see information stripped of its causal power? I suppose you could say DNA by itself has no causal power without its transcription proteins and enzymes, but when we look at the actual interaction these entities have with each other, it's completely causal, in a way that I can't see changes when we look at other intersections between the actual physical storage of information and processing of that information. And even if we say the information itself has no power, the information processing certainly seems to.

Reply
James of Seattle
4/16/2020 07:24:25 pm

[Ed, I stand in awe of anyone who can post a coherent discussion every few days, like you and Mike, especially when it’s not their “day job”.]

Re: mathematical realism — I’m a pattern realist. I mean, that’s my second axiom. I think mathematics is a subset of all patterns. So, patterns are mind independent, but a given “mind” is largely defined by the set of patterns it recognizes, so “mind” is pattern dependent. BTW, my support for this leans heavily on Dennett’s paper “Real Patterns”.

As for information and causation, I think a fundamental understanding of these is key for understanding Consciousness. My understanding goes like this:

1. Any transformation of matter (any physical event) creates Mutual Information [in caps because it is an Information Theory defined term]. This (I think) is true at the quantum level, and (I think) is the import of “entanglement”.

2. Causation *is* a pattern, specifically, it’s a combination of two subpatterns: a subpattern of matter before transformation and a subpattern after transformation, such that the two subpatterns share mutual information in the Information Theory sense. Thus, recognizing the second pattern gives some probability that the first subpattern existed previously, and recognizing the first subpattern will provide a probability that the second subpattern will exist in the future. Note that my understanding does not (yet) explain what causal “power” is, and suggests that notion might be misleading. [i don’t feel strongly about this last part.]. A good paper for tying information and causality to patterns is “Patterns, Information, and Causation” by Holly Andersen (http://philsci-archive.pitt.edu/13143/1/PatternsInformationCausation_JPhil_preprint.pdf)[also relies on Dennett’s “Real Patterns”].

3. The fundamental basis of Intentionality is Mutual Information. Mutual Information is an affordance of value.
A. Example of simple case:
A bacterial food source creates sugar molecules which diffuse away from the source. (The source “causes” sugar molecules to float away). The sugar molecule has mutual information with respect to the source. A bacterial surface molecule recognizes that sugar and causes a change (a signal) inside the cell. That signal carries the same Mutual Information as the sugar molecule. [This is a COPY operation, computationally.] The cell has a large number of receptors, some, not all, of which generate a signal in a given time interval. This set of signals will have mutual information with respect to the source. This set of signals also constitutes a pattern (a ratio of actual signals/possible signals). If some mechanism creates a physical state which is based only on the ratio, and not the specific individual signals, that is a coarse-graining, which Hoel showed can have greater Mutual Information. In fact, this state will have mutual information with respect to a new physical variable: the proximity of the cell to the source. The closer to the source, the higher ratio of signal. Assuming this state persists during a period where the sugar-receptor signal mechanism repeats, the first state can be considered a memory state, with mutual information with respect to the closeness of the source at that time. Now after time passes some sugar receptors will have stopped signaling (the sugar floated away) and some new receptors will have started signaling, so there may be a new ratio, with mutual information with respect to the new proximity, either the same or closer or farther. Suppose we have a new mechanism that can compare the new ratio with the memory ratio, and if the new ratio is the same or smaller, it generates a new signal, which we’ll call the decision signal. This signal has mutual information with respect to a “causal” pattern, namely, the pattern of moving closer to the source, which would cause more of the surface receptors to signal. Specifically: if we get the signal, the moving closer didn’t happen.

So now what? This “decision” signal has mutual information, and this mutual information is an affordance to associate an action with the associated pattern (moving closer or farther from the source). Let’s play the part of nature and create a bunch of cells that can all move two ways. The first way is move in a straight line. The second way is move randomly, which we will call “tumbling”. Now, acting as nature, in some cells we link the “decision signal” to moving in a straight line. In other cells we link the signal to tumbling. Let’s assume the default, i.e, no signal, means go straight. So, when the first group gets the decision signal, which correlates (mutual information) with not getting closer to the source, those cells go straight, which means if they are going away from the source

Reply
James of Seattle
4/16/2020 09:50:56 pm

[seems I hit a length limit. Kinda wish there was a warning. Let’s see if I can remember the rest]

... which means if they are going away from the source they just keep on going. For the second group, when they get the signal they start tumbling, choosing random directions until they find one where they start moving generally closer to the source, in which case the signal stops and they go straight again. Thus, the signal becomes an affordance of meeting a goal, in this case survival.
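[In case it helps to see the whole story run, here is a toy one-dimensional run-and-tumble simulation of that scenario. It is my own sketch, a cartoon rather than a model from the chemotaxis literature, and it compresses all the receptor and memory machinery into a single remembered sugar reading, but it makes the point: cells that tumble when the “didn’t get closer” signal fires end up hovering around the source, while cells that ignore the signal just sail past it.]

# Toy 1-D run-and-tumble: the "decision signal" fires when the new
# sugar reading is not higher than the remembered one (i.e. the cell
# did not get closer).  One group ignores the signal and always goes
# straight; the other tumbles (picks a random direction) on the signal.

import random

random.seed(2)

def sugar(position):
    """Sugar concentration, highest at the source (position 0)."""
    return 1.0 / (1.0 + abs(position))

def average_final_distance(tumble_on_signal, steps=200, n_cells=500):
    total = 0.0
    for _ in range(n_cells):
        position = random.uniform(-50, 50)
        direction = random.choice([-1, 1])
        memory = sugar(position)                   # remembered reading (the memory state)
        for _ in range(steps):
            position += direction
            reading = sugar(position)
            signal = reading <= memory             # "didn't move closer" signal
            if signal and tumble_on_signal:
                direction = random.choice([-1, 1]) # tumble
            memory = reading
        total += abs(position)
    return total / n_cells

print("always straight :", round(average_final_distance(False), 1), "steps from source on average")
print("tumble on signal:", round(average_final_distance(True), 1), "steps from source on average")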

Mutual Information can likewise be traced through to human consciousness. The signal from a pattern recognition mechanism thus has mutual information relative to the pattern tracked. This mutual information, I suggest, is the basis of intentionality (and “what it’s like-ness”). And coupling this Mutual Information to a mechanism which gains value via its association with the pattern is the basis of consciousness, say I.

*

Reply
Ed Gibney link
4/17/2020 12:03:27 pm

Mike,

That helps to hear you aren't a math platonist. I'm not either so that's one less thread to pull on. I do have thoughts about math being only an approximate model of reality (the never-ending natures of both pi and the polygons of calculus are good examples of this), but that's not important here.

In what sense do I see information stripped of its causal power? Just in the way we already discussed that information processing and the information processor are inextricably linked. So, information alone has no power. And any differences between processors will incur changes in the experience those processors have. Skipping ahead to an example in my next post, since there are more than two objects in the universe, doesn't every single thing experience gravity and therefore "process information"? And if so, does that mean processing information and panpsychism are equivalent, i.e. both occur everywhere?

James,

Sorry about the length limit! I've been bitten by that myself in the past. It's out of my control, but maybe I should put a warning on my blog rules. Hopefully it won't get you again as I know it's very frustrating to try to recapture something you've written.

I'll have to check out that Dennett paper on pattern realism.

As for information and causation, I think they might only be "key for understanding consciousness" if you think these labels of Mutual Information, Causality, Signals, and Patterns etc. actually aid in understanding. Right now they seem like a foreign language to me that I can just about understand and speak, but only with great effort and time. It sorta feels like the use of the symbols of formal logic or higher mathematics. I guess those are useful for standardising and universalising the discussion between experts, but I'm not sure the discussion of consciousness has reached that point yet. At least not for me. I like the plain talk of sugars and tumbling towards them right now. When you say — "Thus, the signal becomes an affordance of meeting a goal, in this case survival." — I think you've added a layer of description that I didn't need to just understand the goal being met. Maybe I'll read Dennett and Holly Andersen though and have some a-ha moments because of all this. I definitely appreciate the effort to communicate all this so thank you for that.

Reply
SelfAwarePatterns link
4/17/2020 03:02:36 pm

Ed,
Information is a key concept in physics, so yes, every system in the universe processes information. It's just a matter of how much information processing is happening in relation to the magnitude of the energy involved. A pump has a low information to magnitude ratio, whereas the device you're reading this with has a high information to magnitude ratio.

However pancomputationalism doesn't automatically lead to panpsychism (at least not the limited version of pancomputationalism). Not all computation is consciousness, just as not all computation is Tetris, World of Warcraft, or accounting software, except by excessively broad interpretations of these things.

Reply
Ed Gibney link
4/17/2020 04:00:46 pm

Mike — so how do you draw the line between pancomputationalism and panpsychism? If information processing is the thing for you, then why does only certain information qualify? Which bits are those? What information is used to choose which information? I assume it's not just information about living / biological things in your book.

Reply
SelfAwarePatterns link
4/17/2020 08:59:18 pm

Ed,
In my view, consciousness is information processing for a certain functional toolset, one that enables an organism to expand the scope of what it can react to. I see the functionality as a hierarchy.
1. Reflexes and fixed action patterns. The organism reacts to direct stimuli.
2. Perception: image maps from distance senses, expanding what 1 reacts to in space.
3. Volition: cause and effect prediction in service of a valenced goal to select which reflexes to allow or inhibit: action selection. This expands the scope of what 1 and 2 can react to in time.
4. Deliberative imagination: action-scenario simulations, further expanding the time scope of 1-3.
5. Introspection: deep recursive metacognition as a feedback mechanism, adding predictions of the self, along with symbolic thought.

I see 2 providing sensory consciousness, 3 providing primary consciousness, 4 episodic memory, with 5 necessary for human consciousness. Some machines (such as self driving cars) have reached or are approaching 2.

Note that in a living organism, this is all in service of homeostasis and reproduction, of selfish genes. But if that base layer were swapped out for other purposes in a machine, we'd have a machine consciousness. (Unless we insist a consciousness must have living motivations, in which case we'll need another word to describe the machine variety.)

Reply
James of Seattle
4/17/2020 11:18:56 pm

Mike, regarding machine Consciousness and your levels, where do you place a self-driving car that relies on GPS and traffic data to determine the fastest route?

Reply
SelfAwarePatterns link
4/18/2020 12:09:28 am

Ed,
For self driving cars, it's the modeling of their immediate environment that I think has them approaching 2. The GPS and traffic data stuff enhance their functionality, but from the standpoint of consciousness, it's a cheat. It's not ego centric data constructed from sensory stimuli. And if that network data is wrong for any reason, it can leave the car helpless.

It's worth noting that biology often cheats as well. Ants appear to navigate on their own, but it turns out they're vitally dependent on pheromone trails. A lot of sophisticated looking behavior in animals is actually reflexes cued off some environmental trigger.

Reply
SelfAwarePatterns link
4/18/2020 12:12:08 am

Oops, sorry James. I thought that question came from Ed. Just realized I muffed it, both in the address and replying in the wrong spot.

Reply
James of Seattle
4/18/2020 01:31:35 am

A cheat? Why not alternate sensory data? Not ego-centric? Then how does the car place itself at a particular position on the map? If their sensory data is wrong, it can leave anyone helpless.

Why is relying on scent versus sight a cheat? A lot of sophisticated behavior in the human brain is actually reflexes cued off some environmental trigger.

So if the car is approaching level 2, how will we know when it gets there?

*
[Have at you!]
[I know/expect you’re being non-commital, but it’s okay to speculate.]

Reply
SelfAwarePatterns link
4/18/2020 12:16:20 pm

For the car, I see it as a cheat because the data is already processed and pre-prepared. But it's only a cheat in terms of consciousness, in developing a system like us, not in overall functionality.

In the case of the ant, it's not the scent, it's the fact that the pheromone trail takes the place of a navigational model. (E. O. Wilson once used pheromones to make ants spell his name. Some ants may have a little more volition than that; I'm not sure which species he did it with.)

How do we know when the car reaches level 2? It's admittedly a judgment call. But their inability to handle situations like rain storms, construction zones, and other novel situations seems like a strike against them. On the other hand, our standards for how they should respond to these is pretty high, much higher than we might expect of many animals, although the animals generally manage to find food and move around in those circumstances, albeit not always with good results.

Ed Gibney link
4/19/2020 01:53:51 pm

Hi again guys. I went off and finished some reading and now have time to share a few comments on all this. I'll try to keep it brief. (For comment length limits and also because I know my post of my own theory is going to take a lot of time and address some of these things.)

>>James>> The role of “causality” is going to be very important, and is going to take a long discussion. [Getting this right is what is holding up my posting my own theory.]. Long story short, causality is a pattern in how physical things change.

I'd have said "causality is *described by* a pattern in how physical things change." I wonder if you're getting stuck posting your own theory because you're focusing on the *whys* of consciousness rather than the *hows* as I described the difference between Chalmers' Hard (I say impossible) problem and the Easy problems of science. For example, I'm focusing on the long evolutionary history of the hows of life, from which (hopefully) a sensible definition of consciousness will be easier to see. That's all I'm able to bite off at this point, but I think that'd be a good bite.

>>James>> So, patterns are mind independent, but a given “mind” is largely defined by the set of patterns it recognizes, so “mind” is pattern dependent. BTW, my support for this leans heavily on Dennett’s paper “Real Patterns”.

I really enjoyed Dennett's paper so thanks for sharing that! I would disagree with any interpretation of this, though, that said patterns are mind independent. I take that independence to mean they could exist without minds, and I just don't see that as feasible. I would put it that minds are indeed very much characterised by the shorthand rules they come up with to compress information about the world (aka patterns).

>>James>> Any transformation of matter (any physical event) creates Mutual Information [in caps because it is an Information Theory defined term]. This (I think) is true at the quantum level, and (I think) is the import of “entanglement”.

Once I straighten out the mystery of consciousness I'll go tackle this whole quantum mechanics business. ; )

>>James>> A good paper for tying information and causality to patterns is “Patterns, Information, and Causation” by Holly Andersen

Sorry but I couldn't get through the abstract. Far too much jargon. The first sentence required me to understand "collection of information-theoretic relationships" and "patterns instantiated in the causal nexus." I did not understand that.

>>Mike>> In my view, consciousness is information processing for a certain functional toolset, (one that enables an organism to expand the scope of what it can react to).

Okay, so this sounds like you're a functionalist then, right? And if you are relying on the functions to distinguish between which information processing matters and which does not, then why not just speak of the functions and leave the information processing part of it out?

>>Mike>> ...one that enables an organism to expand the scope of what it can react to. I see the functionality as a hierarchy. 1. Reflexes and fixed action patterns. 2. Perception. 3. Volition. 4. Deliberative imagination. 5. Introspection.

This is promising to me as I too will describe the functioning as a hierarchy of what an organism is responding to. I differ in a few important things though.

1) I think you have to start with something that differentiates an organism from a non-organism. In your first stage, you say "the organism reacts to direct stimuli." Well, a rock breaks if you hit it, and it will erode if you pour the right acid on it. That's "reacting to a stimulus" and it will do so consistently, over and over and over again until it's gone (which you could also say about an ant). I might draw on James' use of the COPY function as a good description of something that marks the break between non-organism and organism. This video on abiogenesis is one I cite as a clear and simple explanation of a theory on this:

https://www.youtube.com/watch?time_continue=14&v=U6QYDdgP9eg

I might even go so far as to say that you *have* to have this COPY function *first* before any of the rest of the stages in consciousness make sense. (Not sure about that yet.) That would draw a line that the driverless car hasn't crossed yet, even though it has skipped ahead and does contain systems that function as your stage 2 perception. But without COPY, it won't have a sense of self yet. Does this imply a computer virus might be said to start to gain consciousness? Maybe so.

2) In your stage two, you talk about "image maps from distance senses." This is something I was actually thinking about recently as I was trying to build up a hierarchy of senses and realised that none of them are actually "from a distance." All senses just detect things happening at the surface of the sense receptor—either a chemical

Reply
Ed Gibney link
4/19/2020 01:55:13 pm

(I knew that one would get cut off...)

—either a chemical change, a pressure, or a vibration of touch, sound, light, etc. There's no spooky action at a distance. What the senses can do, though is help construct a model of things driving those sensations. And maybe those things drove the sensations from some distance away in either space OR time. Think of the joke of the person who doesn't understand perspective so wonders why that man (way over there) is so small. But also think of my dog who can smell that I was somewhere 8 hours ago (my chemical signature has diminished) so it's about time for me to come home.

3) I see your stages of volition, deliberative imagination, and introspection as deeply tied to notions of free will that sure seem to me like they aren't there.

>>Mike>> I see 2 providing sensory consciousness, 3 providing primary consciousness, 4 episodic memory, with 5 necessary for human consciousness.

I'm going to replace these sorts of things with the 10 steps in evolutionary epistemology that I mentioned to you on your website. I don't plan to call them different types of consciousness though. And certainly not human consciousness.

>>Mike>> Note that in a living organism, this is all in service of homeostasis and reproduction, of selfish genes. But if that base layer were swapped out for other purposes in a machine, we'd have a machine consciousness. (Unless we insist a consciousness must have living motivations, in which case we'll need another word to describe the machine variety.)

So you also see this original COPY function as basic to life. What did you have in mind for "other purposes in a machine" that you could swap in for that? It's hard for me to see something that would indeed still be called consciousness. But I bet you have a better imagination about this right now.

Reply
SelfAwarePatterns link
4/19/2020 03:17:21 pm

Ed,
I am a functionalist. But the functionality is the what, with information processing being the how. Information processing is an inescapable part of the story.

Those layers were actually something I came up with for a post on panpsychism, particularly to call attention to what is missing from a conception of consciousness that only involves reacting to the environment, as a rock or proton does.

I'm not sure if I'm grasping the COPY criterion. When James first mentioned it, I took it as something lower level. But even in terms of replication, clay and crystals seem to propagate their pattern, but aren't alive. One definition of life I saw a while back was: replication with modification. The (minute) modification leads to variances in copying success and natural selection.

On your point about distance senses, it’s worth making a distinction between mechanism and function. The mechanism of me seeing the wall on the other side of the room is photons exciting certain patterns of photoreceptors on my retina, but the functional *perception* is constructing a model from the resulting signals.

On layer 3, the word "volition" was carefully chosen (and "free will" carefully avoided). It's the high level description. The words that follow are the lower level description, which are meant to convey that there's nothing contra-causal going on.

I'm looking forward to seeing your hierarchy. Mine isn't really meant to say anything new, and I've revised it a few times since I came up with it, and will almost certainly revise it in the future. It's more a pedagogical tool than anything else.

On other purposes, it could be anything we want the machine to do: drive us somewhere, explore Mars, clear mines, build something, etc. We do those things as intermediate goals, but to the machine, it would be their reason for being.

I realize a system not interested in self actualizing violates many people’s intuitions of a conscious system, even if the technological system has most of the components. Consciousness lies in the eye of the beholder.

Reply
James of Seattle
4/19/2020 04:49:15 pm

[at the risk of delaying Ed’s next post for even a minute]

Like I said, causality is going to be a long discussion, and you only need to get into it if you want to give the whole physical explanation of how you get mutual information from the very bottom, which is quantum mechanics. Mutual Information is the key. When Mike talks about Information Processing, it’s the Mutual Information that is being processed. In a COPY operation, the mutual information in the input is being preserved (copied) in the output. So when a sugar receptor on the cell surface triggers the release of a phosphate on the inside of a cell, that’s a COPY operation. That phosphate has the same mutual information that the sugar had.

Now replication can be a COPY, but then we’re back to squinting and looking sideways.

I’m glad you liked Dennett’s paper. It will be useful to explain how you get better mutual information (what Hoel calls Effective Information) from pattern recognition mechanisms. Andersen’s paper helps explain how you get patterns from causality. The patterns are there. The mutual information is there. The information is an affordance for a mind to make use of. Natural selection generates the pattern recognition mechanisms that make use of the mutual information. But now, people can generate the mechanisms as well.

Finally, while we’re squinting and looking sideways, all matter carries mutual information with what came before, so, mutual Information is a pan-proto-psychic property.

Reply


