
Response to Thought Experiment 39: The Chinese Room

1/31/2016

Okay, this is actually a really complicated thought experiment that took me days to sort out, so please excuse the lateness of this post and bear with me through the entirety of it. Over at the Stanford Encyclopedia of Philosophy (SEP), they note that "the argument and thought experiment now generally known as The Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle. It has become one of the best-known arguments in recent philosophy. The argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science, and cognitive science generally. As a result, there have been many critical replies to the argument." In the post where I first shared this experiment, I embedded a 60-second video that explained the experiment as it was originally (and better) written, but let's take a quick look at the modified version Julian Baggini wrote up before I dive in to talk about why it has become so well known.

---------------------------------------------------
     The booth of the clairvoyant Jun was one of the most popular in Beijing. What made Jun stand out was not the accuracy of her observations, but the fact that she was deaf and mute. She would insist on sitting behind a screen and communicating by scribbled notes, passed through a curtain.
     Jun was attracting the customers of a rival, Shing, who became convinced that Jun's deafness and muteness were affectations, designed to make her stand out from the crowd. So one day, he paid her a visit, in order to expose her.
     After a few routine questions, Shing started to challenge Jun's inability to talk. Jun showed no signs of being disturbed by this. Her replies came at the same speed, the handwriting remained the same. In the end, a frustrated Shing tore the curtain down and pushed the barrier aside. And there he saw, not Jun, but a man he would later find out was called John, sitting in front of a computer, typing in the last message he had passed through. Shing screamed at the man to explain himself.
     "Don't hassle me, dude," replied John. "I don't understand a word you're saying. No speak Chinese, comprende?"

Source: Chapter 2 of Minds, Brains, and Science by John Searle (1984)

Baggini, J., The Pig That Wants to Be Eaten, 2005, p. 115.
---------------------------------------------------

The changes that Baggini made to the original experiment don't actually introduce any new concepts to discuss, so we can look directly at the huge literature that already exists on this problem. The Chinese Room was meant to "challenge the claim that it is possible for a computer running a program to have a 'mind' and 'consciousness' in the same sense that people do, simply by virtue of running the right program." This is an attack against both functionalism and the so-called Strong Artificial Intelligence position.

Functionalism, in philosophy of mind, arose in the 1950s. In this view, having a mind does not depend on having a specific biological organ such as a brain; it simply depends on being able to perform the functions of a mind, such as understanding, judging, and communicating. As Baggini explains in his discussion of this experiment: "Jun's clairvoyant booth is functioning as though there were someone in it who understands Chinese. Therefore, according to the functionalist, we should say that understanding Chinese is going on."

In contrast to such functionalism, Searle holds a philosophical position he calls biological naturalism: i.e., that consciousness and understanding require specific biological machinery that is found in brains. He believes that "brains cause minds and that actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains. Searle argues that this machinery (known to neuroscience as the 'neural correlates of consciousness') must have some (unspecified) 'causal powers' that permit the human experience of consciousness."

To see the full strength of Searle's arguments, you also have to know the difference between syntax and semantics. Syntax is concerned with the rules used for constructing or transforming the symbols and words of a language, while the semantics of a language is concerned with what those symbols and words actually mean to the human mind in relation to reality. Searle argued that the Chinese Room thought experiment "underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics."

There have been many, many criticisms of Searle's argument already, which the SEP entry has divided into four categories. Let's take them in order of interest, and I'll try to add some of my own thoughts along the way as I build towards a potential overall solution to Strong AI.

The first type of standard reply involves claims that the Chinese Room scenario is impossible or irrelevant. It's impossible because such a one-to-one, rule-based book of language responses would have to be impossibly large to actually work. For example, the number of ways to arrange 128 balls in a box (about 10^250) already exceeds the number of atoms in the universe (about 10^80). By extension, the number of ways you could combine the thousands of characters in the Chinese language would be incalculably large for today's computers. But philosophical thought experiments are always hypothetical in nature, so practical impossibilities aren't a problem.
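
To get a feel for the scale involved, here is a rough back-of-the-envelope calculation in Python. The character count and reply length are my own assumed figures, chosen purely for illustration, not anything from Searle or Baggini.

---------------------------------------------------
# A rough sense of scale for a lookup-table Chinese Room.
# Assumptions (mine, purely illustrative): ~3,000 commonly used
# Chinese characters and replies capped at 20 characters.
characters = 3000
reply_length = 20

possible_replies = characters ** reply_length  # 3000^20
print(f"{possible_replies:.2e}")               # ~3.49e+69 distinct reply strings

# And that is before pairing each possible *question* with its reply,
# which is what the rule book would actually have to list.
---------------------------------------------------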

Even so, other critics claim the experiment is irrelevant no matter how successfully the Chinese Room was built. They point out that we never really know whether another person "understands" or just acts as if they do, so why hold the Chinese Room to that standard? Pragmatically, it just doesn't matter. Searle called this the "Other Minds Reply," but Alan Turing had anticipated it thirty years earlier under the name "The Argument from Consciousness." Turing noted that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks," and the Turing test simply extends this "polite convention" to machines. He didn't intend to solve the problem of other minds (for machines or people), and he didn't think we need to. The same might be said for the Chinese Room, except that evolution and comparative anatomy dissolve this "other minds" problem for all of us biological descendants of the origin of life. The question, for now, remains open for computers, machines, or Chinese Rooms.

So the Chinese Room isn't irrelevant, and we don't care if it's practically impossible. This forces us to move on and consider the next standard criticism—the "Systems Reply." This reply concedes that the man in the room doesn't understand Chinese, but the output of the room as a whole reflects an understanding of Chinese, so you could think of the room as its own system. As Baggini pointed out in his discussion of this, the Systems Reply "isn't quite as crazy as it sounds. After all, I understand English, but I'm not sure it makes sense to say that my neurons, tongue, or ears understand English. But the booth, John, and the computer do not form the same kind of closely integrated whole as a person, and so the idea that by putting the three together you get understanding seems unpersuasive." This line of thinking agrees with Searle, who argued that no reasonable person should be satisfied with the Systems Reply without making an effort to explain how this pile of objects has become a conscious, thinking being.

From an evolutionary philosophy perspective, this criticism of the Systems Reply also chimes with what the theoretical evolutionary biologists John Maynard Smith and Eörs Szathmáry said in their analysis of ecosystems in The Origins of Life:

"Consider a present-day ecosystem—for example, a forest or a lake. The individual organisms of each species are replicators; each reproduces its kind. There are interactions between individuals, both within and between species, affecting their chances of survival and reproduction. There is a massive amount of information in the system, but it is information specific to individuals. There is no additional information concerned with regulating the system as a whole. It is therefore misleading to think of an ecosystem as a super-organism." (My emphasis added.)

However, to those who are happy with fuzzy, woo-woo, expanded-consciousness thinking, or even those using a more rational extended phenotype analysis, this could be seen to be a question of perspective. In his forthcoming book I Contain Multitudes, "Ed Yong explains that we have been looking at life on the wrong level of scale. Animals—including human beings—are not discrete individuals, but colonies. We are superorganisms." So, given small or large enough analyses of space and time, our own understanding as an "individual" could be just as unpersuasive as the systems theory that tries to attribute understanding to the "booth-John-computer" system.

This Systems Reply shows, then, how the Chinese Room raises interesting questions about what exactly an individual is, and how answering that requires one to state which perspective one is taking, but the Systems Reply doesn't resolve anything about functionalism or Strong AI. The man in the Chinese Room is a minor element in the system; he is basically just the movable arm storing or fetching data from a spinning platter in an old hard drive, so whether or not he understands Chinese is of no consequence. He is not analogous to an entire computer system.

That objection may cripple the Chinese Room as it stands, but it is still worth going on to the third standard reply. It concedes that just running an "if-this-then-respond-with-that" program doesn't lead to understanding, but argues that this isn't what Strong AI is really about. Replies along this line offer variations of the Chinese Room that hope to show a computer system could understand. "The Robot Reply" (where a robot body in the Chinese room has sensors to interact with the world) and "The Brain Simulator Reply" (where a neuron-by-neuron simulation of an entire brain is used in the Chinese room) are the best-known variations in this direction. Searle's response to these is always the same: "no matter how much knowledge is written into the program and no matter how the program is connected to the world, it is still in the room manipulating symbols according to rules. Its actions are syntactic and this can never explain what the symbols stand for. Syntax is insufficient for semantics."

Now we're talking about the difference between hardware and software as to whether or not Strong AI is possible. Searle argues that all the hardware improvements in the Robot Reply and the Brain Simulator Reply still don't lead to understanding as long as the software running them is based on syntax.

But what about improvements to the software? Dan Dennett agrees that such software "would have to be as complex and as interconnected as the human brain." The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge."

We are getting closer to having such software for our computers. Apple says Siri "understands what you say. It knows what you mean." IBM is quick to claim its much larger Watson system is superior in language abilities to Siri. In 2011, Watson beat human champions on the television game show Jeopardy!, a feat that relies heavily on language abilities and inference. IBM goes on to claim that what distinguishes Watson is that it "knows what it knows, and knows what it does not know." This appears to be claiming a form of reflexive self-awareness or consciousness for the Watson computer system. These claims are little more than marketing people misrepresenting what is going on beneath the hood of these advanced search algorithms, though.

A far more interesting step towards Strong AI was shown in Fei-Fei Li's TED talk from 2015, titled: "How we’re teaching computers to understand pictures." I've embedded the full 18-minute video below and strongly recommend it for viewing, but here are some relevant quotes to give you a quick synopsis:

"​To listen is not the same as to hear. To take pictures is not the same as to see. By seeing, we really mean understanding. No one tells a child how to see. They learn this through real world examples. The eyes capture something like 60 frames per second, so by age of three they would have seen hundreds of millions of pictures of the real world. We used the internet to give our computers this quality and quantity of experience. Now that we have the data to nourish our computer's brains we went back to work on our machine learning algorithm, which was modelled on Convolutional Neural Networks. This is just like the brain with neuron nodes, each one taking input from other nodes and passing output to other nodes, and all of it organised in hierarchical layers. We have 24 million nodes with 150 billion connections. And it blossomed in a way no one expected. At first, it just learned to recognize objects, like a child who knows a few nouns. Soon, children develop to speak in sentences. And we've done that with computers now too. Although it makes mistakes, because it hasn't seen enough, and we haven't taught it art 101, or how to appreciate natural beauty, it will learn."
This is a stunning piece of software biomimicry. But when will it be enough? Searle argued that even if a program simulated every neuron in the brain of a Chinese speaker, such that there would be no significant difference between the operation of the program and the operation of a live human brain, that kind of simulation still would not reproduce the important features of the brain—its causal and intentional states. But isn't that possible to recreate as well?

In the end, Searle agrees that this is possible. Hooray! Sort of. He allows that in the future, better technology may yield computers that understand. After all, he writes, "we are precisely such machines." However, while granting this physicalist view of the mind-body problem, Searle holds that "the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then it may be possible to create machines that have consciousness and understanding." Without the specific machinery required though, Searle does not believe that consciousness can occur.

Is this really a valid obstruction in the path of Strong AI? To understand that, we have to clear up what Searle means when he says our conscious brains are "non-computational." In theoretical computer science, a computational problem is "a mathematical object representing a collection of questions that computers might be able to solve. It can be viewed as an infinite collection of instances together with a solution for every instance." In other, clearer words from a blogger: "By non-computational I mean something which cannot be achieved in a series of computational steps, or to put it in the language of computer science, something for which no algorithm can be written down. Now what are such aspects which cannot be algorithmically achieved? The answer is any such thought processes which advance in leaps and bounds instead of in a series of sequential steps."

Is there really any such thing? I say there is not. We may imagine that having a "eureka moment" or having something come to us "out of the blue" means that those results didn't have an obvious cause, but that's merely because we don't (or can't) monitor or interrogate our subconscious brain processes thoroughly enough. In the rational universe we live in though, where all effects have causes, there is theoretically nothing that is non-computational.

Now, that doesn't mean we are deterministic or that everything is actually solvable. Just because we can look backwards and see the causes of our actions does not mean we can calculate all current influences and precisely determine future actions. We will never have enough information to do that for all actions. Because of the chaotic complexity of the present, as well as the unknowability of what the future will discover, our knowledge is always probabilistic. (Just to provide an example, we may all be 99.9999% sure the Earth goes around the Sun, but maybe someday in the future it will be revealed to us that we are actually all in a computer simulation that only makes it appear that way.) So when we are trying to determine the best ways to act to achieve our goals, we are essentially probability calculation machines, since it is probability that provides a way to cope with uncertainty or incomplete information.
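
As a toy illustration of what "probability calculation" can look like when written down, here is a simple Bayesian update in Python. The numbers are invented purely for the example.

---------------------------------------------------
# Toy example of probabilistic (Bayesian) belief updating.
# All numbers are invented for illustration.
prior = 0.5             # initial belief that a hypothesis is true
likelihood_true = 0.9   # chance of seeing some evidence if it is true
likelihood_false = 0.2  # chance of seeing that evidence if it is false

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
evidence = likelihood_true * prior + likelihood_false * (1 - prior)
posterior = likelihood_true * prior / evidence

print(round(posterior, 3))  # 0.818 -- more confident, but still not certain
---------------------------------------------------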

Most of our everyday decisions have great certainty to them. I'm very sure that eating breakfast will sustain me to lunch, turning that faucet will give me water for showering, and walking that direction will take me to a store that's open and has what I need. But occasionally we are faced with a choice where it would be impractical or impossible to research and calculate our way to a highly confident decision. Should I turn left or right to find something interesting in this market? Should I order the risotto or the tart for lunch? Should I hire this highly qualified candidate for the job or that one? Should we allow these chemicals into the food chain based on limited, short-term trials that seem to indicate they are okay? What do we do in these situations?


Depending on the importance and potential consequences of the decision, we have all sorts of ways to get over this indecision—recency bias, kin preference, George Costanza's opposite day heuristic, biological tendencies towards optimistic or pessimistic outlooks, contrarianism to social observation, or yielding to outside sources such as to people with stronger feelings or even to the flipping of a coin.

Computer scientists have begun to model this probability into their algorithms—that's really how IBM's Watson 'knows what it knows, and knows what it does not know'. Watson buzzes in when it calculates that the probability of getting the Jeopardy question right is over a certain threshold, based on how many perfect matches or corroborating sources it has found. What I haven't seen, however, is anyone trying to model the effect that emotion has on our own decision-making process. When we are happy and joyful, we try to continue doing what we are doing. When we are frustrated and angry, we try something different, and quickly. When we are sad and depressed, we take stock more slowly before deciding what to do differently. This is a problem, since, as David Hume said, reason is the slave of the passions. It's one of three fundamental ingredients I see that are missing from Strong AI efforts.
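
To sketch what combining those two things might look like, here is a toy decision routine in Python that mixes a Watson-style confidence threshold with a crude emotional adjustment of the kind described above. The function, labels, and numbers are all hypothetical; this is not how Watson actually works.

---------------------------------------------------
# Toy sketch: a confidence threshold adjusted by an "emotional" state.
# Everything here is hypothetical and purely illustrative.
BASE_THRESHOLD = 0.8  # act only if we estimate >80% chance of being right

# Crude appraisal-style adjustments: frustration pushes us to try something
# different sooner; contentment keeps us on our current course.
EMOTION_ADJUSTMENT = {
    "content": +0.10,     # happy: keep doing what we're doing, act cautiously
    "frustrated": -0.20,  # angry: lower the bar, try something different now
    "sad": +0.05,         # depressed: take stock slowly before changing course
}

def should_act(confidence: float, emotion: str) -> bool:
    """Decide whether to 'buzz in' given estimated confidence and mood."""
    threshold = BASE_THRESHOLD + EMOTION_ADJUSTMENT.get(emotion, 0.0)
    return confidence >= threshold

print(should_act(0.75, "content"))     # False: not sure enough, stay the course
print(should_act(0.75, "frustrated"))  # True: frustration lowers the threshold
---------------------------------------------------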

This missing ingredient is not surprising, since emotions have never been the strong suit of philosophers or computer scientists. Emotions have just seemed messy and mysterious. But since the 1950s, cognitive psychologists have been building an Appraisal Theory of emotions, which hypothesises a logical link between our subjective appraisals of a situation and the resulting emotions. As I wrote in my post on emotions:

---------------------------------------------
An influential theory of emotion is that of Lazarus: emotion is a disturbance that occurs in the following order: 1) cognitive appraisal - the individual assesses the event cognitively, which cues the emotion; 2) physiological changes - the cognitive reaction starts biological changes such as increased heart rate or pituitary adrenal response; 3) action - the individual feels the emotion and chooses how to react. Lazarus stressed that the quality and intensity of emotions are controlled through cognitive processes.

Now, what kind of cognitive assessments can you make? Using our logical system of finding a MECE (mutually exclusive, collectively exhaustive) framework to analyze these assessments, we can come up with something new and unique to understand our emotions. Specifically, I say:

No definitive emotion classification system exists, though numerous taxonomies have been proposed. I propose the following system. Given that emotions are responses to cognitive appraisals, they can be classified according to what we are appraising. In total, we can think about the past, present, or future, and we can judge events to be good or bad, or we can be unsure about them. We can also appraise our options for what to do about negative feelings (positive emotions need no immediate correction). Finally, our emotional responses can range from mild to extreme. A list of emotions might therefore be understood through the following table.
[Image: a table of emotions classified by what is being appraised (past, present, or future; good, bad, or unsure), the options for responding, and the intensity of the response (mild to extreme)]
---------------------------------------------
Using this chart of emotions would allow our computer algorithms to get closer to performing with the kind of intelligence we believe we humans have. We don't recognise cold, hard logic as intelligent because it is insufficient to deal with common scenarios characterised by uncertainty and a lack of evidence. A well-calibrated emotional response, however, could drive much more intelligent responses to these situations.
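
As a very rough illustration of how such an appraisal-based classification might be wired into a program, here is a small Python sketch. The dimensions follow the description above (what is appraised, when, and how intensely), but the specific emotion labels are my own guesses, not the entries from the chart.

---------------------------------------------------
# Sketch of an appraisal-based emotion lookup, following the dimensions
# described above: time frame x appraisal x intensity. The emotion words
# are my own illustrative placeholders, not the author's actual table.
EMOTIONS = {
    # (time frame, appraisal): (mild response, extreme response)
    ("past", "good"):    ("satisfaction", "pride"),
    ("past", "bad"):     ("regret", "grief"),
    ("present", "good"): ("contentment", "joy"),
    ("present", "bad"):  ("annoyance", "anger"),
    ("future", "good"):  ("hope", "excitement"),
    ("future", "bad"):   ("worry", "fear"),
}

def appraise(time_frame: str, appraisal: str, intensity: float) -> str:
    """Return an emotion label for a cognitive appraisal (intensity 0 to 1)."""
    mild, extreme = EMOTIONS.get((time_frame, appraisal), ("uncertainty",) * 2)
    return extreme if intensity > 0.5 else mild

print(appraise("future", "bad", 0.9))  # fear
print(appraise("past", "good", 0.2))   # satisfaction
---------------------------------------------------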

But in order to calibrate appropriately, we must be able to appraise something as "good" or "bad", which means we must have definitions for those terms. For the specific tasks that computers have so far been used for, we use an instrumentalist definition of something being good or bad "for this particular task." If we want to dig a hole, a shovel is good while a saw is bad. If we want to cut a piece of wood, the opposite is true. If we want to maximise airline profits, a seat price maximisation algorithm is good while a fixed price algorithm is bad. Those specific definitions make sense for those specific tasks. For Strong AI, however, we must be able to generalise "good" and "bad" in some meaningful way that we all recognise and perhaps already adhere to. This is the second missing ingredient for Strong AI.

As I pointed out in my published paper on morality, all of this is given meaning by the desire for survival. We get from what is in the world, to how we ought to act in the world, by acknowledging our want to survive. Specifically, we ought to be guided by the fundamental want we all logically must have for life in general to survive over evolutionary spans of time.* All of the rest of our emotions should then be driven by the appraisals of whether a situation or action contributes towards that goal (good) or away from it (bad). This is the way meaning could be given to emotional responses that would allow Strong AI software to intelligently navigate our probabilistic universe.

(* As a small side note: this is why we are afraid of sci-fi robots. They do not have this root emotion built into them through a biological connection. We fear they would be immoral because they have no evolved regard for biological life. For Strong AI to be accepted, we must program this desire for life to survive into any Strong AI program. Or build it in directly through the use of biological materials if possible.)

So Strong AI seems very difficult, but it is likely possible. What about the other point of this thought experiment then? What about functionalism?

This brings us to the final overarching argument against the conclusions Searle draws from his Chinese Room. All along, Searle has been saying that one cannot get semantics (that is, meaning) from syntactic symbol manipulation. But some disagree with this. As many of Searle's critics have noted, "a computer running a program is not the same as 'syntax alone'. A computer is an enormously complex electronic causal system. State changes in the system are physical. One can interpret the physical states, e.g. voltages, as syntactic 1's and 0's."

Is this really any different than what is going on in our own brains? With this question, we are really treading on what it means to have an identity and to be conscious. Non-dualists, determinists, and Buddhists alike maintain there is no "I" residing behind the brain, no immaterial "me" observing it all from somewhere else out there. As we saw in My Response to Thought Experiment 38: I Am a Brain, David Hume and Derek Parfit describe identity using bundle theory, saying that we are the sum of our parts and nothing else. Take them away one by one and eventually, nothing is left. If that's true for us, the same would be true for a Strong AI computer. As the SEP article concludes, "AI programmers face many tough problems, but one can hold that they do not have to get semantics from syntax. If they are to get semantics, they must get it from causality." To clarify that, I said at the top of this essay that semantics for a language is concerned with what syntactical symbols and words actually mean to the human mind in relation to reality. This SEP quote says that as long as changes in observed reality cause changes to the semantic symbols, then this is understanding that is no different than our own.
I touched on this in My Response to Thought Experiment 32: Free Simone, where I said:
[Image caption: The third missing ingredient for Strong AI.]
Surely, it will be difficult for any computer program we could ever design to achieve consciousness until we ourselves understand what that phrase really means, but I have great confidence we will get there for ourselves. I imagine it will then not be impossible to give a computer program some kind of Maslow's hierarchy of needs, and the capability to learn information and skills necessary to meet those needs, such that the computer program will appear to us to be alive and to seek to remain so. At that point, I believe it will become something so near to "life" that our empathies will be triggered into considering discussions of the rights that the computer program has to remain alive and to seek its goals.

We don't know and understand all of the parts of our consciousness yet, so we don't know exactly how to model them strongly enough to create Strong AI. But after my MECE attempt to Know Thyself, I feel that we are close enough to probably achieve something convincing if we put everything we know together.


Thought Experiment 39: The Chinese Room

1/25/2016

This week's thought experiment is going to require a bit of explanation, so let's take a quick look at it before I link to something much clearer.

---------------------------------------------------
     The booth of the clairvoyant Jun was one of the most popular in Beijing. What made Jun stand out was not the accuracy of her observations, but the fact that she was deaf and mute. She would insist on sitting behind a screen and communicating by scribbled notes, passed through a curtain.
     Jun was attracting the customers of a rival, Shing, who became convinced that Jun's deafness and muteness were affectations, designed to make her stand out from the crowd. So one day, he paid her a visit, in order to expose her.
     After a few routine questions, Shing started to challenge Jun's inability to talk. Jun showed no signs of being disturbed by this. Her replies came at the same speed, the handwriting remained the same. In the end, a frustrated Shing tore the curtain down and pushed the barrier aside. And there he saw, not Jun, but a man he would later find out was called John, sitting in front of a computer, typing in the last message he had passed through. Shing screamed at the man to explain himself.
     "Don't hassle me, dude," replied John. "I don't understand a word you're saying. No speak Chinese, comprende?"


Source: Chapter 2 of Minds, Brains, and Science by John Searle (1984)

Baggini, J., The Pig That Wants to Be Eaten, 2005, p. 115.

---------------------------------------------------

Baggini hasn't quite illustrated the full scenario here, so it's hard to know what he's getting at. It's actually meant to provoke a discussion about Artificial Intelligence, so watch this 60-second video to see why.

So what do you think? Can a computer ever really be more than John in his Chinese room? I'll enter my own thoughts in my computer on Friday to hopefully share something intelligent about this.

Response to Thought Experiment 38: I Am a Brain

1/22/2016

On December 8, 1995, journalist Jean-Dominique Bauby suffered a massive stroke that left him with locked-in syndrome. He awoke mentally aware of his surroundings, but almost completely physically paralysed except for some movement in his head and eyes. Over the next ten months, Bauby "wrote" a memoir called The Diving Bell and the Butterfly describing what everyday life was now like for him. I put the word "wrote" in quotation marks because Bauby was forced to use a technique called partner assisted scanning, in which he listened while a transcriber repeatedly recited a French frequency-ordered alphabet (E, S, A, R, I, N, T, U, L, etc.) until Bauby blinked to choose the next letter he wanted to use. The book took about 200,000 blinks to write, and an average word took approximately two minutes. It's a fascinating story, and in 2007 it was made into an award-winning film. I thought this would be interesting to keep in mind as we take another look at this week's thought experiment.
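
To make the mechanics of partner assisted scanning concrete, here is a tiny Python sketch of the letter-selection loop. Only the first letters of the French frequency ordering are shown, and the counting is purely illustrative.

---------------------------------------------------
# Toy simulation of partner assisted scanning: the transcriber recites a
# frequency-ordered alphabet and the writer blinks at the wanted letter.
FREQUENCY_ORDER = "ESARINTUL"  # ...continues through the rest of the alphabet

def recitations_needed(word: str) -> int:
    """Count how many letters the partner must recite to spell a word."""
    total = 0
    for letter in word.upper():
        # Letters are read in frequency order until the writer blinks.
        total += FREQUENCY_ORDER.index(letter) + 1
    return total

print(recitations_needed("SALUT"))  # 2 + 3 + 9 + 8 + 7 = 29 recitations
---------------------------------------------------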

---------------------------------------------------
     When Ceri Braum accepted the gift of eternal life, this was not quite what she had in mind. Sure, she knew that her brain would be removed from her body and kept alive in a vat. She also knew her only connection with the outside world would be via a camera, a microphone, and a speaker. But at the time, living forever like this seemed like a pretty good deal, especially compared to living for not much longer in her second, deteriorating body.
​     In retrospect, however, perhaps she had been convinced too easily that she was just her brain. When her first body had given out, surgeons had taken out her brain and put it into the body of someone whose own brain had failed. Waking up in the new body, she had no doubt that she was still the same person, Ceri Braum. And since it was only her brain that remained from her old self, it also seemed safe to conclude that she was, therefore, essentially her brain.
     But life just as a brain strikes Ceri as extremely impoverished. How she longs for the fleshiness of a more complete existence. Nevertheless, since it is her, Ceri, now having these thoughts and doubts, is she nonetheless right to conclude that she is, in essence, nothing more or less than her brain?


Source: Chapter 3 of The View From Nowhere by Thomas Nagel (1986)

Baggini, J., The Pig That Wants to Be Eaten, 2005, p. 112.
---------------------------------------------------


While I think it's clear from Ceri's present angst that she was once "something more than her brain," and our deep satisfaction from any number of sensory pleasures makes this obvious as well, the question this thought experiment is really getting at is that of personal identity, which I've already covered quite well.

First off, as I said in my Response to Thought Experiment 2: Beam Me Up: "When considering any of these issues, it's important philosophically to start by pointing out that the question of a soul or some other immaterial part of a person is entirely discounted. I firmly believe this is the correct view of the self though as there is no evidence to the contrary, so I'm happy to skip over that concern."

Next, it has come up in a few other thought experiments, but I first mentioned in my essay on John Locke how personality and identity lie at the crossroads of the Mind x Body intersection. However, as an evolutionary philosopher, I only see evidence for physicalist explanations of the mind, which places it squarely in the brains of our bodies. Our memories and sense of self remain unchanged when we lose a limb, donate a kidney, replace our hearts, or lose our vision, while strokes, brain tumors, or other head trauma have profound effects on "who we are."

I explored this in depth in my Response to Thought Experiment 30: Memories are Made of This, where I wrote: "This is now the second thought experiment inspired by Derek Parfit, who has been highly influential among contemporary philosophers on the subject of personal identity. Parfit is a reductionist, 'believing that since there is no adequate criterion of personal identity, people do not exist apart from their components.' In a late 1990's documentary on Channel 4 called Brainspotting, Parfit described four traditional theories of what components might constitute the self: the body, the brain, memories, or a soul. (You can see Parfit discuss this in two 10-minute clips here: Part 1, Part 2.) As an evolutionary philosopher looking at the evidence in nature...none of the four traditional components tell the story of identity, [so] what are we left with? The way Parfit explains it in Brainspotting, he sees personal identity rather like David Hume saw the definition of a nation. A nation is generally considered to be a group of people living on a portion of land, but it's not just "those people" nor just "that land". However, it's also not something over and above that either—the nation is not some permanent immaterial entity, it's just an ever-changing definition. To Parfit, the individual self can be regarded the same way. It is the totality of a set of perceptions within a body (which includes a brain); it is not just the body or just the perceptions. Problems arise when we mistakenly try to insist on one permanent definition of a single self. We are beings who change over time and our identities do as well."

This changing of identities was explored thoroughly in my Response to Thought Experiment 11: The Ship of Theseus, when I said: "The universe and everything in it are always changing in almost infinitesimally continuous ways. We've developed the branch of mathematics called calculus to help describe these tiny changes, but it would be incredibly difficult to keep track of reality this way by calling everything x, then x1, then x2, then x3, etc. on into infinity. It's much easier for our brains and our languages to just call something x and treat this x as a concrete thing even though it actually has very fuzzy borders at the edges. ... This may sound like a silly example concerning an imaginary object of little importance, but I for one will try to remember it the next time I meet my friend called "Jane" or "Joe" or "Mary" or "Mike". They've changed since the last time I've seen them, and Jane724 might have something more to teach me than Jane723 did. And then I can become Edxxxx....."

So in this present thought experiment, Baggini is playing with this notion of how our identities change over time with changes to our bodies or experiences. There's no question Ceri has changed, and since her body did not go on without her brain, we are left with her brain as the last location of her identity. What is interesting to me about this thought experiment is to consider the way Baggini has peeled away pieces of Ceri one at a time, so we could wonder just how far he could take this before Ceri was "gone."

There is a concept I failed to cover in my essay on David Hume that addresses this well, and that is Hume's idea of bundle theory. (Bundle theory was discussed, though, in the silly Three Minute Philosophy video on Hume that I shared.) According to bundle theory, "an object consists of its properties and nothing more: thus neither can there be an object without properties nor can one even conceive of such an object; for example, bundle theory claims that thinking of an apple compels one also to think of its color, its shape, the fact that it is a kind of fruit, its cells, its taste, or at least one other of its properties. Thus, the theory asserts that the apple is no more than the collection of its properties." The clarity that bundle theory brings to this thought experiment comes when you imagine taking away all the properties of an object one by one until all of them are gone. Once that is done, according to Hume, nothing of the object is left. And so it is in this case, where our personal identity is a bundle of our purely physical body parts plus our mental parts that reside in our physical brains. Take them away one at a time, and "we" are still there in some capacity, but in a way that is understood to be diminished. In this case, Ceri 2 < Ceri 1. If the properties were taken away in a different order, say she suffered a stroke and her decimated brain was replaced with another working brain, then Ceri's body (Ceri 3) would still have life, but it would only be a very diminished sense of Ceri that was still around. In other words, Ceri 3 < Ceri 2 < Ceri 1. It is not until every property of her life has gone that we say Ceri has disappeared. But once all those bundled properties are removed, there is nothing left: no insubstantial, permanent soul. So, to answer Ceri's question, we are quite a bit more and less than our brains, but unless you want to get into cultural survival (best saved for another time), that does not extend to anything beyond our bodies.
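
To put that bundle-theory intuition in concrete terms, here is a trivial Python sketch of an "object" as nothing but its collection of properties; the property names are just examples for illustration, not anything from Hume or Baggini.

---------------------------------------------------
# Bundle theory in miniature: an "object" is only its bundle of properties.
# The property names are arbitrary examples for illustration.
ceri = {"body", "brain", "memories", "perceptions"}

while ceri:
    removed = ceri.pop()  # strip one property at a time
    print(f"removed {removed}; what remains: {ceri or 'nothing at all'}")

# Once the last property is gone, there is no leftover "Ceri" underneath --
# no further ingredient beyond the bundle itself.
---------------------------------------------------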


Thought Experiment 38: I Am a Brain

1/18/2016

[Image caption: Not coming to a store near you any time soon.]
I'm back! After a few weeks living in my brain while I finished the latest draft of my novel, followed by a week running my body ragged in rural Scotland, I'm pleased to see that this week's thought experiment is the perfect exploration of that natural dichotomy we fall into when speaking about ourselves. Let's use our eyes to take a look. (Or use your ears to listen if you have a computer read to you.)

---------------------------------------------------
     When Ceri Braum accepted the gift of eternal life, this was not quite what she had in mind. Sure, she knew that her brain would be removed from her body and kept alive in a vat. She also knew her only connection with the outside world would be via a camera, a microphone, and a speaker. But at the time, living forever like this seemed like a pretty good deal, especially compared to living for not much longer in her second, deteriorating body.
​     In retrospect, however, perhaps she had been convinced too easily that she was just her brain. When her first body had given out, surgeons had taken out her brain and put it into the body of someone whose own brain had failed. Waking up in the new body, she had no doubt that she was still the same person, Ceri Braum. And since it was only her brain that remained from her old self, it also seemed safe to conclude that she was, therefore, essentially her brain.
     But life just as a brain strikes Ceri as extremely impoverished. How she longs for the fleshiness of a more complete existence. Nevertheless, since it is her, Ceri, now having these thoughts and doubts, is she nonetheless right to conclude that she is, in essence, nothing more or less than her brain?


Source: Chapter 3 of The View From Nowhere by Thomas Nagel (1986)

Baggini, J., The Pig That Wants to Be Eaten, 2005, p. 112.

---------------------------------------------------

What do you think? Is there a single component that makes you, you? Is there something that, when all else is removed, can still be identified with your identity? I'll think about this some more and get back to you on Friday.

