
Response to Thought Experiment 97: Moral Luck

6/16/2017


 
[Image caption: Come ooonnn, good. Daddy needs a new get out of jail card.]
Before I introduce this week's thought experiment, I thought it would be helpful to consider the common Japanese saying, "All is Saiou's Horse." It's a saying that's based on an ancient Chinese parable about...

"...an old man whose horse runs away. His neighbors agree this is very bad, and Saiou says to wait and see. The next day the horse comes back with another horse. This is good, everyone agrees. Saiou says wait and see. His son, riding the new horse, falls and breaks his leg. This is bad, everyone agrees. Saiou says to wait and see. The army comes through forcibly conscripting young men, but does not take Saiou's son because of his broken leg."

The meaning of this is that "the ways of heaven are inscrutable; fortune is unpredictable and changeable." This may seem obvious when presented in this way, but if that's really true, then how can consequentialist moral philosophers ever decide they have waited long enough and considered widely enough to know for certain whether an action is "good" or "bad"? In fact, they probably cannot. What then to make of the following thought experiment?

--------------------------------------------------
     Mette looked into the eyes of her estranged husband, but could find no flicker of remorse.
     "You tell me you want us back," she said to him. "But how can we do that when you won't even admit that you did the wrong thing when you left me and the children?"
     "Because in my heart I don't think I did wrong, and I don't want to lie to you," explained Paul. "I left because I needed to get away to follow my muse. I went in the name of art. Don't you remember when we used to talk about Gauguin and how he had to do the same? You always said he had done a hard thing, but not a wrong one."
     "But you are no Gauguin," sighed Mette. "That's why you're back. You admit you failed."
     "Did Gauguin know he would succeed when he left his wife? No one can know such a thing. If he was in the right, then so was I."
     "No," said Mette. "His gamble paid off, and so he turned out to be right. Yours didn't, and so you turned out to be wrong."
     "His gamble?" replied Paul. "Are you saying luck can make the difference between right and wrong?"
     Mette thought for a few moments. "Yes. I suppose I am."

Source: The eponymous essay from Moral Luck by Bernard Williams, 1981.

Baggini, J., The Pig That Wants to Be Eaten, 2005, p. 289.
---------------------------------------------------

I suspect that Gauguin probably had more reasons than Paul did to expect that his bet on himself to become a successful artist would pay off, but we really don't know that for a fact. Either way, wouldn't Paul and Mette have discussed this when Paul first made his decision to leave the family? If not, they certainly should have. And if they did, then Paul presumably did the best that he could under the circumstances he was in. Otherwise, Mette wouldn't even be talking to him now. If such simple conversations at the moment of leaving could have solved this issue right from the get-go, why is this thought experiment considered a problem at all?

As stated in the Stanford Encyclopedia's introduction to the problem of Moral Luck, Bernard Williams said, “when I first introduced the expression moral luck, I expected to suggest an oxymoron." Traditionally, people are committed to the general principle that we can only be judged morally on factors that are under our control. But in countless particular cases, "we morally assess agents for things that depend on factors that are not in their control. And making the situation still more problematic is the fact that a very natural line of reasoning suggests that it is impossible to morally assess anyone for anything if we adhere to [this principle of control]."

To help illustrate this further, consider another common example that is often used to discuss the extent of this problem of moral luck—that of a traffic accident. Imagine the following scenario:

"There are two people driving cars, Driver A, and Driver B. They are alike in every way. Driver A is driving down a road, and, in a moment of inattention, runs a red light as a child is crossing the street. Driver A slams the brakes, swerves, in short does everything to try to avoid hitting the child – alas, the car hits and kills the child. Driver B, in the meantime, also runs a red light but, since no one is crossing, gets a traffic ticket but nothing more. ... The only disparity is that in the case of Driver A, an external uncontrollable event occurred, whereas it did not in the case of Driver B."

So Driver A gets caught and punished severely, while Driver B faces nothing more than a fine. Is that fair? We might say that the circumstances can't be known to be exactly the same since Driver B might still have managed to avoid the child, but we could rule this quibble out at least theoretically for the purposes of thought experiments. The point, however, is not that all drivers like Driver B should be rounded up and charged with manslaughter—we just couldn't prove they really deserved it—but that the traffic accident involves a much different kind of luck than the one illustrated by Gauguin, Paul, and Mette. In fact, the philosopher Thomas Nagel identified four kinds of moral luck in a response to Williams' original paper:


  1. Resultant moral luck (consequential) - we can't know the future (e.g. Paul and Mette, or Saiou's horse)
  2. Circumstantial moral luck - our wider environments are out of our control (e.g. the traffic accident)
  3. Constitutive moral luck - who we are (nature x nurture) is out of our control
  4. Causal moral luck - free will determinists would say everything is out of our control

Given all this uncertainty in the world, there really ought to be much less moralising judgment. Don't you think? That, to me, is the point of this thought experiment. Heck, even in the case of the traffic accident, how can we really know that the child who was killed wasn't on his way to becoming Hitler? If that were found to be the case, would Driver A eventually be praised by historians? Movies about what a time traveller ought to do would certainly suggest so.

This all takes the fundamental epistemological uncertainty I've written about — i.e. that knowledge can only ever be justified, beliefs, that are surviving — and shows how it is relevant to everyday decisions about ethics and justice. Since knowledge is probabilistic, it stands to reason that moral outcomes are susceptible to luck. And so, for ethics, no principles of consequentialism can ever provide a clear enough picture on their own to justify moral judgment. As I said in response to wonderful comments from reader Disagreeable Me on Monday's blog post, a larger picture must also be considered. I noted that...

...in previous posts, I've said the three main camps of moral philosophy are roughly concerned with the three tenses of time—past, present, and future. Virtue ethics is concerned about your prior intentions. Deontological rules govern in-the-moment actions. Consequentialism judges the future results. Of course, in real life, we take a total perspective of the whole and can easily recognise when someone had A) the best intentions, followed the rules, and things ended badly; or B) selfish intentions, broke a few rules, but things turned out well; or some other combination of the three. ... I would say that deontology is concerned with the *action*, not the intention leading up to it, nor the consequences after it, so while those actions may now be in the past, the deontological judgment is concerned with the momentary action in the present tense in which it happened. Confining virtue ethics to the past tense is trickier because virtuous intentions do lead to an action, but I think that moral school of thought is more concerned with what is going on in the mind of the actor *prior to* an action (how virtuous are they trying to be) rather than the virtue of one action in isolation. For example, if someone cowered in fear about doing the right thing for weeks and months but then in the moment ended up acting courageously, I think virtue ethics would judge them less virtuous than someone who courageously prepared for a right action all along.

Turning to how this plays out in considerations of justice, I say again that the four main categories of punishment must be reconsidered. As I have already written on this topic, the various means of punishment should be doled out as necessary and appropriate in an escalating order of: restoration, rehabilitation, and finally incapacitation as a last resort. The focus of these punishments is the education of the criminal and the deterrence of future offenses by the populace. Seeking retribution gives way to short-term emotions of vengeance that were useful in nature before the public good of justice was provided for by the state. Now, however, the emotions of the victim of a crime must not be allowed to override the use of reason to create justice and stability for the long term.

Of these four categories, only retribution is aggravated when we ignore the role that moral luck plays, but as you can see, I think the motive for retribution should already be ignored.

Finally, to reiterate another fundamental position of my evolutionary philosophy, these judgments of "good" or "bad" virtues/rules/consequences are not based on some cosmic, supernatural thing that exists separately from us. Before life came into the universe, it made no sense to consider the concept of good or bad. We can only answer whether something is good or bad once we consider the question "for whom?" And that requires a "who" to actually survive and exist. Recognising this means that there is no ultimate or externally derived definition of good or bad. Good or bad can only be judged for "us." In my published paper on morality, I explained how "us" must logically be widened as far as possible to include considerations for "life in general over evolutionary timelines," but the trick, of course, is then how to balance the competing needs of various subsets of "us"—the self vs. others; family vs. family; nation vs. nation; species vs. species; current vs. future generations. Inevitably, however, the ultimate consequence of morally good actions must be the survival of life in general. Without life, the question of good or bad goes away entirely. We may not ever really know if our actions are leading towards this universally fundamental goal, but we can do our best with the knowledge we have and be ready to forgive or punish appropriately the trials and errors that have gone astray along the way.

​All our actions are susceptible to the luck of "genes x environment," but now that your environment has been exposed to the idea of what is morally good for the survival of life, you really ought to consider it and act accordingly. Sorry, but that's just the luck of our draw.

#sorrynotsorry
45 Comments
Disagreeable Me
6/23/2017 04:25:17 pm

Hi Ed,

Meant to reply sooner. Here we go!

> then how can consequentialist moral philosophers ever decide they have waited long enough and considered widely enough to know for certain whether an action is "good" or "bad"?

I don't think they can. But then I dispute that this is the point of consequentialism. I think the point of consequentialism is to provide a working definition of what it means for something to be good or bad, not a procedure for determining with infallible certainty whether a given action is good or bad.

> We might say that the circumstances can't be known to be exactly the same since Driver B might still have managed to avoid the child, but we could rule this quibble out at least theoretically for the purposes of thought experiments.

I think this quibble is the main point. The only way we can ever really prove that somebody was acting recklessly is if there are unfortunate consequences. In the abstract, we can say that the two people made equally poor decisions from a moral standpoint, but as a matter of practice, it makes sense to punish more severely recklessness which has negative consequences, because these are the specific cases which are proven to have negative consequences. We can really say this person was not in control, whereas if there had been no accident we cannot.

Ach, I don't know. If a true consequentialist would really say it's perfectly acceptable and moral to be needlessly reckless as long as you don't actually hurt anyone, then I guess I'm not a true consequentialist. But I wonder if that might be a straw man.

> If that were found to be the case, would Driver A eventually be praised by historians?

I wouldn't praise him anyway, because whatever the consequences of his actions turned out to be, he certainly wasn't behaving in a manner consistent with striving to achieve good consequences, which is what a consequentialist ought to praise (in my view).

I agree with you on your analysis of the reasons for justice and your rejection of retribution.

I agree with you that there is no cosmic source of morality outside ourselves.

I have not yet read your paper, but I do not agree with the conclusion as you present it here. I do not see any logical imperative to expand "us" at all, not even beyond "I". You're seeking a logical justification for morality and I don't think there is one. I'm moral because I want to be, the same reason I drink, breathe and do other things I'm wired to want to do. These things do not need logical justifications. They're part of what drives us as social animals.

So, for example, my view is that psychopaths who have no interest in the well-being of others and no desire to be moral for the sake of morality are not making any sort of logical mistake and may be perfectly rational. They are just wired differently and are lacking a drive that (presumably) you and I happen to have. It's best that we incentivise them to behave but we should not delude ourselves that there is some sort of logical argument we could construct that would convince them of the error of their ways.

Ed Gibney
6/29/2017 02:22:56 pm

Thanks D.Me! I've been away on holiday so sorry for the slow reply. It was not because I wasn't excited for this discussion. Here goes!

-> I think the point of consequentialism is to provide a working definition of what it means for something to be good or bad, not a procedure for determining with infallible certainty whether a given action is good or bad.

I think you are emphasising *working* here in opposition to infallible, but I'm curious to know what that working definition of good or bad is.

-> ..."but we could rule this quibble out at least theoretically for the purposes of thought experiments." I think this quibble is the main point.

I agree in the real world. Later in my post I said this was why we couldn't actually convict people of manslaughter just for driving recklessly. With my quibble, I was just recognising that you *can* play by a different set of *theoretical* rules in thought experiments if you insist upon it.

-> Ach, I don't know. If a true consequentialist would really say it's perfectly acceptable and moral to be needlessly reckless as long as you don't actually hurt anyone, then I guess I'm not a true consequentialist. But I wonder if that might be a straw man.

Ha! Funny introduction of doubt. Thanks for leaving that in. More on this next...

-> I wouldn't praise him anyway, because whatever the consequences of his actions turned out to be, he certainly wasn't behaving in a manner consistent with striving to achieve good consequences, which is what a consequentialist ought to praise (in my view).

To me, this sounds like you aren't *strictly* a consequentialist. I think you are saying that the driver's striving with "virtuous intentions" matters, which introduces an element of virtue theory into your judgments. As I've said before, I think this is right - we can and should judge someone's 1) intentions, 2) actions, and 3) consequences each on their own in order to arrive at a comprehensive picture.

-> I have not yet read your paper, but I do not agree with the conclusion as you present it here. I do not see any logical imperative to expand "us" at all, not even beyond "I".

Very briefly and slightly amended, I noted in my paper that Peter Singer expanded the circle of moral concern by noting that any one person's moral preferences cannot logically be any more important than another person's. This is how he arrives at a policy of effective altruism towards other people and later animal liberation for sentient creatures. I disagree with his conclusions about what our moral stances must be, but I find his method for arriving at who deserves moral considerations to be a good start.

-> You're seeking a logical justification for morality and I don't think there is one. I'm moral because I want to be, the same reason I drink, breathe and do other things I'm wired to want to do. These things do not need logical justifications. They're part of what drives us as social animals.

Drives us as social animals to do....what? The ultimate answer to that is my logical justification for morality. We must survive in order for moral rules to survive. Moral rules are only "correct" and "good" if they themselves actually work to survive. That's the ultimate consequence that must be met before any other consequence can be considered.

You said you're moral because "you want to be." That's a good start. Hume said we cannot derive oughts from is...unless we insert a *want*. Reasons are the slave of the passions. E.g. 1) There is a train to Berlin. 2) I want to go to Berlin. 3) I ought to buy a ticket for that train. You cannot say 3) without knowing 2) (and ignoring other means of transportation). If we want to derive fundamental and universal *oughts* of morality, therefore, then we need to discover fundamental and universal *wants*. As I say in my paper, any *want* for a small group can ultimately prove to be selfish and immoral. Do I want survival (or well-being) for Ed? All Gibneys? Irish relatives? Humans? Mammals? Currently living organisms? 10 generations of life? No. None of those are big enough *wants* to be sure that the oughts derived from them will last and be "correct" and "good." The ultimate *want*, therefore, is for "life in general to survive over evolutionary timeframes." As long as something can want that goal, morality can continue. If that goal is lost, morality is lost. Therefore we have a logical basis for morality.

-> So, for example, my view is that psychopaths who have no interest in the well-being of others and no desire to be moral for the sake of morality are not making any sort of logical mistake and may be perfectly rational. They are just wired differently and are lacking a drive that (presumably) you and I happen to have. It's best that we incentivise them to behave but we should not delude ourselves that there is some sort of logical argument we could construct that would convince them of the error of their ways.

So obviously I disagree with th

Ed Gibney
6/29/2017 02:24:13 pm

....cut off by my own site!....

So obviously I disagree with this. (Me disagreeable!) I believe it is easy to argue that we all want life to continue. As such, society works out moral rules through trial and error to discover the best path towards this goal. (I'm not saying all humans have consciously gotten this yet. I'm saying this is the logically universal rule for morality that we are only now growing to understand.) Your psychopath may not have had the nature x nurture in his life to strongly feel such wide and extensive empathic concerns on his own, but even if he (aren't they always he's?) only selfishly wants to succeed in society, he must learn to play by its rules. There are many tradeoffs to negotiate on the way towards this ultimate goal, but that's precisely why morality is difficult.

I hope that makes sense! My paper will expand on this more if you like the sound of any of this. Thanks again for taking the time to be involved.

Disagreeable Me
6/30/2017 09:55:23 am

Hi Ed,

> but I'm curious to know what that working definition of good or bad is.

Well, it depends on context.

If I'm trying to decide what to do, then I ought to do what I believe is good, and so I need to know what "good" means. What is good in this context is the choice which brings about good consequences (in the utilitarian sense). I can only guess which choice this is. I make the choice that seems good to me.

If I'm trying to judge whether a person is good, I judge them based on whether they intend to bring about good consequences. Whether they actually do bring about good consequences or not is immaterial to whether that is a good person. But actually, I'm not all that interested in judging people. This is not what a moral framework is for, in my view. Personally I'm really only interested in having a working framework on which to judge moral questions, e.g. whether it is right to do X or to do Y.

I call myself a consequentialist (actually, I prefer utilitarian, because the word "consequentialist" is more evocative of the position I don't hold -- and I'm not sure anyone does -- which you criticise) rather than a virtue ethicist because I recognise only one overriding fundamental virtue, which is the intention to bring about good consequences or increase utility. Usually virtue ethicists recognise many virtues such as courage, humility, generosity, justness, etc. To me these are secondary. We can derive them from my version of consequentialism/utilitarianism as virtues that tend to promote utility, but they are not fundamental.

Another difference between me and a virtue ethicist is that a virtue ethicist typically approaches a question of "What should I do?" by thinking about what a virtuous person would do. That's not at all how I approach these questions. I think only about the consequences.

Well, in theory, anyway. In practice, in day to day matters of no great import (e.g. "Ought I buy her a present for this occasion?"), I probably muddle through on ordinary human social instinct like anybody else. Practiced and cultivated virtue ethics is probably better for this day to day decision making than either gut instinct or overanalysing everything on consequentialist grounds. I'm thinking more in terms of big questions like whether euthanasia is morally permissible or not. I guess I'm saying that virtue ethics is a good framework for the micro-scale and consequentialism for the macro-scale.

> Peter Singer expanded the circle of moral concern by noting that any one person's moral preferences cannot logically be any more important than another person's

I would say that moral preferences are of no objective importance whatsoever, that importance is always entirely subjective. As such, it is making a mistake to even consider the question of whether one person's moral preferences are more important than another's. All that matters is whose moral preferences are more important to a given agent, and of course the answer is that agent's. So when a psychopath privileges their own preferences, they are not making any sort of error.

> Drives us as social animals to do....what?

To be nice, basically. I'm nice to other people because it makes me feel good. I don't steal from or hurt other people because it makes me feel guilty, and I don't like feeling guilty.

> We must survive in order for moral rules to survive.

We've evolved our moral instincts because these promote survival. The same reason we've evolved our sexual appetites etc. But none of this has any logical force. I am no more making a logical mistake if I choose to be immoral than I am if I choose to abstain from sex or if I am attracted to people of the same sex.

Evolution has given us our drives. Everything we do is in service of those drives. But if someone is wired a little differently for whatever reason, it is in my view incorrect to try to argue that their wiring is mistaken or that behaviour in service of these unorthodox drives is logically misguided.

> Irish relatives?

I'm Irish too, BTW :) A Cork man, specifically.

> I believe it is easy to argue that we all want life to continue.

Hmm, I don't know. Life for everyone? I imagine there are some people who only care about themselves. They care that other people continue to exist only insofar as those people are necessary for their own comfortable survival. Such people will not be worried by the deaths of millions as long as it leaves them unaffected. They will also not care one jot about what happens after their deaths, so they are not likely to care about long-term environmental issues. Such people may be immoral, but they are not irrational or logically mistaken in my view.

> but even if he (aren't they always he's?) only selfishly wants to succeed in society, he must learn to play by its rules.

Sure, that's why I said what I said about incentivising them to behave nicely. But if a psychopath believes that he can get away with some heinous crime for prof

Disagreeable Me
6/30/2017 09:56:19 am

...

Sure, that's why I said what I said about incentivising them to behave nicely. But if a psychopath believes that he can get away with some heinous crime for profit, then it is rational and logical for him to commit the crime. We prevent this by making it difficult for psychopaths to commit such crimes in the first place and then to make it difficult to get away with crimes (and making this difficulty obvious), not by presenting them with logical arguments purporting to show their mistake.

Ed Gibney
6/30/2017 11:30:43 am

I feel we might be talking slightly past one another because you think I'm saying something I'm not. In my paper, I make sure to say:

"To reiterate, there is no supernatural force that dictates anything *must* follow rules for survival, but this blind and unsympathetic arbiter of the selection process within our universe means that this *is* the ultimate judge of all actions."

So I agree that your selfish, immoral psychopath is not making a logical mistake about what he *must* do (there are no dictates for that), but I'm saying he's making a mistake about what he *ought* to do. His selfish, immoral code is one that leads to extinction, so...logically...it's not something we ought to follow. Can we who want life to continue agree to that?

This is the central point that I don't want to get lost, so I'll refrain from commenting on the rest, other than to say I'm actually an American mutt who's 1/4 Irish, 1/4 German, 1/4 Polish, and 1/4 Cuban. But Gibney is an Irish name so I happened to choose that nationality for my example. I live near Newcastle, England now though and am actually planning a camping trip to Ireland next month. Perhaps this would all go down better over a Guinness! : )

Disagreeable Me
6/30/2017 04:34:57 pm

Hi Ed,

Sorry if you feel I'm missing your point, but given your clarification I am none the wiser -- your clarification is consistent with my previous understanding and it seems to me that my answers were on point.

I agree with you and understand that you are not claiming any kind of supernatural basis for morality, and I also understand that you seek to provide a basis for 'ought' in the process of evolution.

My view however is that this basis is ultimately arbitrary and other equally arbitrary foundations for 'ought' are just as valid. What I'm saying is that you have no justification for any claim that your account of 'ought' is better than anyone else's, including that of the psychopath who has decided that for him 'ought' means looking out for himself alone. The only thing that your definition has going for it, in my view, is that your 'ought' hews a little closer to common usage than the psychopath's.

> but I'm saying he's making a mistake about what he *ought* to do.

If you define 'ought' as you do, then he is choosing to do not what he 'ought' -- but I wouldn't say this is a mistake of any kind. I also wouldn't say that you can uniquely justify your account of 'ought'.

> His selfish, immoral code is one that leads to extinction, so...logically...it's not something we ought to follow.

Well, logically, as long as you define 'ought' as behaviour that would lead to extinction if everybody followed it.

But if that's what you want to say then aren't you saying one 'ought' not behave homosexually or choose to be childless? I would not agree with this definition of 'ought'.

> Can we who want life to continue agree to that?

We might for the purposes of conversation agree amongst ourselves on a definition of 'ought' that we will stick to. And then we can agree that the psychopath ought not behave cruelly. But this is backed up I would say not by any sort of logical argument, but by a convention we have both agreed to as people with roughly aligned moral intuitions.

But not perfectly aligned. I'm not for instance all that convinced that it is morally desirable to seek to preserve life on earth. You'd be wiping out a whole lot of suffering if earth suddenly exploded. Survival isn't everything. That said I would not choose to blow up the earth if I could -- I'm not so arrogant as to believe I have the right answer on that and in any case I'm not sure what I even think the right thing to do would be. This is one of those cases where utilitarianism breaks down and there isn't really a right answer, perhaps. But blowing up the earth is clearly the very worst thing one could do from your point of view. So your 'ought' is not the same as my 'ought' after all, and my point is that you have no way of showing that you are right. All we have are our individual preferences. There is no way of sorting out which preferences are correct.

I'm actually based in Aberdeen, although I will also be in Ireland towards the end of next month.

Ed Gibney
6/30/2017 06:17:20 pm

-> I'm not for instance all that convinced that it is morally desirable to seek to preserve life on earth. You'd be wiping out a whole lot of suffering if earth suddenly exploded.

Yikes! If that's the position one has to take (or something similar like the "Voluntary Human Extinction Movement") in order to be in opposition to my position, then I think the choice is starkly clear. Again, I allow that you *can* hold that position, but that's a moral ought that will go extinct, rendering it wrong as far as all the survival impulses of life are concerned. I say that morals are the rules we uncover for emotional responses that guide behaviour, and so a universe without life has no morals in it. While we are alive, looking for the right moral rules, we have to preserve life for that quest to make any sense.

-> But if that's what you want to say then aren't you saying one 'ought' not behave homesexually or choose to be childless?

Of course not. Those behaviours are only "unhelpful" for the survival of one individual's particular genetic sequences. Within the wider context of a species or ecosystem—especially within our own species governed by gene-culture coevolution—homosexuality or childlessness can be, and has been, perfectly acceptable and even commendable behaviour.

Your question shows there's still some misunderstanding here, so I hope that's clearer. It's much clearer to me than this:

-> If I'm trying to decide what to do, then I ought to do what I believe is good, and so I need to know what "good" means. What is good in this context is the choice which brings about good consequences (in the utilitarian sense).

I find that completely circular, which is what I wrote in my entry to Sam Harris' Moral Landscape Challenge because he tries to define "good" by equating it with "well-being"—a synonym—which I think is what is generally meant by "the utilitarian sense." Also, you can't have well-being without survival, which makes survival paramount.

Ed Gibney
6/30/2017 06:58:52 pm

Also, I think it should be made clear that the "blow up the Earth" scenario is the only alternative to my basis for morality that "life in general ought to continue." Any other smaller priority (my life, American life, human life) can be shown in some instance to be selfish and lead to the "blow up the Earth" scenario. I explained this in a thought experiment in a cover story for Humanist magazine.

https://thehumanist.com/magazine/march-april-2016/features/human-humanism-isnt-enough

Disagreeable Me
6/30/2017 07:21:00 pm

Hi Ed,

> Yikes! If that's the position one has to take ..

To be clear, I'm not advocating the destruction of the Earth. But how I would analyse the question is not at all how you would, and so our accounts of 'ought' are not the same, and so we are not in fact speaking the same language.

While we might in practice both agree we do not want Earth to be destroyed, I would probably prefer the destruction of the Earth to a scenario where every conscious living thing is surviving but suffering greatly, and you would presumably not. For me, well-being is fundamental to 'ought'. For you, survival is fundamental.

I think the only way one can succeed in showing an account of 'ought' to be flawed is to demonstrate an inconsistency there, either internally or an inconsistency with the professed views of the advocate. I find your account of ought to be internally consistent but it seems to me to conflict with your professed views.

As I expected, you are a liberal on LGBTQ issues (as am I). You say my question shows there is some misunderstanding, but I don't think so. I never took you for a homophobe, but I think your acceptance of homosexuality/abstinence is inconsistent with your analysis of psychopathy. Just as the species can survive with a significant fraction of non-reproducing individuals, so can it survive with a significant fraction of selfish individuals.

Have to go, more later.


Disagreeable Me
6/30/2017 08:20:45 pm

... to continue

> I find that completely circular,

And I have some sympathy with this. This is not all that unlike the criticism in my own entry to the Moral Landscape Challenge. I agree with Harris up until the point that he claims that his framework is objective and that science can tell us what is right and what is wrong.

The first problem with the Moral Landscape is the same issue I take with yours -- that it claims to be the one correct account of morality, when I don't accept that there is any such thing.

But I also pointed out something like your criticism of my approach. Harris's framework and my own both have a problem with the vagueness of what counts as well-being and how it is measured and aggregated. For example, even if we could define and measure well-being, it is not clear whether we ought to care about averages or totals, or whether a state of suffering corresponds to a negative well-being measurement such that it is better not to exist than to suffer, or whether it corresponds to a very low but positive well-being value such that it is better to suffer than not to exist.

But I can deal with these criticisms by simply acknowledging that my system does not pretend to be objective or scientific. It's loose and vague but sufficient to help me figure out whether I am pro LGBTQ rights, pro choice, pro euthanasia, pro immigration, pro whatever. I have yet to encounter a real-life issue where my intuitive notions of well-being are too vague to be helpful, and if I did I would just accept that there is no right answer -- that both choices were morally acceptable to me.

But I would say that I don't think "good" and "well-being" are entirely synonymous. They are only synonymous for utilitarians. For you, "good" is synonymous with "promoting survival". For a theistic deontologist, "good" is synonymous with "in accord with the will of God". For such a person, it might well be 'good' that infidels be tortured for eternity in hell.

> Also, you can't have well-being without survival, which makes survival paramount.

You also can't have suffering without survival, and if we assign a negative utility to suffering then the utilitarian would say it is better not to survive than to suffer, which would make utility paramount to that utilitarian.

The basic reason your account does not accord with my moral intuitions is that I don't agree that survival at any cost is good. It may well be that a global Orwellian surveillance state which practices eugenics by culling the genetically impure is the most stable arrangement for human civilisation and our continued survival, yet even if this were the case I would not think it right to bring about such a society.

It seems to me that you have started with the common consensus of liberal values and you're working backwards to try to justify it from first principles, and this is why your analysis of psychopathy is so different from your analysis of homosexuality. If you're really deriving your values rather than rationalising them, then I would expect you to hold some surprising moral views, as I do with regard to the destruction of the earth.

Disagreeable Me
6/30/2017 09:39:26 pm

Just a final note.

I'm writing under time constraints so my tone isn't quite what I wanted it to be. I don't mean to accuse you of rationalisation. I mean to say that what you are doing seems to be rationalisation -- the distinction being that I am completely open to being persuaded that you are not rationalising and that you do indeed have robust arguments to back up the values you hold. I just don't think I've seen them yet.

Also, I really need to read your papers. Apologies for having not done so yet. I applaud what you are trying to do on many fronts. You are trying to explore new philosophical territory, you are trying to establish a coherent world view, you are taking evolution seriously as an idea that deserves to inform a great deal of philosophical thought. As well as that you're a good writer and you obviously know your stuff. I just disagree with you on the specifics of this point, and being who I am I necessarily focus on the disagreements.

Ed Gibney
7/1/2017 11:07:29 am

Thank you very much for all that. I agree I haven't explained my position fully to you yet, and I relish the chance to do so. You certainly probe the correct points (and in a very civil manner) where previous evolutionary ethics have fallen down. But just as scientists have gained a much deeper understanding of principles of evolution over the last several decades (through modern synthesis and on into the extended evolutionary synthesis), so also I think evolutionary ethics can now be better supported and understood. So let me keep trying.

Given that our moral emotions have been slowly evolving for billions of years, I think it's bonkers to say shocking conclusions would be proof of concept. To me, those are proofs of being unmoored and out of touch with physical realities.

I don't see where I've been inconsistent in stances on psychopathy and homosexuality/abstinence. Let me explain the difference as I see it. LGBTQ identities and chosen childlessness are morally neutral to me. In a post-apocalyptic Earth desperately in need of repopulation, you could imagine a situation where the insistence on such stances would be considered selfish and immoral. (Just this one time for the good of humanity!) However, in the opposite case, in an apocalyptic overcrowded Earth desperately in need of population reduction, such stances would be socially considered to be good and encouraged for everyone. In either of these extreme cases, or in the present day case in between, LGBTQ and childless members of society can still contribute greatly to "the survival of life in general over evolutionary timelines," so they can still be very good. (As a personal aside, my wife and I have chosen not to have kids largely for the overpopulated Anthropocene reasons, but also because we can then devote more of ourselves to other professional goals that we think are important.) For the psychopaths, however, while they *can* be borne by the rest of society in some small number, their psychopathic actions are nothing but a drain on that society and are therefore bad. I'm not saying the biologically-leaning psychopathic people are bad, but any actions in line with those leanings are bad and they should be persuaded to be altered. Hopefully this all seems more consistent to you now.

-> I would probably prefer the destruction of the Earth to a scenario where every conscious living thing is surviving but suffering greatly, and you would presumably not. ... I don't agree that survival at any cost is good. It may well be that a global Orwellian surveillance state which practices eugenics by culling the genetically impure is the most stable arrangement for human civilisation and our continued survival, yet even if this were the case I would not think it right to bring about such a society.

I agree that a world with only suffering and no hope of even *wanting* to survive is in fact a very bad world. Sam Harris called this the worst possible world, but I also said an empty universe with no hope of happiness is worse. The key here is the hope for the future. If there really is no hope of turning things around, then suffering forever is worse than non-existence. Since, epistemologically, we can't really know that though, the ray of hope for the future would likely induce life to keep trying. Suffering is often done in the name of future gains. (We're doing it right now! : ) But pleasure can also be a road to ruin. So the idea of using pleasure and pain to calculate some kind of average, maximum, positive, or negative utility points just doesn't add up (as it were) without some greater concept at play to judge the rightness or wrongness of the suffering / pleasure. To me, the suffering or pleasure must be in service of the right goal - survival.

The dystopias of permanent suffering or Orwellian states are not good or stable in my view, because they are an extremely fragile form of survival based on exploitation and competitive domination rather than long-term cooperation. Questions about this are of course related to Parfit's repugnant conclusion and I wrote a long post about that last year showing how an evolutionary worldview can make sense of that conundrum. To add to your reading pile, the full post is here:

http://www.evphil.com/blog/response-to-thought-experiment-52-more-or-less

...but probably the key concept to add to the discussion right now is Nassim Taleb's contrast between fragility and robustness. This idea could be used to create a sort of Laffer Curve for survivability. Too little life in terms of quantity or quality or diversity is a fragile state on the left of the curve. Too much life full of overcrowded competitive misery is a fragile state on the right of the curve. My ethics looks for robust optimisation in the middle.

Note that in this state I think well-being is optimised too, but it is only reached by recognising that comfortably assured survival for life is the goal. Smaller goals than this lead to confusion, existential angst,

Ed Gibney
7/1/2017 11:08:18 am

Note that in this state I think well-being is optimised too, but it is only reached by recognising that comfortably assured survival for life is the goal. Smaller goals than this lead to confusion, existential angst, and lessened well-being. I'm not against well-being, I just keep asking another "why?" (see "5-whys" root cause analysis: https://en.wikipedia.org/wiki/5_Whys) and think "well-being" isn't quite fundamental. We need survival to have that well-being, and we will put up with some suffering in the hope of having more survival and well-being in the future.

Disagreeable Me
7/1/2017 08:02:41 pm

Hi Ed,

> Given that our moral emotions have been slowly evolving for billions of years, I think it's bonkers to say shocking conclusions would be proof of concept.

Well, I would say that if you really do have a novel and useful framework for morality, it should be possible to use it to derive novel conclusions.

If it gives you back all the values you had to start with, one possibility is that you have successfully justified your values and you somehow happened to have them all right to begin with. Of course, if all humans had the same values, then it might make sense that one could find the underlying evolutionary reason to explain them. But not all humans or cultures have the same values, so any explanation which shows that the values of you and your culture happen to be the right ones is suspect in my view.

Another more plausible possibility is that you are rationalising.

> I don't see where I've been inconsistent in stances on psychopathy and homosexuality/abstinence.

Fair enough, and you've offered a plausible enough account of the difference. Childless people can aid society in spite of or indeed because of their childlessness. The selfishness of selfish people is unlikely to benefit society all that much (although I wouldn't rule out the possibility that having some monstrously selfish individuals is actually of benefit somehow -- maybe the world needs some Gauguins and Jobses). A good counter-argument to my original point might be the analogy of the worker bee -- an individual which does not reproduce and yet which is instrumental to the survival of the society and the species.

But while I can appreciate your line of thought and consider your view tenable, I'm not sure I'm persuaded. Unlike the worker bee, I find it unlikely that evolution is selecting for homosexuality or a caste of childless career-driven individuals because of the benefits they bring to the species. My suspicion is that these are, from an evolutionary perspective, strictly maladaptive aberrations akin to cystic fibrosis or muscular dystrophy and so are being selected against, which would seem to make such behaviour immoral from the perspective of your moral framework.

For me, the reasoning you employ to justify your morality is very similar to the reasoning a homophobe might employ to declare homosexuality evil. Whether your analysis of homosexuality is right or the homophobe's is right is, from the perspective of your framework, simply an empirical question, hinging on whether sexual orientation has any significant effect on an individual's contribution to the survival of the species. Your suspicion is that homosexuals contribute just as much, and the homophobe's suspicion is that they do not. But this is not good enough for me. I would not accept that homosexuality is immoral even if it turned out the homophobe's suspicions proved correct. I could only be convinced that homosexuality were immoral if it could be shown to be causing suffering and reducing well-being. I support LGBTQ rights precisely because it is denying those rights, not homosexuality, that seems to me to decrease utility.

> but I also said an empty universe with no hope of happiness is worse.

I disagree. That's not to claim that you are incorrect, it is to say that my moral preferences differ from yours. I claim that neither of us is correct. Moral preferences are subjective, not objective, and I don't think your argument to the contrary follows.

You seem to be making an unwarranted deductive leap somewhere.

I think one problem is that you are personifying life. In your paper, you distil your argument down to the following pseudo-syllogism.

1. Life is.
2. Life wants to remain an is.
3. Therefore, life ought to act to remain alive

You provide a footnote to defend your anthropomorphising, but I still think it's problematic. You might as well say:

1. Rocks fall.
2. A falling rock wants to continue to fall.
3. A falling rock ought to act to continue falling.

Your usage of 'want' and 'ought' are acceptable within the scope of a metaphor, but they remain metaphorical. This doesn't work as the basis for an account of true moral oughts.

I think instead you are simply describing something that happens. Life is. Life adapts and so seems to overcome obstacles and persist. Life 'ought' not do anything, because life is not a moral agent, any more than a falling rock is.

More later.

Ed Gibney
7/1/2017 08:28:21 pm

Which is it? I'm saying nothing new, or I'm saying something you can't agree with?

A rock can't want. Wants are emotional reactions requiring biochemistry. Life has that, to varying degrees of agency. (Which, to a determinist, is still zero for you and me.)

Disagreeable Me
7/1/2017 09:25:43 pm

Hi Ed,

I don't agree that wants are biochemical reactions. I think an AI can have wants too. But anyway, life is an abstract concept. I can understand how one might want to identify the wants of an individual organism with certain biochemical reactions and events, but it does not make sense to me to say that "Life" has wants which are biochemical reactions.

But I am not a falling rock. Let's take another analogy.

1. Entropy reduces the amount of usable energy in the universe.
2. Entropy wants to reduce the amount of usable energy in the universe.
3. Entropy ought to act to reduce the amount of usable energy in the universe.

Just as I am a living thing, so am I an entropic system, so if I inherit the "oughts" of Life I don't see why I should not also inherit the "oughts" of Entropy, and so I ought to waste as much energy as I can.

You insist that the 'ought' of morality is the very same 'ought' as what Life ought do in order to survive, and not for instance what an entropic system 'ought' to do to reduce entropy. But this to me is somewhat arbitrary. It faces similar issues as theistic deontology, where what we 'ought' to do is identified with what the creator of the universe would say we ought to do in order to fulfil His wishes. But why ought I care what God wants? And similarly, why ought I care what Life wants?

I would say instead that there is no objective question of what an agent ought care about, there is only the objective question of what it does care about. From there, two questions follow.

1) What ought that agent do to achieve its goals?
2) What ought that agent do to behave in a manner I personally deem moral?

Both of these oughts are subjective. There is no objective ought.

> The dystopias of permanent suffering or Orwellian states are not good or stable in my view

That's an empirical question. It could be that such a state might be extremely stable. It seems to me that a regime as powerful as that in 1984 might be next to impossible to topple and could perhaps be stable indefinitely. Liberal, free societies are perhaps more chaotic and unpredictable and prone to self-destruction (we've only had such societies for a relatively short time, so we don't know yet).

So suppose your hunch is wrong and such a society is not fragile after all. Would such a society be morally desirable? If not why not?

Perhaps this is unfair. Perhaps, you might say, I should consider the unlikely possibility that such societies bring about more well-being than free societies. Don't I then have the same problem? Well, no, because if this unlikely possibility turned out to be the case, then I would indeed find such a society to be morally desirable. The only reason I don't find it to be desirable is because I don't think it would be a pleasant one to live in. I would far prefer a thousand years of human flourishing before our extinction by a bioengineered plague over a billion years of mere survival in a nightmare dystopia before being wiped out by a gamma ray burst.

> Which is it? I'm saying nothing new, or I'm saying something you can't agree with?

I can't comment on whether you're saying nothing new or not. I'm not actually particularly conversant with the literature. I am informed only by a layman's superficial overview of the literature, and a whole lot of reflection.

I will say only that you have left me with the impression that what you're saying seems slightly reminiscent of the arguments of eugenicists or certain homophobes -- by which I don't mean to attack your character or paint you as guilty by association but just to point out a weakness in that kind of argument.

But you are certainly saying something I don't agree with. I'm reluctant to say *can't* agree with, because I don't want to foreclose the possibility that you might change my mind. But right now that seems unlikely.

Ed Gibney
7/2/2017 10:36:00 am

-> I can't comment on whether you're saying nothing new or not.

Boy you really are disagreeable! You even disagree with your own arguments by backing away from them. Originally, you said you thought a novel morality required a novel conclusion. You haven't explored all of my positions to know if there are ones you disagree with (my Humanism article is a good place to find a big one, but I also have a FAQ page on my website), but I think your premise is silly anyway. When Newton "discovered" gravity, apples didn't start falling up. He just described what had been going on. I feel I am doing the same.

-> I don't agree that wants are biochemical reactions. I think an AI can have wants too.

That's a major mind-body error to me. AI has pre-programmed goals at best. The currently accepted definitions for emotions are confined to biological organisms. Invent non-biological life and then we can talk about its wants.

-> The dystopias of permanent suffering or Orwellian states are not good or stable in my view. That's an empirical question.

Yes it is. And we have the entire history of the universe to back up the principles of what works best in evolutionary systems. I listed many of these principles in my paper. Orwellian states do not fit the bill. That's not just a hunch.

-> I will say only that you have left me with the impression that what you're saying seems slightly reminiscent of the arguments of eugenicists or certain homophobes -- by which I don't mean to attack your character or paint you as guilty by association but just to point out a weakness in that kind of argument.

I'm sorry, but I really don't see the similarities here. You'd have to provide actual arguments from eugenicists and homophobes to make a point, but that's pointless because all of their arguments are flawed. Just because they imitate logical forms of argument does not taint all forms of logical argument.

-> Let's take another analogy. 1. Entropy reduces the amount of usable energy in the universe. 2. Entropy wants to reduce the amount of usable energy in the universe. 3. Entropy ought to act to reduce the amount of usable energy in the universe.

Sigh. Just like rocks, entropy isn't a living thing with wants. These analogies with flawed premises aren't helping clarify anything about my argument.

-> Just as I am a living thing, so am I an entropic system, so if I inherit the "oughts" of Life I don't see why I should not also inherit the "oughts" of Entropy, and so I ought to waste as much energy as I can.

Um, no. Life builds order out of energy (until it dies). That's one of the competing definitions for life. Your comparison doesn't work.

-> You insist that the 'ought' of morality is the very same 'ought' as what Life ought do in order to survive. But this to me is somewhat arbitrary. It faces similar issues as theistic deontology, where what we 'ought' to do is identified with what the creator of the universe would say we ought to do in order to fulfil His wishes. But why ought I care what God wants? And similarly, why ought I care what Life wants?

Because you are (presumably) a form of life (and not a chatbot). And the empirical evidence shows that life has survival instincts. This is what makes my survival wants fundamental and logically required in order for life (and morals) to exist at all. Theistic ethics have no such bases in fact. Big difference.

-> But you are certainly saying something I don't agree with. I'm reluctant to say *can't* agree with, because I don't want to foreclose the possibility that you might change my mind. But right now that seems unlikely.

Yes it does. I don't know how to convince someone of something they are convinced they do not want to believe. I would have to unmoor you first, and that's not really my mission here. Your position (as I crudely understand it) is that objective morals don't exist, but you somehow instinctively know how to muddle along and decide for yourself what is "good" and leads toward "well-being" even though those things have no definition. As far as I can tell, you've accepted that you don't need to look for an answer to the questions that your contradictory beliefs entail. I can't loosen you from that unambiguously ambiguous position. Perhaps it's best to take a break from this and let our minds work through any contradictions we would like to deal with.

Reply
Disagreeable Me
7/2/2017 10:23:13 pm

Well, you've made a number of points, to which I have answers. I feel like you've missed or misunderstood a few of my arguments. Should we drop it now or should I reply in full?

Ed Gibney link
7/2/2017 10:55:05 pm

I'm sorry. I didn't mean to be inhospitable. I felt we weren't getting any further so I thought a pause was in order, but by all means please clarify your points if I've misconstrued them.

Reply
Disagreeable Me
7/3/2017 09:16:17 am

Hi Ed,

Firstly, I just want to say something about why I get into these conversations. It's not to persuade anyone or be persuaded -- this happens very rarely in either direction, and not because people are blindly committed but because people often have very good reasons for believing what they do (at least in their minds). I have given the topic of moral realism a lot of thought, and so have you. The conclusions I have come to make the most sense to me given how my mind works, and the conclusions you have come to make the most sense to you given how your mind works. That's not to be a relativist about truth -- I think one of us has to be wrong. But it's just to acknowledge out of the gate that it is very unlikely that anybody's mind is going to change, and this should not be a cause for frustration.

What I get out of these conversations is a better understanding of other viewpoints, and a better understanding of the potential holes, real or perceived, in my arguments and views.

That being said, I would say that for me the point to end the conversation is when we each feel we understand the other's views. I'm not sure we're there yet.

A point of confusion for me is which of the following projects you deem yourself to be undertaking:

1) To explain the moral values of humans through the lens of evolution
2) To provide an objective basis for morality in evolution.

You may see yourself as doing both, and that's fair enough, but I think the two projects are distinct and need to be considered separately.

Project 1 is basically evolutionary psychology, which is a pretty problematic science in the eyes of many, due to a dearth of evidence and rigour. However, I would agree that the history of human evolution has certainly played a massive role in how we have developed our basic moral intuitions -- it's just difficult to know how exactly. One can formulate a just-so story for whatever conclusion you want, it seems to me. Survival of the fittest can justify selfishness and cruelty as well as compassion and cooperation. There are many winning strategies in the game of life, not just one. This is perhaps reflected in the fact that there have been many conflicting sets of values in different places and times. Slavery was deemed moral at one time, for instance. So it seems to me that Project 1 cannot hope to provide an objective basis for morality, because there is no one set of moral values that humans hold. This is the problem with your analogy to Newton. If you are just describing what is already going on, then you are just engaging in Project 1 and have not provided an objective basis for morality.

But if you're engaging in Project 2, and providing an objective basis for morality, this means you have worked out how to identify which sets of moral values are correct and which are incorrect. You can for instance say whether eugenics, slavery, drug use, prostitution, war, meat-eating, voluntary childlessness, homosexuality, euthanasia etc are right or wrong -- or at least define what it would take for them to be right or wrong, even if you're missing some of the required empirical information to answer the question right now. These are all issues on which different people have had different moral views. Now, my problem is that if your basis for morality just happens to justify all the particular views you happened to hold anyway, then one of three things must be true.

1) You're an intuitive moral genius
2) Your intuitions are correct by an astonishing stroke of luck
3) You are rationalising

Which is it? Or is there another possibility?

Reply
Disagreeable Me
7/3/2017 10:03:29 am

Hi Ed,

> That's a major mind-body error to me. AI has pre-programmed goals at best.

Well, I disagree, but that's another conversation. But the point here is that you already lose me when you insist that wants are biochemical in nature. I do not accept that. I'm a functionalist about intentions such as wants. From my point of view, if you're going to insist that Life "wants" to live this is perfectly analogous to saying that Entropy "wants" to reduce the amount of usable energy in the universe. I accept that you view this as an unhelpful disanalogy, but I make it to illustrate that one need not accept what you take as a foundational premise of your argument -- that Life "wants" anything any more than Entropy "wants" anything or rocks "want" anything. Life just adapts and survives. From my point of view, it doesn't want anything except in an extremely loose metaphorical sense.

> Orwellian states do not fit the bill. That's not just a hunch.

I'm reluctant to just flatly disagree, but it seems I must. I don't see that there's any evidence in evolutionary science to show that an Orwellian global superstate would not be stable. Keep in mind I'm not insisting that it would be -- I'm saying we don't know. For instance, it's not as if social insects are models of individualism and freedom. They kill and eat sick or injured members of the hive. It seems to me that a panopticon state which does horrific things to certain individuals as necessary, which severely limits freedoms and in which everything is designed to preserve the society at any cost could well be stable once established. I don't see why not. If you're not even willing to entertain that possibility for the sake of argument then it might be difficult to continue.

We tend to take it for granted that progress is inevitable, that freedoms will increase and repressive states are doomed to fail. But I don't see that. The recent election of Trump, and the recent rise of British nationalism leading to Brexit seem to me evidence that progress is not inevitable after all.

> You'd have to provide actual arguments from eugenicists and homophobes to make a point,

Well, just a quick sketch perhaps.


(My understanding of) you: We should take our morals from what evolution teaches us and what will promote survival.

Eugenicist: Survival of the fittest demands that we not keep the weak alive or we are wasting resources and weakening the gene pool. It is natural and right that the weak must be allowed to die. Only the strongest should reproduce and contribute their genes to the next generation. This is necessary to ensure the healthy survival of the human species.

Homophobe: Homosexual sex is an aberration, a perversion of the natural sex drive which has evolved to encourage human reproduction. As such, it is against natural law, against the best interests of the survival of the species and so immoral.

So, basically, the problem is that it seems to me that both the eugenicist and the homophobe are taking your advice. They are reaching different conclusions from you, but their argument starts in the same place, more or less, which suggests to me that this is a dangerous place to start.

> Life builds order out of energy (until it dies).

And in doing so increases the entropy of the universe. Necessarily so, according to the second law of thermodynamics. The order built by life is local and temporary and comes with a wider cost.

> And the empirical evidence shows that life has survival instincts.

I think we should get away from personifying life. You can instead say "living things". But in that case I would not agree that all living things have survival instincts. I don't think a dandelion has survival instincts. Rather I would say living things are adapted to survive. Now, if you want to say that human beings by and large have survival instincts, I would agree with you. But so what? We also have instincts that things can only be in one place at a time, but quantum mechanics would beg to disagree.

> Theistic ethics have no such bases in fact. Big difference.

We both agree that God does not exist. But I can at least entertain the possibility that he does for the sake of argument. And if he did, then why ought I care what he wants me to do (setting aside fear of divine retribution)? I would say, and I would hope that you would agree, that there is no compelling justification for why it is objectively immoral to displease God. Similarly, I see no compelling justification for why it might be objectively immoral to transgress against what your personified Life might want.

> but you somehow instinctively know how to muddle along and decide for yourself what is "good" and leads toward "well-being" even though those things have no definition.

The way you say this, using the word "somehow" suggests that it is mysterious or implausible that anyone might be able to make moral decisions without have something akin to your objective basis for moral

Reply
Disagreeable Me
7/3/2017 10:21:12 am

Damn, lost a bunch there...

The way you say this, using the word "somehow", suggests that it is mysterious or implausible that anyone might be able to make moral decisions without having something akin to your objective basis for morality. But I don't have an objective framework for assessing whether food is delicious a priori either. I just like what I like. I completely accept that one can provide an evolutionary explanation for why I might find this or that food attractive, but the liking is automatic, effortless. Moral preferences are often the same.

But sometimes moral preferences conflict with each other. All things being equal, I might have an instinctive preference that foetuses not be killed. All things being equal, I might have an instinctive preference that women have autonomy over their bodies. On the question of abortion, these preferences come into conflict. For me, utilitarianism/consequentialism is just a mental tool, a way of reframing the question by casting it in the light of one overriding moral preference -- the preference to promote well-being and prevent suffering. From that point of view, it seems to me that there is a clear winner, and so I am pro-choice. But I don't need to do mathematics to calculate the well-being because, as with assessing taste preferences, it is largely automatic. The suffering of a woman who is forced to carry a pregnancy to term against her will is evident. The data that shows that unwanted children are less likely to have good lives and are more likely to get involved in crime is further evidence.

> As far as I can tell, you've accepted that you don't need to look for an answer to the questions that your contradictory beliefs entail.

I am not aware of holding any contradictory beliefs, so if you think I'm content to hold contradictory beliefs you have misunderstood me or I have not explained myself very well.

I simply acknowledge that not all issues will be like abortion. Sometimes there will be no clear winner, even when all the empirical facts are in. In these cases, it just means that the mental tool of reframing questions in terms of well-being turns out not to be all that useful. In those cases, as a moral anti-realist I can legitimately shrug my shoulders and claim that there is no right answer.

Or I guess I might try approaching the question from another angle according to how I feel at the time. I am not bound to be a consistent consequentialist because I am not a moral realist and consequentialism is no more than a mental tool to help me think through moral issues. When it doesn't help, I am free to choose another tool. I describe myself as a consequentialist because I find that in practice it almost always does help.

Ed Gibney link
7/3/2017 10:38:51 am

That's a noble sentiment for why you get into these conversations, and I accept that these are your true intentions. The frustration I feel, however, is that I don't think they are fully enacted (and my tolerance for these frustrations is admittedly low due to many unfruitful repetitions with other philosophers). For example, you ended this last comment with a rather condescending set of 3 options for me to choose from after making this statement:

-> if your basis for morality just happens to justify all the particular views you happened to hold anyway, then one of three things must be true.

Who said my basis for morality justified all the particular ethical views I held before I developed this basis?? You aren't allowing at all for the possibility that my moral positions on specific problems have changed and can continue to change as the evidence comes in for what works to get us to the end goal. (I started this quest after a strict religious upbringing so I can assure you much of my thinking has changed over the years.) Instead of allowing for this, or inquiring about it, you've made a giant assumption and tried to corner me into choosing something ridiculous. Such assumptions do not come from a place of curiosity, so I do not feel you are thoroughly "seeking first to understand." Your previous arguments based on misunderstandings of me, or continuing to go back to the "but we can all just die" argument in one way or another, also leave me longing to work on other things.

You have started this last comment with a great question though so I should answer it.

-> A point of confusion for me is which of the following projects you deem yourself to be undertaking: 1) To explain the moral values of humans through the lens of evolution 2) To provide an objective basis for morality in evolution. You may see yourself as doing both, and that's fair enough, but I think the two projects are distinct and need to be considered separately.

I would say that by discovering (2), I can examine history and see how the moral values of humans have grown and changed over time and should continue to do so into the future. The flexibility we humans have inherited for determining our goals, and the very long time it takes for the machinations of evolution to work themselves out, mean that yes, many moral rules from many different subcultures in many different times have existed and can be explained in their moment using just-so stories of evolutionary influences. I too see this relativistic problem with evolutionary psychologists who seek just to explain what is rather than what ought to be. I would therefore modify your option #1 to say I try to: "analyse the moral values humans have held during their evolution, and then judge them using the meta-principles of what we see best survives over the long term through evolutionary processes." As it says at the top of every page on this website: "Contemplating the past. Choosing the destination."

Let me finish by sincerely thanking you for pushing difficult questions like this. It forces me to explain things more thoroughly in every direction that's necessary. I started this website because I had a sketch of my beliefs in mind and I wanted to develop it as comprehensively as possible. I keep notes on all these interactions with the intention of producing a revised manifesto, as it were, that I hope to write before I die. It's a big project that, just like life, needs selective pressures in order to be shaped into something that will actually live on. So thanks again for providing such pressure.

Reply
Disagreeable Me
7/3/2017 11:26:47 am

Hi Ed,

> Who said my basis for morality justified all the particular ethical views I held before I developed this basis?

Well, great. It seems we have a simple misunderstanding to sort out.

I never meant to tell you that this was the case. I said that if this were the case, then there are three options. If this is not the case, then you should be able to show how your framework helped you change your mind on something.

This is what I was getting at when I said:

"If you're really deriving your values rather than rationalising them, then I would expect you to hold some surprising moral views, as I do with regard to the destruction of the earth."

By "surprising" I mean, not what you might have come to naturally before you started thinking about this stuff from the point of view of your current framework, as with my surprising (to me) realisation that I don't necessarily view the destruction of the earth as a morally bad event.

But so far you haven't provided any such example, and so far all your values seem to conform with standard liberal values such as you might be expected to hold without any particular framework guiding them. Which is why, so far, you seem to me to be rationalising pre-existing values. I'm not telling you you haven't changed your mind, I'm asking for examples of how you have changed your mind so as to dispel the illusion that you are just rationalising the values you already had or the values that it is socially acceptable to have.

The more surprising the example, the more persuasive. If for example your change in value just tracks with changes in what is socially acceptable over time (e.g. simply moving from transphobia to trans-acceptance), then it is less compelling. The problem now is that rather than a mysterious coincidence in the alignment between your original values and what comes out of your framework, we might instead find a mysterious coincidence between your derived values and the values that are socially acceptable today in the environment in which you happen to find yourself. So the ideal counter-example to dispel any illusion that you are merely rationalising would be a moral view that is not commonly held in your particular social circles (but perhaps might be common in other milieus).

The absence of any satisfactory example does not prove that you are rationalising, but it does seem to me to leave us with the three choices I previously identified. I leave open the possibility that there is another choice I have missed, just as I leave open the possibility that you can provide a satisfactory example of a "surprising" change of view informed by your evolutionary framework.

Reply
Ed Gibney link
7/3/2017 11:40:36 am

We're repeating ourselves and I still think your premise is silly. I've said my views change as the evidence rolls in. Society's progressive views may change in the same way. However, I've pointed you to my article When the Human Isn't Enough for Humanism to show an example of where I've tried to use a thought experiment to change beliefs held by a large segment of "my people" based on the moral fundamentals I now hold. I encourage you to read the whole article for context, but here's the most relevant passage for your inquiry:

"Imagine a sci-fi scenario where humans discover a new substance that boosts our intelligence and productivity and gives us feelings of tremendous happiness without any known side effects. For twenty years, pills containing this substance work so well that eventually every single human in the world has taken it and we’re 100 percent sure that it will be passed down through our genes to all our future children. Then, tragically, scientists discover that this substance degrades after thirty years in our bodies, whereupon it dissolves all carbon molecule bonds it comes into contact with. Let loose in the world, the substance would destroy all life as we know it.

In this thought experiment, the chance of humanity surviving is zero. We’ve screwed up big time, but the survival of life still hangs in the balance. So what’s the “good” thing to do? Would we just soak up the last of our physical pleasures, or would we feel compelled to act for the survival of “life in general” by jettisoning our contaminated bodies into space? I strongly believe we’d do the latter. Our moral concern for our extended kin would compel us to protect them. Once our existential survival was taken away, only lesser concerns like economic, religious, or hedonistic desires would remain. But this scenario makes it clear that these are secondary to the existential survival concerns of the rest of life. If we were to insist upon the destruction of the rest of life for the sake of, let’s say, one more day of human fun, then that would be the very definition of selfish, destructive, and therefore immoral behavior. I believe humans would be better than that, and I surely think we ought to be, although we’re not acting that way now while we still think we can all survive the status quo."

Ed Gibney link
7/3/2017 12:11:43 pm

Quickly for the rest of your latest comments:

-> “That's a major mind-body error to me. AI has pre-programmed goals at best. Well, I disagree, but that's another conversation. But the point here is that you already lose me when you insist that wants are biochemical in nature. I do not accept that.

That’s fine. I lose a lot of people along the way. The evolutionary view extends to all arguments for me, though, and I think it’s clear from that view that wants are currently biochemical.

-> For instance, it's not as if social insects are models of individualism and freedom.

And they aren’t creatures with our levels of consciousness and free will for treachery or cooperation. Any comparisons therefore don’t hold.

-> We tend to take it for granted that progress is inevitable, that freedoms will increase and repressive states are doomed to fail. But I don't see that.

You’re right; progress is not inevitable. That’s why we need the guidance of good philosophy.

-> (My understanding of) you: We should take our morals from what evolution teaches us and what will promote survival. Eugenicist: Survival of the fittest demands that we not keep the weak alive or we are wasting resources and weakening the gene pool. It is natural and right that the weak must be allowed to die. Only the strongest should reproduce and contribute their genes to the next generation. This is necessary to ensure the healthy survival of the human species. Homophobe: Homosexual sex is an aberration, a perversion of the natural sex drive which has evolved to encourage human reproduction.

The eugenicist and homophobe are using crude and incorrect understandings of evolution, ignoring the wider roles of diversity and cooperation that we now know are required for species such as ours to handle fluctuations in the environment over the long haul. As I said, previous attempts at evolutionary ethics have failed.

-> The order built by life is local and temporary and comes with a wider cost.

I don’t know what that wider cost is, but the temporary time seems to me to be the entire age of the history of life, which is a pretty significant project.


-> I think we should get away from personifying life. You can instead say "living things".

I’m not opposed to that. But it’s cumbersome and lacks rhetorical punch to spell out every single time that individual living things are all striving to live but are all connected to one another through time and ecological balancing. I think we get it when I say “life.”

-> I don't think a dandelion has survival instincts. Rather I would say living things are adapted to survive. Now, if you want to say that human beings by and large have survival instincts, I would agree with you.

These survival instincts are the same thing along a continuum. I don’t think they magically turn on in one species and not another.

-> We both agree that God does not exist. But I can at least entertain the possibility that he does for the sake of argument.

And all those arguments can remain in the realm of fantasy too.

-> Similarly, I see no compelling justification for why it might be objectively immoral to transgress against what your personified Life might want.

Back to the “we can all just die” argument. That’s the alternative. I find it compelling to fight that.

-> Damn, lost a bunch there...

Sorry! I hate that too about Weebly. There should be a warning. I’ve suggested it to them.

-> The way you say this, using the word "somehow" suggests that it is mysterious or implausible that anyone might be able to make moral decisions without have something akin to your objective basis for morality.

No, I think it is like a duck that flies without knowing the laws of gravity. Moral instincts may not understand the ultimate goal of morality either.

-> But sometimes moral preferences conflict with each other.

Agreed, and I discuss this at length elsewhere. This is the flexibility we are required to have in order to value ourselves sometimes, families sometimes, species sometimes, the future sometimes, etc., etc. How do we resolve these conflicts? By knowing the greatest goal.

-> I am not aware of holding any contradictory beliefs, so if you think I'm content to hold contradictory beliefs you have misunderstood me or I have not explained myself very well.

Maybe I don’t understand you, but I still haven’t heard a definition for “good” or “well-being” even though you say you act instinctively towards those ideas yet say they don’t really exist. I find that contradictory, but maybe I’m just confused because I’m not trying to fully understand your worldview.

-> Or I guess I might try approaching the question from another angle according to how I feel at the time. I am not bound to be a consistent consequentialist because I am not a moral

Reply
Ed Gibney link
7/3/2017 12:12:26 pm

-> Or I guess I might try approaching the question from another angle according to how I feel at the time. I am not bound to be a consistent consequentialist because I am not a moral realist and consequentialism is no more than a mental tool to help me think through moral issues. When it doesn't help, I am free to choose another tool. I describe myself as a consequentialist because I find that in practice it almost always does help.

Sounds to me much more like moral relativism and nihilism, but you don’t want to go that far. I don’t have the foggiest idea how you actually think through moral problems, other than to say maybe it’s like the way a duck flies.

Reply
Disagreeable Me
7/3/2017 03:21:42 pm

Hi Ed,

I don't think we're repeating ourselves as long as we're making progress towards understanding each other, and I think we are, because I think we're still correcting misunderstandings as we go. In my last offering, I corrected a misunderstanding of yours (that I was assuming you had no surprising views), and in this I shall acknowledge that you corrected some misunderstandings of mine, as well as correcting some of yours again (e.g. that I am no different from a nihilist or relativist).

One misunderstanding I had is that you had not offered examples of surprising views, when you had. This is the view that humanity might and ought to self-annihilate if it were doomed anyway and this self-annihilation were necessary to preserve life on earth. I think this is sufficiently surprising to show that you are not merely rationalising. However, I confess I still feel that arguments for morality from evolution could go either way on issues such as homosexuality or eugenics, which is one reason I have qualms about arguments for morality from evolution. That doesn't rule them out, but it advises caution.

I have now read the article on humanism, and while I still have reservations about your argument that we have a moral duty to other forms of life as such, I would agree with a lot of what you said, particularly with respect to how we ought to care for nature out of self-interest if nothing else, and with respect to the fact that we have a moral duty to (certain) animals. For me, this is because those animals are capable of experiencing well-being and suffering to some extent, and not because they are other living things with a right to survive.

Another misunderstanding you have corrected is that you mean "Life" as a poetic synonym for living things, and not as a personification of life itself. So when you are saying that life wants to survive, you mean that individual living things want to survive and not that the whole aggregated system of life on earth wants to survive. I was confused on this point which is why I was asking whether you could refer to living things instead.

This is also why I was confused about why I should care what "Life" might want and why I made the analogy to the personification of a similar abstract concept "Entropy".

This particular misunderstanding might explain a lot of our talking past each other. If I understand you better now, you're not so much arguing that we should act to preserve life on earth, but that we must respect the right to life of other living things, including future generations of living things. These might seem to be equivalent, but they feel a little different to me. If one were merely concerned about Life, one might be happy enough even if all multicellular life were wiped out as long as bacteria (and so Life) continued to persist. But I suspect that you are more concerned that all the current forms of life and perhaps new ones continue to exist, that we respect and support to the extent that we can the rights of currently living individuals and their descendants to survive and reproduce.

On the other hand I'm not sure how you reconcile the right to life of a cow, (or even a lettuce), with your right to live by eating it. You mention this issue but I don't think you satisfactorily deal with it. I agree that we don't have the choice not to destroy other living things by our actions as we live, but we do have the choice to cease to live and so avoid responsibility for the destruction of other living things. I guess that this could come back to your argument that humans need to exist in order to protect the planet from asteroids etc.

If I understand you correctly, I suspect you might hesitate before killing spiders or insects simply because they annoy you. I, on the other hand, have no such qualms, because I think they are too simple to merit moral consideration, being (I suspect) incapable of significant well-being or suffering. I'm much less sanguine about cruelty to more cognitively sophisticated animals such as mammals.

Reply
Disagreeable Me
7/3/2017 03:22:24 pm

> These survival instincts are the same thing along a continuum.

Agreed, to a point. But electromagnetic waves are along a continuum too. That doesn't mean that gamma rays are radio waves. The adaptations of a dandelion are not instincts in my view -- I would reserve that word for adaptations on a different part of that continuum. If you have enough of a quantitative difference it can amount to a qualitative difference, and in my view there is a qualitative difference between the intentions of a conscious mind and those of a plant.

> Maybe I don’t understand you, but I still haven’t heard a definition for “good” or “well-being” even though you say you act instinctively towards those ideas yet say they don’t really exist.

Indeed I am reluctant to define them, because I do not agree that they need to be defined. I don't think it is any kind of contradiction to react instinctively to concepts which are not readily defined. As I said before, we can determine what tastes good just by instinctively reacting to the taste. We don't need to define criteria for tastiness. The only definition we need for tastiness is "taste preference". Similarly, the only definition we need for "good" is "moral preference" and the only definition we need for "well-being" is "quality of life preference", i.e. a state of being that is preferred.

The only thing is that moral preferences can be confusing and contradictory, and so I prefer to analyse moral problems with respect to one overriding moral preference -- that for "well-being". That is, I look at whether a choice tends to leave us with people (and other conscious creatures) who are in a preferred state of being or not.

Reply
Disagreeable Me
7/3/2017 03:23:04 pm

> Sounds to me much more like moral relativism and nihilism, but you don’t want to go that far.

It depends what you mean exactly. It certainly shares quite a bit with those positions, but I can distinguish myself as follows.

I take a nihilist to be someone who just throws out moral questions entirely, calls moral questions meaningless and says nothing matters, anything is permissible. I don't do that, because I care about morality and I want to be good, and I want other people to be good (where "good" just refers to my moral preferences). To me a nihilist is like someone who says that since there is no objective standard of taste, then it doesn't matter whether we eat disgusting food or pleasant food, so we might as well all eat horrible gunk as long as it is adequately nutritious. Well, I'm not like that. I agree there is no objective standard of taste, but I care about what tastes nice to me and I want to eat tasty food.

I take a moral relativist to be someone who says that since there is no objective standard of morality, then it is incorrect to judge the actions of other people by our own personal moral standards. This is not me, because while I don't take my moral standards to have any objective basis, they are necessarily the standards by which I judge other people and cultures. If I feel my preferences strongly enough, and I have the means, then I will even impose them on others. For instance if I can act to save a child from FGM, then I will, even though FGM may be morally required in her culture. I don't judge the question from the standards of her culture, but from my own standards.

You might then ask me what's to stop someone from another culture similarly imposing their personal moral preferences on me? Nothing! This is after all what Islamic State is doing. What determines whose views win out on the ground is not whose views are objectively correct, but who has the means to back them up. Might is right in this sense if no other.

That's not to say that rationality doesn't enter into it -- we can always attempt to persuade those with the means to come around to our way of thinking. But if we do so by appealing to an objective morality, I think we are deceiving both ourselves and others. I think the only logically sound kind of moral persuasion is the kind that depends on showing inconsistencies between moral claims, either with each other or with empirical evidence. If somebody just fundamentally, irreducibly believes it is morally required to kill and eat as many babies as possible for its own sake, then I think there's no justifiable argument that can show him to be wrong. Sure, you can use fallacious appeals to objective morality if that works. Or use force if it doesn't.

Reply
Ed Gibney link
7/4/2017 02:22:17 pm

Thanks D.Me. It does seem like we’ve rounded a corner and made some progress. I really have to thank you for diligently reading my other publications and seriously considering my views very, very honestly. After laying out all my beliefs here and then going through 97 (and counting) thought experiments to show how those beliefs play out in different scenarios, I realize I have a LOT of back catalogue to get through and that’s also why I’m sensitive to repeating myself. I already repeated myself to myself! : )

I think I have just a few (near final?) comments.

-> I confess I still feel that arguments for morality from evolution could go either way on issues such as homosexuality or eugenics, which is one reason I have qualms about arguments for morality from evolution. That doesn't rule them out, but it advises caution.

Clearly the history in these particular subjects shows caution is wise. In fact, the uncertainty of the future and the long timeline needed for evolution mean caution will always be wise. That said, I think the history of repressive societies doing poorly in comparison to enlightened ones has already shown that condemning homosexuality or promoting eugenics are failing strategies. Given the choice, people vote with their feet for what I think are the better groups. The rise in understanding of “the evolution of cooperation” (e.g. Axelrod’s book of that title is only from 1984) is enough for me to think arguments for morality from evolution are ready to move forward. I definitely acknowledge, however, that the third rail created by poor evolutionary thinking in the past is what has opened up a gap in academia on these topics, which is why I find myself standing here fairly alone. There’s no way “evolutionary philosophy” should be a new term, but when I googled it for my website in 2012 I found nothing related.

-> Another misunderstanding you have corrected is that you mean "Life" as a poetic synonym for living things, and not as a personification of life itself.

Yeah, that’s probably a good way to characterize it. This simple terminology gets my main point read quickly, but then I get into trouble with immediate misunderstandings. I think if people read me in total though, they find that I’m not saying the simplistic woo-thing they thought I was. I’m still struggling to find the best medium for the message. As a non-academic, I have an odd history with that published peer-reviewed paper, and it’s not necessarily the best one, but it’s out there, I’m proud of it, and I live with it, although it’s not perfect by any means. The best thing it has done is open some doors for credibility with places that otherwise could have easily filed me and my website under the “just another crank on the internet” pile.

-> If I understand you better now, you're not so much arguing that we should act to preserve life on earth, but that we must respect the right to life of other living things, including future generations of living things.

I don’t like to use the word “rights” since those don’t exist other than according to legal conventions. But I’ll let you continue.

-> These might seem to be equivalent, but they feel a little different to me. If one were merely concerned about Life, one might be happy enough even if all multicellular life were wiped out as long as bacteria (and so Life) continued to persist. But I suspect that you are more concerned that all the current forms of life and perhaps new ones continue to exist, that we respect and support to the extent that we can the rights of currently living individuals and their descendants to survive and reproduce.

Yeah, my thinking on this is balanced by the point you noticed elsewhere: the project of life needs some species to progress to the point of getting life off one planet or protecting this one from asteroid strikes. Choosing the way forward is clearly in the realm of theory only at this point, though, without data from other planets.

-> On the other hand I'm not sure how you reconcile the right to life of a cow, (or even a lettuce), with your right to live by eating it. You mention this issue but I don't think you satisfactorily deal with it.

I purposefully ducked it here because it’s a big topic with lots to go through. Too much for these comments alone too, but here are three snapshots:

1) Personally, I was a vegetarian for 10 years, but now I try to be a ConOm, a conscientious omnivore. All living things do get eaten at some point by something, so to be a part of life, one has to accept (for now, until the Star Trek replicators exist) that we have to eat other forms of life. So that’s the way I try to live my values.

2) In the online comments to my Humanist article, someone asked about their small-scale farm with a few pigs running wild until they get old and slaughtere

Reply
Ed Gibney link
7/4/2017 02:23:07 pm

2) In the online comments to my Humanist article, someone asked about their small-scale farm with a few pigs running wild until they get old and slaughtered. I responded thusly: “I purposefully didn't wade into the arguments about the ethical consumption of meat, but I agree with you that it is possible, and the description of your own small-scale meat production sounds about as ideal as it gets. I happen to be a big fan of farmers like Joel Salatin (even though he does it somewhat for Christian reasons) who try to run holistic farms that treat soil and life with respect. When I talked about "...the systemic abuse humans inflict on other animals....[and]...the widespread disregard for other forms of life", I was definitely thinking about the industrial agriculture / large feeder operations you also disapprove. Lions might set up gazelle farms with tiny confining pens some day, but I won't condemn that until they can. In the meantime, I do recognise the "nasty, brutish, and short" nature of life in the wild, and I do consider that when weighing alternative options. I just don't want to use those horrors of nature as excuses for extending them to billions of caged pigs and chickens, or feed lot cows, etc.”

3) In the comments to my blog post on thought experiment 84, I responded to a question about a world with predation thusly: “This conversation would eventually lead to a talk about predation and that's an interesting one to have. From the perspective of life in general, you could say that predation *has* happened and *is* necessary, but not that it *must* happen and *ought* to happen. Predation is currently necessary because herbivore vegetarians don't consciously keep their numbers in check. Could life ever evolve naturally or be engineered by us to remove the need for predation? Theoretically, yes. Would that be preferable to a world with predation? From a standpoint of reduced suffering, probably yes. Are we anywhere near understanding ecosystems and self-control to make that happen? Absolutely not. Are there morally better choices we can make right now about vegetarianism, veganism, animal welfare, re-wilding, etc. to make a world more filled with cooperative robust survival? Yes. All of which I love to talk about, because morality is hard and we need better guides to understand it.”

So, yeah, living things have a lot of moral issues to work out before the Earth becomes an optimized living environment. I don’t claim to have all those answers definitively worked out.

-> If I understand you correctly, I suspect you might hesitate before killing spiders or insects simply because they annoy you. I on the other hand, have no such qualms, because I think they are too simple to merit moral consideration, being (I suspect) incapable of significant well-being or suffering. I'm much less sanguine about cruelty to more cognitively sophisticated animals such as mammals.

I’m mostly with you on this. I find subtle moral differences between animals (especially insects) who make their living as parasites, have very simple nervous systems, and bear offspring using quantity over quality (“r/K reproductive strategies”). So I do try to shoo spiders outside the house, but I’ll swat any mosquito I can detect. Garden snails get chucked for distance, but I don’t spray my cabbages with pesticide. It must be said that my wife is a Green Criminologist (she studies crimes against the environment such as trafficking in endangered species or corporate pollution, etc.) whose moral instincts are stricter than mine on animal issues, so that moves my behavioral needle too.

-> The adaptations of a dandelion are not instincts in my view -- I would reserve that word for adaptations on a different part of that continuum. If you have enough of a quantitative difference it can amount to a qualitative difference, and in my view there is a qualitative difference between the intentions of a conscious mind and those of a plant.

I understand this. The word “instincts” isn’t central to my moralizing. Given that free will and determinism are so hotly debated, though, I don’t find the distinctions all that important. Do we *really* have instincts? That depends. But I think the rest of my arguments work whether you are a determinist or a compatibilist or anything else as long as you recognize that living things act in ways that try to keep themselves and/or their genes alive.

-> I don't think it is any kind of contradiction to react instinctively to concepts which are not readily defined. As I said before, we can determine what tastes good just by instinctively reacting to the taste. We don't need to define criteria for tastiness.

Ah, but I think those criteria are there, somewhere to be discovered in our evolutionary history. All effects have causes. Tasty food generally gives us calories without poisoning us. (This system can of course be tricke

Reply
Ed Gibney link
7/4/2017 02:23:50 pm

Ah, but I think those criteria are there, somewhere to be discovered in our evolutionary history. All effects have causes. Tasty food generally gives us calories without poisoning us. (This system can of course be tricked by some “devious” plants.)

-> Similarly, the only definition we need for "good" is "moral preference" and the only definition we need for "well-being" is "quality of life preference", i.e. a state of being that is preferred.

But why is that state of being preferred? Why? Why? Why? It all leads to prolonged survival of something in my view. If that’s a prolonged existence of my heavenly soul, then that’s a mistaken end goal based on fantasy. If that’s a prolonged existence of life on Earth, well then I think we’re getting somewhere based on reality.

-> I take a moral relativist to be someone who says that since there is no objective standard of morality, then it is incorrect to judge the actions of other people by our own personal moral standards. This is not me, because while I don't take my moral standards to have any objective basis, they are necessarily the standards by which I judge other people and cultures. If I feel my preferences strongly enough, and I have the means, then I will even impose them on others.

I don’t know how you ever feel something strongly enough with that stated reasoning underlying your moral subjectivity. If you really think they’re just yours, how do you justifiably work up any gumption to judge others? (I hypothesize that instincts for life and “the good life” that leads to more life is what drives these moral urges, since you say you don’t believe in any gods.)

-> Might is right in this sense if no other.

Certainly might can beat right into extinction, which is why it’s important to spread the word, make progress, and defend what is right. I’m with you on the need for that.

Reply
Disagreeable Me
7/5/2017 10:09:21 am

Hi Ed,

OK, a lot of misunderstandings have been clarified, but there are still some points on which I'm confused or on which your argument does not seem to hang together all that well.

My main bone of contention as you know is with your conclusion that there is an objective basis for morality. You seem to present at least a couple of different arguments for this, but each argument seems to be missing a premise.

Argument 1 (rephrased a little to reflect how I understand it):

1. Living things want to survive
2. (Given that they want to survive, living things ought to act to survive)
3. No one living thing's wants are any more important objectively than those of any other
4. Therefore all moral agents have an objective moral duty to balance the wants of all living things as far as possible
5. Therefore all moral agents have an objective moral duty to promote the survival of living things as far as possible

This argument doesn't really hang together for me for a couple of reasons. Though you seem to want to emphasise it, I don't think that the second point has much to do with the conclusion, and this is why I place it in parentheses. I can agree with you that on a certain view, living things ought to breathe and eat and seek a mate and so on (although to me this is an instrumental ought rather than a moral ought), but I don't see how this point relates to the conclusion that moral agents ought to help other living things to survive.

Another reason the argument doesn't work for me is that it seems you're missing a premise, namely that a moral agent has an objective moral duty to balance the wants of entities according to their objective importance. I would instead say that an agent (instrumental-)ought to do whatever is required to achieve its goals, even if those goals include the annihilation of all life. It seems to me that all that matters to any agent is the subjective importance of its own goals, and this includes relatively selfless, moral people, who simply assign a great subjective importance to the goal of helping others. Objective importance doesn't enter into it (unless it seems subjectively important to you, of course).

Another argument you present is that if living things did not survive, there would be no morality, which means that any morality which does not lead to the survival of living things is self-defeating. You conclude that the goal of morality is to ensure the survival of living things.

I could turn that around by saying that about anything. I could say that any evil which does not lead to the survival of living things is self-defeating, and so the goal of evil is to ensure the survival of living things. Or the goal of origami is to ensure the survival of living things. I don't think either of those are correct.

Not to mention that there exist endeavours which are intrinsically self-defeating, such as the endeavour to eradicate smallpox. The success of that endeavour is marked by that endeavour ceasing to exist. So it doesn't seem right to me that any coherent endeavour or field or human activity has to be self-propagating. You could perhaps shore up the argument a little by adding the premise that morality must have as one of its ultimate goals the propagation of morality, but then I would simply reject that premise.

Not to mention that it rather underdetermines what morality is about in practice. As long as there are humans, there will be morality, so all this gives us is that we ought to care about the long-term survival of humans (and I guess other moral agents), not about the survival of anything else. We would only care about the survival of other things instrumentally insofar as they aid in our own survival. But we don't depend for our survival on whales or elephants or rhinos, say, so this argument suggests no moral duty to preserve them, whereas I think you would say that we do have such a duty.

The next thing I wanted to discuss is that on a couple of occasions you seem to suggest that I forget or deny the evolutionary origins of our desires.

> Ah, but I think those criteria are there, somewhere to be discovered in our evolutionary history. All effects have causes. Tasty food generally gives us calories without poisoning us.

> But why is that state of being preferred? Why? Why? Why? It all leads to prolonged survival of something in my view.

> I hypothesize that instincts for life and “the good life” that leads to more life is what drives these moral urges

But I never deny, nor forget, that my instincts and drives are a product of evolution. That's not to say that they are necessarily adaptive -- what evolution produces can be subverted and pointed in maladaptive directions. That's part of the variation required for evolution to work in the first place.

Again, I would suggest that homosexuality is maladaptive, and so it would not be quite correct to say that a gay man prefers to have sex with other men because this leads to prolonged survi

Reply
Disagreeable Me
7/5/2017 10:09:51 am


Again, I would suggest that homosexuality is maladaptive, and so it would not be quite correct to say that a gay man prefers to have sex with other men because this leads to prolonged survival (of the species). Rather the existence of a sex drive in the first place is adaptive. But this just goes to show that it's a bit too simplistic to say that preferences are what they are because of evolution, and also why it is problematic to identify what is good with what is adaptive or selected for by evolution.

In particular, I make the analogy to taste only to say that I don't need to have objective criteria for well-being consciously mapped out in order to recognise it or to prefer one scenario to another. I don't deny that there are criteria embedded in my brain somewhere even though I may not be able to articulate them. But I suspect that these criteria are a little different in every brain, just as taste in food or sexual preference varies.

So, though I acknowledge the evolutionary origin of these drives and preferences, I resist the attempt to found objective morality on them, because these preferences vary hugely and because I do not agree with any attempt to equate morality with what is adaptive. It is adaptive for lions to kill the cubs of rivals on deposing them. It is likely adaptive for monarchs and drug dealers to do likewise for much the same reasons, but I wouldn't say it is moral.

> how do you justifiably work up any gumption to judge others

Why do I have to justify it? What does that give me? I judge others and myself by my standards of morality -- those are the only standards by which I can make moral judgements, just as the only standards by which I can make aesthetic judgements are by my standards of beauty. As long as I have the means to enforce my will, justification is useless to me. Power is all that ultimately matters when it comes down to determining which moral view wins out. All justification gives me is a method of persuading others to support me and so make my view more powerful, and to do this job it doesn't need to be coherent or correct it just needs to be persuasive enough to work.

Let me distinguish between two accounts of justification. Ultimate or objective justification purports to ground some view or action in an objective framework such that to disagree or oppose it would be to make a mistake of some kind. In my view, such justifications are not coherent but they may work quite well in practice at persuading others.

Another account of justification is a justification with respect to a certain underlying moral framework which is not claimed to be objective. These justifications can persuade those who accept the same underlying framework. Thus I can coherently justify my actions to myself and to others who share my moral preference for promoting well-being, but not to others who do not share that overriding moral preference. If I want to persuade other people, I have a few options. I could find an argument that reaches the same conclusion but starting from their moral preferences, I could try to change their moral preferences, I could attempt to use charisma and beautiful rhetoric to bring them around to my view with lies and sophistry and nonsense that falls apart on analysis, or I could simply use force.

Reply
Ed Gibney link
7/6/2017 03:25:39 pm

Okay, D.Me. I count 7 separate points that you’ve raised here, which makes it very difficult for me to stay focused and preserve energy for continuing to make progress. I’m going to try to address each one, but let’s see if we can’t pinch a couple of these branches off and stick to root issues for the future. I find this discussion with you quite good, but I do have a lot of other projects I need time and head space for. Here then is a summary (mostly for my benefit) of the points you raised. I’ll try to address them in an order that I think builds to my main points:

1) The search for a classically enumerated argument with no missing premises
2) Objectivity vs. subjectivity of what is important to an individual
3) Applying the need to survive to any project
4) Survival concerns may be limited to humans
5) The problems of simplistic evolutionary thinking
6) Adaptations that promote survival in the short term aren’t moral
7) The justifiability of gumption

Let’s start at the end.

7) The justifiability of gumption

-> Why do I have to justify it? What does that give me? … Let me distinguish between two accounts of justification.

I was actually talking about a third account. I’m talking about the appraisal theory of emotions from cognitive psychology. (https://en.wikipedia.org/wiki/Appraisal_theory) According to this school of thought, one only has emotional responses *after* cognitive appraisals of a situation have been made. These can be conscious or unconscious appraisals, but you cannot be scared of a spider until some part of you knows it’s there. Likewise, you can’t be morally outraged unless there’s a reason for that judgment. I don’t think you have interrogated your subconscious enough to understand what is driving your moral reactions. Those drives may or may not be *logically* justified in the philosophical senses that you were referring to, but these appraisals are justifying something *emotionally* in you (or in anyone else). If you have a personally subjective moral position - what is justifying that? Something definitely is. Whether or not it is philosophically consistent is an unanswered question.

This leads to a fundamental misunderstanding in point 2.

2) Objectivity vs. subjectivity of what is important to an individual

-> …a moral agent has a moral objective duty…to balance the wants of entities according to their objective importance…all that matters to any agent is the subjective importance of its own goals…relatively selfless, moral people, who simply assign a great subjective importance to the goal of helping others. Objective importance doesn't enter into it.

I’m sorry but I see that I haven’t made a really basic point clear to you - I’m not saying each and every moral obligation or decision can now be objectively known and measured. I’m just saying the ultimate goal of moral actions has an objective outcome. The survival of life is an objective fact, and empirical data can show whether life exists or does not. Well-being is subjective, which is why it struggles as an end goal. There’s no binary switch that flicks on and says “well-being is now lit.” We will have many disagreements about what actions lead to the most robust forms of survival, and without billions of life simulations to study, we empirically *can’t* know the answer to resolve all of those disagreements, but at the end of it all, we should agree on one huge thing - we want to see life surviving as an objective fact. I agree with you that the subjective importance for moral agents will matter along the way towards that, but our certainty about the moral correctness of those subjective yearnings will vary in proportion to our certainty of whether those subjective yearnings are leading towards or away from our objective goal.

This makes it straightforward to address points 4, 5, and 6.

4) Survival concerns may be limited to humans

-> We would only care about the survival of other things instrumentally insofar as they aid in our own survival. But we don't depend for our survival on whales or elephants or rhinos, say, so this argument suggests no moral duty to preserve them, whereas I think you would say that we do have such a duty.

I think it would be profoundly arrogant and ignorant to say we don't depend for our survival on any particular other species. We don't know that, and since we can't take extinction back, we ought to be extremely cautious about it. See for example: 1) the role of keystone species in trophic cascades (https://en.wikipedia.org/wiki/Trophic_cascade); 2) how Wolves change rivers (https://www.youtube.com/watch?v=ysa5OBhXz-Q); and 3) the planetary boundaries we are crossing, including biodiversity loss, that could exponentially tip ecosystems out of balance in a sudden enough manner that the environment would change faster than life could adapt to those changes (https://en.wikipedia.org/wiki/Planetary_boundaries).

I joked about personally swatting mosquitoes and chucking slugs, but I would hesitate strongly before genetically modifying any of those species entirely into extinction (https://is.gd/WtBtXf).

5) The problems of simplistic evolutionary thinking

-> …what evolution produces can be subverted and pointed in maladaptive directions. That's part of the variation required for evolution to work in the first place…Again, I would suggest that homosexuality is maladaptive, and so it would not be quite correct to say that a gay man prefers to have sex with other men because this leads to prolonged survival (of the species). Rather the existence of a sex drive in the first place is adaptive. But this just goes to show that it's a bit too simplistic to say that preferences are what they are because of evolution, and also why it is problematic to identify what is good with what is adaptive or selected for by evolution….I don't deny that there are criteria embedded in my brain somewhere even though I may not be able to articulate them. But I suspect that these criteria are a little different in every brain, just as taste in food or sexual preference varies.

A simplistic evolutionary thinker might say there is *a* sex drive or *a* tasty food, and such simplistic singularly-focused thinking *has* spawned poor evolutionary ethics in the past. As you say though, variation is required for evolution to work, and one of the ways it works is that it provides species with diversity so they can collectively survive fluctuations in the environment. Species that can only eat one tasty thing (I'm looking at you, pandas and koalas) struggle to survive over the long term. And the research isn't in yet, but it may be true that tendencies for homosexuality in individuals could be a response to resource stresses in the environment, therefore acting to reduce pressure on the species as a whole. Calling homosexuality maladaptive is far too focused on the individual organism.

6) Adaptations that promote survival in the short term aren’t moral

-> I do not agree with any attempt to equate morality with what is adaptive. It is adaptive for lions to kill the cubs of rivals on deposing them. It is likely adaptive for monarchs and drug dealers to do likewise for much the same reasons, but I wouldn't say it is moral.

I agree. These appear to be short-term solutions for individuals that do not lead to robust survival for groups over the long haul. Lions and drug dealers may not be “surviving” — I believe they are merely “existing” and on the way towards extinction.

Now that the principles of evolution have been better explained, we can turn to your arguments against my philosophical logic.

3) Applying the need to survive to any project

-> I could turn that around by saying that about anything. I could say that any evil which does not lead to the survival of living things is self-defeating, and so the goal of evil is to ensure the survival of living things. Or the goal of origami is to ensure the survival of living things. I don't think either of those are correct. Not to mention that there exist endeavours which are intrinsically self-defeating, such as the endeavour to eradicate smallpox. The success of that endeavour is marked by that endeavour ceasing to exist. So it doesn't seem right to me that any coherent endeavour or field or human activity has to be self-propagating.

These are all deeply flawed. No definition of evil is compatible with this goal. Origami has its own goal of producing art (or at least a craft). If you want to evaluate whether such art is good and helps humans enjoy or figure out survival, you can do that, but right now you're ignoring the difference between proximate and ultimate causes (which I noted in footnote 23 of my paper). The same holds true of smallpox eradication. That project has a proximate cause in service of an ultimate cause.

-> You could perhaps shore up the argument a little by adding the premise that morality must have as one of its ultimate goals the propagation of morality, but then I would simply reject that premise.

You’ve created a circular modification of my actual argument, but you’re either rejecting that circle or you’re back to the “we can all just die” argument, which I acknowledge exists, but still reject as plainly undesirable.

1) The search for a classically enumerated argument with no missing premises

Finally, I can see why my arguments don’t hang together for you because I don’t at all agree with your 5-point rephrasing of my ideas. I’m very skeptical of the possibility of enumerating all my systemic premises and conclusions to create a linear formal argument, but this is a grand project so maybe it can be started with something like this?

1. Life is.
1.1 The fact that living things are alive is an objective fact that can be empirically observed.
1.2 All life on Earth is related and descended from a single abiogenesis.
1.3 After abiogenesis, physical and chemical reactions somehow created biological organisms that sense the environment, respond to it, and replicate with some variation. (Note: the following video offers the best hypothesis I’ve seen on how this may have begun, but we still don’t know how biology became something different than physics and chemistry. https://www.youtube.com/watch?v=U6QYDdgP9eg)
1.4 Over time, living things differentiate into individuals, societies, and species that compete to inhabit niches, which can collectively cooperate (intentionally or unintentionally) to reinforce one another’s survival.
1. Life is.

2. Life wants to survive.
2.1 Collectively, all living things in the past, present, and future can be referred to as “life.” However, as Smith and Szathmary said in The Origins of Life, “There is no additional information concerned with regulating the system as a whole. It is therefore misleading to think of an ecosystem [or “life”] as a super-organism.”
2.2 A “want” is defined as a desire, a longing, or a craving for something.
2.3 Wants are therefore emotional urges driven by biochemical reactions in living organisms.
2.4 Wants can be conscious or unconscious, so the level or definition of consciousness in a living organism is not required to recognize what it wants.
2.5 The process of evolution ensures that living things will continue to survive if and only if their biochemical reactions to their environment collectively lead to the continued survival of at least some of their relatives (keeping in mind 1.2).
2.6 During the course of the entire history of life on Earth, living organisms have developed powerful wants to survive, because wherever it was otherwise, such life perished.
2. Life wants to survive.

3. Life ought to act to survive.
3.1 Oughts are general rules that tell one how to act.
3.2 Morals are specific rules that tell one how to “be good.”
3.3 In order for there to “be good,” living things must be. Moral oughts that lead to extinction will, by extension, go extinct over the course of evolutionary timelines.
3.4 Oughts can only be derived from “is” by using wants. (i.e. Reason is, and ought only to be the slave of the passions. --Hume)
3.5 Living things are faced with many proximate wants that are often contradictory.
3.6 A living thing can only decide correctly from among proximate wants by knowing an ultimate want for ultimate guidance.
3.7 Wants that only consider the happiness or survival of partial segments of life as paramount can be shown by some example to “be bad” (i.e. selfish and immoral) because if they were adopted as an ultimate want, they could lead towards the extinction of all living things (i.e. there would not “be good”).
3.8 No supernatural phenomenon points to any individual, society, species, or ecosystem being objectively more important than any other.
3.9 No individual, society, species, or ecosystem can logically justify having an ultimate moral goal of its own that is different from others’ ethical systems.
3.10 The survival of life in general over evolutionary timelines is the largest, most comprehensive, objective, and consilient goal that can exist for life.
3.11 Ultimate moral wants drive ultimate moral oughts.
3. Life ought to act to survive.

Reply
Disagreeable Me
7/7/2017 02:38:25 pm

Hi Ed,

Feel free to bow out or take your time in answering if you don't have time for this. Only engage in this discussion to the point that you find it valuable to understand why somebody might not immediately buy your argument. I'm enjoying thinking through your points so I'm not holding back. I realise this makes it difficult for you to reciprocate in similar detail. You can engage with points as they interest you. I'm not so much trying to win an argument as giving you feedback. I will not interpret silence on a point as a concession of that point.

> Likewise, you can’t be morally outraged unless there’s a reason for that judgment.

I would go along with appraisal theory in saying that you need to appraise a scenario in order to have a moral reaction to it, but appraising a scenario is just interpreting it and describing it to yourself. Then you react to it more or less automatically, albeit perhaps influenced by how you described it (i.e. whether you think of abortion as killing a baby or as terminating a pregnancy). The ultimate "reasons" for your reactions are buried deep in your psyche and are the product of nature and nurture. They need not be part of the conscious appraisal.

> I don’t think you have interrogated your subconscious enough to understand what is driving your moral reactions.

Again, I make the analogy to taste. Do I need to interrogate my subconscious to know that I like pizza? I'm not sure that such interrogation would be all that fruitful anyway. Liking pizza is a function of who I am, of how I am wired, as a product of my genes and my culture and my personal history. I don't need to have conscious reasons for liking pizza. I just like it. We can of course give evopsych just-so-stories for why I might like pizza (which would fail to account for why my wife does not), and I can rationalise and generate an account of why I like it in terms of its combination of flavours and texture, but I don't need such an account in order to know that I like pizza. Why should morality be any different?

I agree with you that pro-social sentiment has been adaptive and so has been selected for and this accounts for a lot of my moral reactions. There's no argument there, nor are you telling me anything I don't know. But simply having those reactions is all I need to motivate moral behaviour, and perhaps even intervening with force if necessary to stop those with opposing moral reactions. What I don't need in order to motivate me or provide me with gumption is any kind of objective justification for my reactions. Apes and monkeys have also been seen to engage in moral behaviour, seeming to have something approximating a sense of fairness or justice. I don't think they need any philosophical groundwork to justify their gumption when they choose to intervene in order to stop a bully from stealing food. Nor do I.

You then go on to argue that the benefit of your theory over utilitarianism is that the survival of life is an objective fact and well-being is not. Well, first I don't accept that the survival of life is as objective as you think, and second even if I did I don't think it would automatically mean your theory is better.

A world where only one (immortal, say) bacterium exists is a world where life has survived. As is a world like ours. So if you want to say our world is better than the world with only one bacterium, you have to quantify a measure of how much life there is somehow. So then we might want to compare Earth to a world teeming with life, but again that life is all bacteria, perhaps much more biomass than we have on earth at present. That other world would seem to have more life than Earth, whatever way you count it. More biomass, more individuals with their own wants, etc. But I think you would prefer Earth, so not only do you need to take into account the quantity of life, you need to take into account the quality of life, and then you're in similar territory to utilitarians.
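To make that concrete, here is a toy sketch of the comparison (the figures and names below are invented placeholders of my own, not real estimates): a bare "does life exist?" test cannot tell these worlds apart, a pure quantity measure prefers the bacterial world, and a preference for Earth only falls out once some quality weighting is added.

# Toy comparison of the three hypothetical worlds described above.
# All numbers are invented placeholders, not real estimates.
worlds = {
    "one immortal bacterium": {"biomass": 1e-12, "avg_quality": 0.0},
    "Earth as it is":         {"biomass": 5e11,  "avg_quality": 0.6},
    "bacteria-only planet":   {"biomass": 1e12,  "avg_quality": 0.0},
}

def life_survives(world):
    # The bare Boolean criterion: is there any life at all?
    return world["biomass"] > 0

def quantity_only(world):
    # Quantity-only criterion: more life is better, whatever kind it is.
    return world["biomass"]

def quantity_times_quality(world):
    # Adding a quality weighting puts us in utilitarian-style territory.
    return world["biomass"] * world["avg_quality"]

for name, world in worlds.items():
    print(name, life_survives(world), quantity_only(world), quantity_times_quality(world))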

I suspect you will feel the temptation to avoid the point by projecting into the future and predicting that only a world with conscious intelligent life can protect itself against asteroids etc. But I think you do have preferences independently of this issue. So let's imagine for the sake of argument that these thought experiments describe the full story from beginning to end of life on possible worlds which exists until it is wiped out by a gamma ray burst (and before the inhabitants of that world can spread to other worlds).

After all, even if your goal is to allow life to survive, that goal will surely fail eventually. Life has to come to an end at some point, even if it lasts until the heat death of the universe. You therefore cannot simplistically assess alternative scenarios based on whether life has survived to that point or not. Suppose, restricting ourselves to humans for simplicity, that there will exist ten billion more humans before humans go extinct, starting now. I don't see why it should matter whether all those humans live at the same time (so we go extinct in about 70 years) or one after the other (so we go extinct in about 700 billion years). The same number of lives were lived either way, but the two scenarios are completely different if we're looking only at whether life has survived to a certain point in time.
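As a rough back-of-the-envelope check on those figures (assuming, purely for illustration, non-overlapping lifespans of about 70 years each, versus the roughly 70 years it takes a single overlapping generation to die out):

$$10^{10}\ \text{humans} \times 70\ \text{years each} = 7 \times 10^{11}\ \text{years} \approx 700\ \text{billion years}$$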

Since life will end some day, you need not only a test of whether life has survived, but also a way to compare alternative possible future histories of life in their entirety. Mere longevity does not seem to be enough. It seems dubious that creating an immortal line of bacteria that survives until the heat death of the universe would be worth giving up even a single year of human flourishing, even if I agree that life has intrinsic value.

And that's before even getting into what counts as life in the first place. We could well end up with machine successors doing all the kinds of things that biological life does (e.g. reproducing and evolving) but without the biochemistry. Reasonable people could differ on whether the survival of these machines ought to count for your purposes -- I know you would say they do not, but as a functionalist about intentions such as wants, this seems to me a biologically chauvinist view of life and of which entities might have wants we ought to preserve. I don't think this disagreement can ever be settled objectively, and neither do I think one of us is uniquely right about what ought to count as life in this context -- I think we each simply have different ways of appraising the same state of affairs and so different preferences.

But let's say I accept your point, that we can objectively answer whether life has survived with a Boolean true/false variable. That alone doesn't mean it is superior as a theory of objective morality to utilitarianism. Vastly more important than whether it is objective and clearly defined is whether it is justified as a theory of objective morality in preference to other candidates, and to me at least utilitarianism is better justified simply because it accords far better with my intuitions than a Boolean variable which tells me only that some life form exists. All kinds of dumb objective theories of morality might be proposed -- that morality is about increasing the amount of oxygen in the atmosphere, say. That's just as objective as your theory -- in some ways it is superior because it admits of degree or quantification rather than on/off -- but it's absurd to propose it as a theory of morality because it bears almost no relation to our moral intuitions and lacks any kind of justification about why this ought to be taken seriously as a candidate for a theory of morality.

For moral realism/objective morality, you need more than a way of defining morality according to some objective metric. You need a way to show that this objective metric is objectively the right one of all the metrics that might be proposed. In fairness, you have attempted to do that elsewhere, but my point here is that whether your account of morality uses objective criteria or not is actually not all that relevant to the point of whether it works as an account of objective morality. It might be necessary for a truly objective theory of morality, but it is certainly not sufficient.

> I think it would be profoundly arrogant and ignorant to say we don’t depend for our survival on any particular other species.

I'm not insisting, I'm just supposing for the sake of argument -- although actually it does seem unlikely to me that should elephants become extinct, humans would soon follow. I am aware that species can have wider ecological roles and that extinction can have wider consequences. But it does seem profoundly implausible to me that the extinction of rhinos will mean the extinction of humanity.

But in any case it seems to be the case that the dodo and the passenger pigeon died out and we didn't. Sure, there was damage to the ecosystem, but life, and in particular humanity, goes on. So does that mean that the extinction of the dodo doesn't matter? That the extinction of other species is only to be avoided if it will affect the ability of moral agents to survive? It seems clear that we can cause the extinction of certain other species and yet continue to flourish ourselves. So what if the elephant, rhino and blue whale happen to be among them? Does that mean there would in fact be no moral imperative to preserve them? Are you just saying that we should allow them to persist just in case their extinction causes a cascade to wipe us out, and that we would have no moral imperative to preserve them otherwise? In fact, I would say there are actually species not yet extinct which we know cannot plausibly cause further disastrous trophic cascades, because their numbers have dwindled to the point that these cascades would already have happened. It would seem you should argue against any efforts to conserve those species, as we seem to be surviving just fine without them. Who knows what might happen if they start to become common again!

> I joked about personally swatting mosquitoes and chucking slugs, but I would hesitate strongly before genetically modifying any of those species entirely into extinction

Me too! Because these are common enough and widespread enough that I can entertain the possibility that wiping them out might have disastrous unforeseen consequences. Not so for some species of rhino that is so rare it's on the brink of extinction already.

> but it may be true that tendencies for homosexuality in individuals could be a response to resource stresses in the environment

Indeed it may. But it seems to me you're helping yourself to empirical possibilities that would support your view (we might need rhinos to survive) and ignoring possibilities that would undermine it (we might not need rhinos to survive). Instead of supposing that homosexuality might be an adaptive response, consider the possibility that it isn't -- that it's more like muscular dystrophy than the childlessness of worker bees. Just as you seem to make the imperative to conserve rhinos contingent on whether we need rhinos for our own survival, you seem to make the morality of homosexuality contingent on whether it is adaptive or not. I don't think that's good enough. For me, the morality of homosexuality has absolutely nothing to do with whether it is adaptive.

> Lions and drug dealers may not be “surviving” — I believe they are merely “existing” and on the way towards extinction.

And now you are helping yourself to the empirical possibility that the killing of a rival's offspring is not a viable long term evolutionary strategy, despite the fact that this kind of stuff has been going on for millions if not billions of years and there's no sign of it stopping. Your claim that cooperation and kindness and altruism and compassion are moral is contingent on the claim that these are the best evolutionary strategies, but that claim is far from obvious to me. Earlier you insisted that evolution shows that an Orwellian state could not survive long, but I don't see any evidence of that. North Korea seems pretty stable as far as I can see -- the regime does not show any signs of toppling any time soon. If it does topple, it is likely to be due to pressure from without. But if the whole Earth were under such a regime there would be no pressure from without.

My view is that there are many many many ways to survive. Just as a quadruped mammal on the African savannah might survive by being small and agile or by being huge and intimidating, by eating grass or by eating meat, by hunting alone or in packs, by killing potential rivals or by banding together with them, there are innumerable strategies by which human societies on Earth might survive indefinitely. You seem to assume too much when you suggest that the trajectory of moral progress in Western societies promotes the survival of life more than other less humane paths would.

> 3) Applying the need to survive to any project

Sorry, I don't see here a defence of the idea that morality ought to be about the survival of morality. You're just asserting that my analogies are not about the survival of those analogies (and I agree). But likewise I don't see why morality has to be about the survival of morality.

I think you're perhaps a bit quick to dismiss "we can all just die" as plainly undesirable. Obviously, very few people want all of us to die. But compared to what alternative? You might find that a lot of people prefer the scenario where humans flourish in freedom and prosperity for a thousand years before being wiped out by a gamma ray burst to the scenario where humans exist in suffering and misery for twenty thousand years before being wiped out by a gamma ray burst, and yet your framework prefers the latter scenario.

Finally, the breakdown of your argument provides more detail but it still seems to me to leap to the conclusion.

I think I would mostly take issue with your points in the "3. Life ought to act to survive" group.

> 3.6 A living thing can only decide correctly from among proximate wants by knowing an ultimate want for ultimate guidance.

I would reject the idea of a *correct* decision. We can only objectively assess correctness by your criteria if one decision leads to death and the other does not. But in many cases neither decision will lead to death, and in many cases even understanding an ultimate goal will not lead you to choose the correct decision -- it could be that the decision which seems to conform best to your actual goal will in fact defeat it.

But anyway, all this talk of knowing an ultimate goal is completely at odds with how decisions get made by organisms in the real world. A rabbit that sees a carrot out in the open wants to leave cover because it wants food but wants to remain in cover because it fears predators. It does not need to know an ultimate want in order to make a decision. Its drives compete and one of them wins out according to a very complex process which has evolved and is wired in but of which the rabbit is presumably largely unaware. This mechanism has evolved to optimise for survival, but that doesn't mean that there is a correct decision or that the rabbit's ultimate goal is actually survival -- the rabbit's actual ultimate goal I would say is just that function that balances all the rabbit's competing wants at any given moment. That may not always coincide with survival. There may be a rabbit that has no sense of fear and just irreducibly wants to be always eating carrots for as long as it lives. Given what it wants, that rabbit ought to go for the carrot whether or not it is likely to be spotted by a predator. That is the correct decision given its actual goals, even if it might be incorrect by the goals you think it ought to have.
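A toy sketch of the kind of drive-balancing being described here (the drive names and weights are invented for illustration, not a real model of rabbit cognition):

# Competing drives resolving into a decision, in the spirit of the
# rabbit-and-carrot example above. Names and strengths are invented;
# a negative strength acts as an inhibition (e.g. fear).
def choose_action(drives):
    """Pick the action whose net pull is strongest right now."""
    def net_pull(pairs):
        return sum(strength for _, strength in pairs)
    return max(drives, key=lambda action: net_pull(drives[action]))

rabbit_drives = {
    "leave cover for the carrot": [("hunger", 0.7), ("fear of predators", -0.5)],
    "stay hidden in cover":       [("fear of predators", 0.5), ("hunger", -0.2)],
}

print(choose_action(rabbit_drives))  # "stay hidden in cover" with these weights

Note that no ultimate want appears anywhere in the arbitration; the outcome is just whichever competing drive happens to be strongest at that moment.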

> 3.7 Wants that only consider the happiness or survival of partial segments of life as paramount can be shown by some example to “be bad” (i.e. selfish and immoral) because if they were adopted as an ultimate want, they could lead towards the extinction of all living things (i.e. there would not “be good”).

Helping yourself to supportive empirical possibilities again. To say it could cause extinction isn't really good enough. Selfishness could lead towards the extinction of all living things, but then selflessness could also by some route lead to the extinction of all living things. It could also be that there are strategies which are selfish but are not likely to lead towards the extinction of all living things (and I suspect this is true). For instance, capitalism is largely built on selfish motivations and communism on communal motivations, and yet it seems that capitalism is in practice better at serving the community than communism. Whether or not this is actually the case is debatable (you could argue that true communism has never been practiced anywhere), but it seems to me that it is at least possible that it is true, and if it were then it would serve as a counter-example to your point. That possibility means that your point isn't necessarily true, and so there are probably cases where it isn't.

> 3.9 No individual, society, species, or ecosystem, can logically justify having their own ultimate moral goal that is different from other’s ethical systems.

I would protest the need for logical justification of moral goals. Individuals, societies and species simply have these goals whether or not they are justified. Goals simply are. They need no more justification than coloration or the number of legs an animal might have. They are a product of evolution (and circumstance and chance) and so can to a certain extent be explained with reference to evolution, but they are not philosophical positions to be grounded in objective justification. Justifying one's ultimate goals seems to me to be a category error. You have acknowledged that you cannot bridge the is-ought divide without a want, but that's what you're doing when you try to justify the ultimate want. Wants cannot be justified except as instrumental goals with respect to some more profound want. But you're saying "I am such and such a being, and ..., therefore I ought to want X ultimately".

> 3.10 The survival of life in general over evolutionary timelines is the largest, most comprehensive, objective, and consilient goal that can exist for life.

There seems to be an equivocation here. Either life is an overriding Gaia-like superentity with its own goals, or life is an aggregation of living things each with their own goals. If you're going for the former (and you said you weren't), then you could talk of this superentity having goals. But if you're not, then I don't see why each individual can't have its own goals and I see no logical need for a shared goal. So I don't think you successfully make the jump to "an individual moral agent ought to care about the survival of life in general over evolutionary timelines".

Reply
Ed Gibney link
7/7/2017 04:42:15 pm

Just a very few quick thoughts:

We aren't sure yet that life will go extinct. The big crunch or the big freeze (or gamma ray bursts) are not assured yet. I actually wrote a bit about this and the philosopher John Messerly discussed it on his blog Reason and Meaning:

http://reasonandmeaning.com/2016/04/16/meaning-in-life-as-being-part-of-cosmic-evolution/#comment-35905

If universal extinction ever becomes a certainty, then yeah, sure, fuck it. We all ought to just party until we die.

-> Apes and monkeys have also been seen to engage in moral behaviour, seeming to have something approximating a sense of fairness or justice. I don't think they need any philosophical groundwork to justify their gumption when they choose to intervene in order to stop a bully from stealing food. Nor do I.

-> North Korea seems pretty stable as far as I can see -- the regime does not show any signs of toppling any time soon.

-> I think you're perhaps a bit quick to dismiss "we can all just die" as plainly undesirable.

-> I would protest the need for logical justification of moral goals.

These statements struck me as profoundly unphilosophical - i.e. they do not show a love of wisdom. They made me feel this is all just mental gymnastics to you, which leaves me unmotivated to continue tumbling along. I gotta go now. Maybe there will be more later. Maybe not. Cheers for the extensive feedback though and for pressing me for more logical rigour.

Reply
Disagreeable Me
7/7/2017 07:18:03 pm

Hi Ed,

I wasn't aware that you were going out so far on a limb as to hope that life might continue literally forever. If you wanted me to entertain that notion then a lot of my arguments would not apply, I agree. However I would assign a very low prior to that panning out.

You seem to have an attitude that there is no meaning or point in anything if the existence of life in the universe is finite. This strikes me as similar to the view of people who think that we require an eternal afterlife in order for our lives on Earth to have meaning. I don't think either eternity is required for meaning and hope. We can hope to find our own meaning, to live fulfilling lives in the time we have. I wouldn't agree that we ought to just party until we die. I would argue that we ought to pursue our goals whatever they might be, and the goals I have for myself include promoting well-being for myself and for others.

I would reject the charge of being unphilosophical.

I'm lost as to why you find my comment about North Korea to be unphilosophical. It seems self-evident to you that such states cannot be stable, and so rejecting that view perhaps seems unwise to you. But it isn't obvious to me and it wasn't obvious to Orwell. The horror of 1984 for me is not just how awful the society is but how little hope there seems to be of it being overthrown. It seems to have reached a state of such self-reinforcing oppression that no resistance has any chance of blossoming. In addition, it doesn't seem to be at any great risk of succumbing to famine or natural disaster. A centrally organised state can potentially achieve great feats of coordination if it wants, cf. the Soviet space program.

I can see why you might think that it is unphilosophical not to have any interest in justifying moral goals. But I'm not just uninterested. I'm making a positive philosophical claim that to do so is to make a category mistake. Indeed, I reject it precisely because I love wisdom -- I think you are unwise to attempt to do so. I said why with reference to Hume's is-ought divide. You bridge the is-ought divide only with a want, but if you want to justify that want you have to bridge the gap again, from what is the case to what you ought to want. You can do this in an iterative fashion to a point, but you have to stop when you reach your ultimate want and there are no further wants with which to build the bridge. At that point no further justification is possible.

Not sure why you have a problem with my statement regarding "we can all just die" when I justified it by considering a couple of different scenarios.

Reply
