@deaths-accountant I will, if I remember, think carefully about your thought experiment and respond to it soon (although I will probably change some details so that it is less similar to current events, because I don't want people to misunderstand the nature of the discussion and get mad at me), but, in the meantime, here is a counter-thought-experiment for you:
Suppose there is a guy Bob, currently hanging out in the heavenly realm or whatever, and he is presented by an angel with the following choice:
1. Bob will be born into the world and live an ordinary-seeming life. Over the course of his life, the net utility (under whatever form of utilitarianism you endorse: hedonic, preferential, etc.) which he contributes to everyone else in the world besides himself will be 0. In other words, the people of the world (not including him) will be neither better off nor worse off for his being born. However, he himself, under the same conception of utility, will receive -ε net utils. He will have N (for reasonably large N) utils' worth of joys, triumphs, etc., and -(N+ε) utils' worth of pains, failures, and so on. Thus, he will live a net-negative life.
2. Bob will not be born into the world; he will cease to exist.
Implicitly, I'm discounting all the thoughts and feelings that Bob experiences here in the heavenly realm before he is born (or not) as irrelevant, but if you don't feel comfortable with this, you can just adjust the numbers so that the net utility of each choice comes out as intended above.
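For concreteness, a minimal sketch of the arithmetic I intend for option (1), where N and ε are just the placeholder quantities from the setup above:

```latex
% Option (1), as specified above:
%   joys, triumphs, etc.:   +N utils
%   pains, failures, etc.:  -(N + \varepsilon) utils
U_{\text{Bob}} = N - (N + \varepsilon) = -\varepsilon < 0
\qquad\text{and}\qquad
U_{\text{everyone else}} = 0
```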
It is possible, I think, that in light of the above choice, Bob would select (2) and cease to exist. But I think it's also possible that Bob would say "no, I'll take (1), I want to have the joys and triumphs even if there turn out also to be a greater number of failures and losses". In particular, I am almost certain that I would choose (1), and not just for fear of death (the above scenario is an abstraction of choices that I have actually made, where no risk of death was involved).
The question is: would it be moral for the angel to override Bob here, "for his own good", and choose (2) for him?
By construction, a utilitarian has to say yes. If ε is small, the utilitarian might say "well, it's not a very big deal; the normative force behind overriding Bob and choosing (2) is low". But I can think of scenarios in which I would choose (1) even if (I believed that) ε was pretty significant, where this excuse doesn't work.
Also consider, for instance... the archetype of the starving artist. The man who is committed to producing his Great Work even at significant cost to himself. Suppose that he has made many sacrifices in order to hone his craft: he's given up financial success and a social life, he lives in the mountains and, you know, carves statue after statue in pursuit of perfection. Suppose that he can rationally conclude that, when (if) he does complete his masterpiece, the satisfaction will be relatively small in the face of all the sacrifices he's made. I mean, yeah, he'll be happy, he'll feel fulfilled and genuinely, deeply satisfied. But on a literal, summative level, that just won't add up to the lifetime of late nights, missed opportunities for social connection, etc., either in terms of net pleasure or net preference satisfaction or whatever. But suppose also that on the day-to-day level he doesn't feel miserable; he's not suffering. He's toiling in pursuit of a deeply held personal goal, and it feels... well, "good" isn't always the word. But he is plenty motivated to keep going; he's out here in the mountains of his own accord. The fact that he judges that at the end of his life the utility tally won't come up positive for him doesn't weigh on him much. "Why should I care about some number?" he says. "Maybe I'd be net happier if I went out on the town and found a wife and settled down, but I don't want to do that. I want to complete my Great Work."
Is this artist doing something immoral by living his life the way he has? Would it be moral for a third party to step in and prevent him from pursuing his endeavors?
In both of these thought experiments, my extremely strong intuition is that the answer is "no": making choices for other people "for their own good" in this way is not moral. But overriding them seems like a necessary consequence of any kind of utilitarianism, so I can't get behind utilitarianism.
The starving artist example gets to a more fundamental issue, too. I kept saying things like "he really wants to complete his Great Work, and it will make him very satisfied, but he will be more net satisfied if he gives up on that and lives a normal life". Well... what the hell does "net satisfied" mean? How do you measure the strength of a preference? He "really wants" to complete his Great Work, and materially that corresponds to a certain neural state, but how do you put a number on that neural state which is fungible with the numbers you put on all the other neural states of human life? You run into this problem in both hedonic and preference utilitarianism, because a "preference" is a neural phenomenon just as pleasure is. Is there even a well-defined abstraction here? Is there even a coherent thing to which "preference strength" can possibly refer? Maybe, but I don't know that there is. And the problem is that if you pick the wrong abstraction, if you pick the wrong way of getting a fungible quantity out of a fundamentally non-numerical arrangement of matter, then what you have doesn't correspond to "ethics" anymore, right? It lacks normative force. It's just some number.
This is why I say that utilitarian-ish ethics are fine on the large scale; they're fine for the policymaker or the economist, who for methodological reasons simply needs to pick an okay-enough abstraction and run with it. But on the scale of individual humans, individual minds, and what it "really means" to treat people right, I don't think utilitarianism can possibly hold up.
I might have made this exact post before somewhere, if so apologies for repeating myself.
> By construction, a utilitarian has to say yes. If ε is small, the utilitarian might say "well, it's not a very big deal; the normative force behind overriding Bob and choosing (2) is low". But I can think of scenarios in which I would choose (1) even if (I believed that) ε was pretty significant, where this excuse doesn't work.
I find this assumption really strange; I don't think it's true at all that utilitarians have to say yes to this. Deontologists owe a lot to Kant, but there are very few modern Kantians. Mind-body dualists owe a lot to Descartes, but there are very few modern Cartesians. And utilitarians owe a lot to Bentham's hedonic utilitarianism, but I think that modern utilitarianism has overwhelmingly switched over to preference utilitarianism. Maybe a hedonic utilitarian has to say yes to this, but to a preference utilitarian, Bob has clearly and of sound mind expressed his preference, and it would be unethical to intercede against that. Pain and pleasure are not intrinsically disvaluable or valuable on their own; their only value is the value they are assigned as goals by conscious agents. It simply does not make sense to try to weigh them against each other in the first place without an agent assigning them meaning. Pure preference utilitarians should categorically never intervene against a (sound-minded, informed) agent making choices that affect no one else.
I'm not sure that's what preference utilitarianism is, though! Like, a preference utilitarian is not obligated to respect any old preference someone holds; what they are committed to is maximizing net preference satisfaction. It's that lifetime summative component that makes utilitarianism distinct from other forms of consequentialism. So, what if Bob's decision to live does not maximize his lifetime preference satisfaction? What if he has the preference to live, knows that this will lead to many more of his preferences being violated and that on net choosing to live will mean more preference violation than preference satisfaction for him, and he chooses to live anyway? That's the scenario I'm trying to set up. And it seems to me that anyone who thinks there is an obligation to respect his preference in this case (as I do) cannot be a utilitarian.
if you are positing with certainty a Bob who makes the wrong choice, according to a systematic and self-consistent aggregation of his many competing preferences, to live out a life that is (by all of his preferences taken together) worse than ceasing to exist (which is my attempt to summarize the key elements of this thought experiment; let me know if you think that's a mischaracterization), then i do believe it is (probably) a moral imperative to give Bob the result that matches his actually coherent extrapolated aggregate volition with more urgency than the result that he willingly consents to. the fact that these results have diverged is (as all good thought experiments tend to be) an incredibly rare and degenerate situation that in practice humans are much, much more likely to assume is happening than actually is, and in the real world you should 99.999% of the time obey the heuristic of assuming someone knows what matches their preferences better than you can guess, especially given the ways that violating someone's consent causes pain and damage to people in very deep and serious ways, even if it would *in theory* be a better object-level result if they willingly chose that option instead.
like, my thinking here is that Bob is piloting his decision-making in a way that fails to accurately account for the long-run results of subtle preferences, and these preferences within Bob maybe deserve a fair amount of moral weight, and by deciding to live out a life that (by your description of the thought experiment) he will on net regret having lived out, *something* seems like it must have gone wrong here.