@deaths-accountant I will, if I remember, think carefully about your thought experiment and respond to it soon (although I will probably change some details so that it is less similar to current events, because I don't want people to misunderstand the nature of the discussion and get mad at me), but, in the meantime, here is a counter-thought-experiment for you:
Suppose there is a guy Bob, currently hanging out in the heavenly realm or whatever, and he is presented by an angel with the following choice:
1. Bob will be born into the world, and live an ordinary-seeming life. Over the course of his life, the net utility (under whatever form of utilitarianism you endorse; hedonic, preferential, etc.) which he contributes to everyone else in the world besides himself will be 0. In other words, the people of the world (not including him) will be neither better nor worse off for his being born. However, he himself, under the same conception of utility, will receive -ε net utils. He will have N (for reasonably large N) utils worth of joys, triumphs, etc., and -(N+ε) utils worth of pains, failures, and so on. Thus, he will live a net-negative life.
2. Bob will not be born into the world; he will cease to exist.
Implicitly, I'm discounting as irrelevant all the thoughts and feelings that Bob experiences in the heavenly realm before he is born (or not), but if you don't feel comfortable with this you can just adjust the numbers so that the net utility of each choice comes out as intended above.
It is possible, I think, that in light of the above choice, Bob would select (2) and cease to exist. But I think it's also possible that Bob would say "no, I'll take (1), I want to have the joys and triumphs even if there turn out also to be a greater number of failures and losses". In particular, I am almost certain that I would choose (1), and not just for fear of death (the above scenario is an abstraction of choices that I have actually made, where no risk of death was involved).
The question is: would it be moral for the angel to override Bob here, "for his own good", and choose (2) for him?
By construction, a utilitarian has to say yes. If ε is small, the utilitarian might say "well, it's not a very big deal; the normative force behind overriding Bob and choosing (2) is low". But I can think of scenarios in which I would choose (1) even if (I believed that) ε was pretty significant, where this excuse doesn't work.
Also consider for instance... the archetype of the starving artist. The man who is committed to producing his Great Work even at significant cost to himself. Suppose that he has made many sacrifices in order to hone his craft: he's given up financial success and a social life, he lives in the mountains and, you know, carves statue after statue in pursuit of perfection. Suppose that he can rationally conclude that, when (if) he does complete his masterpiece, the satisfaction will be relatively small in the face of all the sacrifices he's made. I mean, yeah, he'll be happy, he'll feel fulfilled and genuinely, deeply satisfied. But on a literal, summative level, that just won't add up to the lifetime of late nights, missed opportunities for social connection, etc., either in terms of net pleasure or net preference satisfaction or whatever. But suppose also that on the day-to-day level he doesn't feel miserable, he's not suffering. He's toiling in pursuit of a deeply held personal goal, and it feels... well, "good" isn't always the word. But he is plenty motivated to keep going; he's out here in the mountains of his own accord. The fact that he judges that at the end of his life the utility tally won't come up positive for him doesn't weigh on him much. "Why should I care about some number?" he says. "Maybe I'd be net happier if I went out on the town and found a wife and settled down, but I don't want to do that. I want to complete my Great Work."
Is this artist doing something immoral by living his life the way he has? Would it be moral for a third party to step in and prevent him from pursuing his endeavors?
In both of these thought experiments, my extremely strong intuition is that the answer is "no": making choices for other people "for their own good" in this way is not moral. But answering "yes" seems like a necessary consequence of any kind of utilitarianism, so I can't get behind utilitarianism.
The starving artist example gets at a more fundamental issue, too. I kept saying things like "he really wants to complete his Great Work, and it will make him very satisfied, but he will be more net satisfied if he gives up on that and lives a normal life". Well... what the hell does "net satisfied" mean? How do you measure the strength of a preference? He "really wants" to complete his Great Work, and materially that corresponds to a certain neural state, but how do you put a number on that neural state which is fungible with the numbers you put on all the other neural states of human life? You run into this problem in both hedonic and preference utilitarianism, because "preference" is a neural phenomenon. Is there even a well-defined abstraction here, is there even a coherent thing to which "preference strength" can possibly refer? Maybe, but I don't know that there is. And the problem is that if you pick the wrong abstraction, if you pick the wrong way of getting a fungible quantity out of a fundamentally non-numerical arrangement of matter, then what you have doesn't correspond to "ethics" anymore, right, it lacks normative force. It's just some number.
This is why I say that utilitarian-ish ethics are fine on the large scale; they're fine for the policy maker or the economist, who for methodological reasons simply needs to pick an okay-enough abstraction and run with it. But on the scale of individual humans, individual minds, and what it "really means" to treat people right, I don't think utilitarianism can possibly hold up.
I might have made this exact post before somewhere, if so apologies for repeating myself.