I apologize in advance if I’m a bit snarky.

The ideal utilitarian agent will simply always behave in the manner that optimizes expected future utility, factoring in the effect that breaking one’s word or other actions will have on the perceptions (and thus future actions) of other people.
This view is not broadly accepted amongst the EA community. At the very least, this view is self-defeating in the following sense: such an “ideal utilitarian” should not try to convince other people to be an ideal utilitarian, and should attempt to become a non-ideal utilitarian ASAP (see e.g. Parfit’s hitchhiker for the standard counterexample, though obviously there are more realistic cases).
However, the post gives us no reason to believe its particular interpretation of integrity, “being straightforward,” is the best such heuristic. It merely asserts the author’s belief that this somehow works out to be the best.
I argued for my conclusion. You may not buy the arguments, and indeed they aren’t totally tight, but calling it “mere assertion” seems silly.
the very reason for considering integrity is that “I find the ideal of integrity very viscerally compelling, significantly moreso than other abstract beliefs or principles that I often act on.”
This is neither true, nor what I said.
WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST
This is what it looks like when something is asserted without argument.
I do agree roughly with this sentiment, but only if it is interpreted sufficiently broadly that it is consistent with my post.
Does that mean that instead of saying “I’ll be there for you whatever happens” we should say “I’ll be there for you as long as the balance of probability doesn’t suggest that supporting you will cost more than 5 QALYs” (quality adjusted life years)?
I tried to spell out pretty explicitly what I recommend in the post, right at the beginning (“when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option”), and it clearly doesn’t recommend anything like this.
You seem to use “being straightforward” in a different way than I do. Saying “I’ll be there for you whatever happens” is straightforward if you actually mean the thing that people will understand you as meaning.
Re your first point: yup, they won’t try to recruit others to that belief, but so what? That’s already a bullet any utilitarian has to bite, thanks to examples like the aliens who will torture the world if anyone believes utilitarianism is true or tries to act as if it is. There is absolutely nothing self-defeating here.
Indeed, if we define utilitarianism as simply the belief that one’s preference relation on possible worlds is dictated by the total utility in them, then it follows by definition that the best acts an agent can take are just the ones which maximize utility. So maybe the better way to phrase this is: why care what the agent who pledges to utilitarianism in some way and wants to recruit others might need to do or how they might need to act? That’s a distraction from the simple question of what in fact maximizes utility. If that means convincing everyone not to be utilitarians, then so be it.
--
And yes, re the rest of your points: I guess I just don’t see why it matters what would be good to do if other agents respond in some way you argue would be reasonable. Indeed, what makes consequentialism consequentialism is that you aren’t acting based on what would happen if you imagine interacting with idealized agents, as a Kantian-esque theory might consider, but on what actually happens when you actually act.
I agree the caps were aggressive and I apologize for that. And I agree I’m not trying to produce evidence showing that, in fact, how people respond to supposed signals of integrity tends to match what they see as evidence you follow the standard norms. That’s just something people need to consult their own experience on, and ask themselves whether, in their experience, that tends to be true. Ultimately I think it’s just not true that a priori analysis of what should make people see you as trustworthy, or have any other social reaction, is a good guide to what they will actually do.
But I guess that is just going to return to point 1 and our different conceptions of what utilitarianism requires.