[Only skimmed Aaron’s notes, didn’t read the paper, so might be quite off.]
At first glance, this seems like a special case of (e.g.) Parfit’s observation in the first part of Reasons and Persons that consequentialist views can imply it’d be better if you didn’t follow them, didn’t believe in them, etc. (similar to how prudential theories can imply that in some situations it’d be better for you if you were ‘rationally irrational’). Probably the basic idea was already mentioned by Sidgwick or earlier utilitarians.
I.e. the key insight is that, as people often put it, utilitarianism as a ‘criterion of rightness’ does not imply that we ought always to use utilitarianism (or something that looks like a ‘direct’ application of it) as a ‘decision procedure’. Instead, consequentialist criteria of rightness turn the question of which decision procedure to use into a purely empirical one. It’s trivial to construct contrived thought experiments in which the ‘correct’ decision procedure is arbitrarily bizarre.
I think this kind of cuts both ways:
On one hand, to say something interesting, papers like the above need to engage in empirical investigation: they need to say something about when, and how often, situations in which it’d be best for the world to use some ‘non-consequentialist’ decision procedure actually occur. E.g., does this paper give convincing examples of ‘utility cascades’, or arguments for why we should expect them to be common?
On the other hand, it means that (by consequentialist lights) the appropriateness of EA’s principles, methods, etc. is a purely empirical question as well. They depend as much on a host of contingent facts, such as the track record of science or how others react to EA, as on one’s normative views.