Also, I think there’s an adequate Kantian response to your example. Am I missing something?
Part of the relevant context for your action includes, by stipulation, your knowing that the needed 50 votes have already been cast. This information changes your context, and thus you reason: “what action would rational agents in this situation (my epistemic state included) perform to best achieve my ends?” — in this case, since the 50 votes have been cast, you don’t cast another.
So, I act on the basis of maxims, but changes in my epistemic state can still appropriately inform my decision-making.
Sounds reasonable! Though if you can build in all the details of your specific individual situation, and are directed to do what’s best in light of this, do you think this ends up being recognizably distinct from act consequentialism?
(Not that convergence is necessarily a problem. It can be a happy result that different theorists are “climbing the same mountain from different sides”, to borrow Parfit’s metaphor. But it would at least suggest that the Kantian spin is optional, and the basic view could be just as well characterized in act consequentialist terms.)
The short answer is: I think the norm delivers meaningfully different verdicts for certain ways of cashing out ‘act consequentialism’, but I imagine that you (and many other consequentialists) are going to want to say that the ‘Practical Kantian’ norm is compatible with act consequentialism. I’ll first discuss the practical question of deontic norms and EA’s self-conception, and then respond to the more philosophical question.
1.
If I’m right about your view, my suggested Kantian spin would (for you) be one way among many to talk about deontic norms, which could be phrased in more explicitly act-consequentialist language. That said, I still think there’s an argument for EA as a whole making deontic norms more central to its self-conception, as opposed to a conception where some underlying theory of the good is more central. EA is trying to intervene on people’s actions, after all, and your underlying theory of the good (at least in principle) underdetermines your norms for action. So, to me, it seems better to just directly highlight the deontic norms we think are valuable. EA is not a movement of moral theorists qua moral theorists; we’re a movement of people trying to do stuff that makes the world better. Even as a consequentialist, I guess you’re only going to want involvement with a movement that holds broadly similar views to yours about the action-relevant implications of consequentialism.
I also think there should be clear public work outlining how the various deontic norms we endorse in EA follow from consequentialist theories. Otherwise, I can see internal bad actors (or even just outsiders) thinking that statements about the importance of deontological norms are just about ‘brand management’, or whatever. I think it’s important to have a consistent story about the ways in which our deontic norms relate to our more foundational principles, both so that outsiders don’t feel like they’re being misled about what EA is about, and so that we have really explicit grounds on which to condemn certain behaviors as legitimately and unambiguously violating norms that we care about.
(Also, independently, I’ve met many people in EA who seem to flit between ‘expected utility theory (EUT) is the right procedure for practical decision-making’ and ‘EUT is an underratedly useful tool’ — even aside from discussions of side-constraints, I don’t think we have a clear conception of what our deontic norms are, and I think clarifying this would be independently beneficial. For instance, I think it would be good to have a clearer account of the procedures that really drive our prioritization decisions.)
2.
On a more philosophical level, I believe that various puzzle cases in decision theory help motivate the case for treating maxims, rather than acts, as the appropriate evaluative focal point with respect to rational decision-making. Here are some versions of act consequentialism that I think will diverge from the Practical Kantian norm (I sketch both cases in the toy calculations after the list):
Kant+CDT tells you to one-box in the standard Newcomb problem, whereas Consequentialism+CDT doesn’t.
Consequentialism+EDT is vulnerable to XOR blackmail, whereas Kant+CDT isn’t.
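To make the first divergence concrete, here’s a minimal sketch of the standard Newcomb setup (the payoffs and predictor accuracy are the usual stipulations, chosen for illustration, not anything from our exchange). Act-level CDT holds the opaque box’s contents fixed at decision time, so two-boxing dominates; evaluating the maxim instead lets the predictor’s reliability count in favor of one-boxing.

```python
# Toy Newcomb's problem: act-level CDT vs. maxim-level evaluation.
# Payoffs and predictor accuracy are assumed for illustration only.

BIG = 1_000_000    # opaque box: filled iff the predictor expects you to one-box
SMALL = 1_000      # transparent box: always contains this amount
ACCURACY = 0.99    # predictor's reliability

def act_level_cdt(box_is_full: bool) -> dict:
    """Act-level CDT: the prediction (and so the opaque box's contents) is
    causally fixed at decision time, so two-boxing dominates either way."""
    opaque = BIG if box_is_full else 0
    return {"one-box": opaque, "two-box": opaque + SMALL}

def maxim_value(policy: str) -> float:
    """Maxim-level evaluation: the predictor tracks which policy you act on,
    so adopting the one-boxing maxim (almost surely) fills the opaque box."""
    if policy == "one-box":
        return ACCURACY * BIG
    return (1 - ACCURACY) * BIG + SMALL

print(act_level_cdt(True))     # {'one-box': 1000000, 'two-box': 1001000} -> two-box
print(act_level_cdt(False))    # {'one-box': 0, 'two-box': 1000}          -> two-box
print(maxim_value("one-box"))  # 990000.0
print(maxim_value("two-box"))  # 11000.0  -> the one-boxing maxim wins
```

Consequentialism+CDT reads off the first table and two-boxes; Kant+CDT evaluates the maxim and one-boxes.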
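And here’s an analogous sketch for XOR blackmail, again with assumed numbers: the blackmailer sends the letter iff exactly one of ‘your house has termites’ and ‘you pay on receiving the letter’ holds. Conditional on receiving the letter, paying is perfect evidence against termites, so act-level EDT pays; but the policy of refusing has the lower expected loss, which is the maxim-level verdict.

```python
# Toy XOR blackmail: expected losses under assumed (illustrative) numbers.
# The letter is sent iff exactly one of (termites, you pay on receiving it) holds.

P_TERMITES = 0.01
TERMITE_COST = 1_000_000
PAYMENT = 1_000

def expected_loss(policy_pays: bool) -> float:
    """Expected loss of committing to a policy, evaluated before any letter arrives."""
    if policy_pays:
        # Termites: letter withheld (both disjuncts would hold), you bear the repair cost.
        # No termites: letter sent, you pay.
        return P_TERMITES * TERMITE_COST + (1 - P_TERMITES) * PAYMENT
    # Termites: letter sent, you ignore it, you bear the repair cost.
    # No termites: no letter, no loss.
    return P_TERMITES * TERMITE_COST

print(expected_loss(True))   # 10990.0 -- the paying policy
print(expected_loss(False))  # 10000.0 -- refusing is cheaper

# Act-level EDT, conditional on having received the letter:
# paying entails "no termites" and refusing entails "termites",
# so paying *looks* better as news, despite having no causal effect on the termites.
print(-PAYMENT)        # -1000: news value of paying
print(-TERMITE_COST)   # -1000000: news value of refusing
```

On this toy picture the divergence is exactly the one gestured at above: the act-level evaluation is swayed by an evidential correlation that the maxim-level evaluation screens off.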
Perhaps there is a satisfying decision theory which, combined with act-consequentialism, provides you with (what I believe to be) the right answers to decision-theoretic puzzle cases, though I’m currently not convinced. I think I might also disagree with you about the implications of collective action problems for consequentialism (though I agree that what you describe as “The Rounding to Zero Fallacy” and “The First-Increment Fallacy” are legitimate errors), but I’d want to think more about those arguments before saying anything more.
Yes, agreed that what matters for EA’s purposes is agreement on its most central practical norms, which should include norms of integrity, etc., and it’s fine to have different underlying theories of what ultimately justifies these. (+ also fine, of course, to have empirical/applied disagreements about what we should end up prioritizing, etc., as a result.)
I’ll look forward to hearing more of your thoughts on consequentialism & collective action problems at some future point!
(No problem with self-linking, I appreciate it!)