Cluelessness seems to imply that altruists should be indifferent between all possible actions available to them. Do you embrace this implication of the view?
As I say in another comment, I think that a few effects—such as reducing the risk of human extinction—can be rescued from cluelessness. Therefore, I’m not committed to being indifferent between literally all actions.
I do, however, think that consequentialism gives us reason to perform only a very small number of actions. In particular, I do not think there is a valid argument, based on consequentialism alone, for donating to AMF rather than to the Make-A-Wish Foundation.
This is actually one example of where I believe cluelessness has practical import. Here is a related thing I wrote a few months ago in another discussion:
“Another not super well-formed claim:
- Donating 10% of one’s income to GiveWell charities, prioritizing reducing chicken consumption over reducing beef consumption, and similar ‘individual’ actions by EAs that at first glance seem optimized for effectiveness are valuable almost entirely for their ‘symbolic’ and indirect benefits such as signalling and maintaining community norms.
- Therefore, they are analogous to things like: environmentalists refusing to fly or reducing the waste produced by their household; activists participating in a protest; party members attending weekly meetings of their party; religious people donating money for missionary purposes or building temples.
- Rash criticism of such actions in other communities that appeals to their direct short-term consequences is generally unjustified, and based on a misunderstanding of the role of such actions both within EA and in other communities. If we wanted to assess the ‘effectiveness’ of these other movements, the crucial question to ask (ignoring higher-level questions such as cause prioritization) about, say, an environmentalist insisting on always switching off the lights when they leave a room, would not be how much CO2 emissions are avoided; instead, the relevant questions would be things like: How does promoting a norm of switching off lights affect that community’s ability to attract followers and other resources? How does promoting a norm of switching off lights affect that community’s actions in high-stakes situations, in particular when there is strategic interdependence—for example, what does it imply about the psychology and ability to make credible commitments of a Green party leader negotiating a coalition government?
- It is not at all obvious that promoting norms that are ostensibly about maximizing the effectiveness of all individual ‘altruistic’ decisions is an optimal or even net positive choice for maximizing a community’s total impact. (Both because of and independently of cluelessness.) I think there are relatively good reasons to believe that several EA norms of that kind actually have been impact-increasing innovations, but this is a claim about a messy empirical question, not a tautology.”
Thanks, Max, this is interesting.
“Donating 10% of one’s income to GiveWell charities, prioritizing reducing chicken consumption over reducing beef consumption, and similar ‘individual’ actions by EAs that at first glance seem optimized for effectiveness are valuable almost entirely for their ‘symbolic’ and indirect benefits such as signalling and maintaining community norms.”
Suppose it is true that the value of those actions comes almost entirely from their symbolic benefits. A further question is then whether those symbolic benefits depend on the belief that this is not the case, i.e. the belief that the value of those actions largely comes from their direct, non-symbolic effects. (Analogously, a religion’s indirect benefits for well-being or community cohesion may depend on the false belief that its metaphysical claims are true.) It could be that making it widely known that the value of those actions is almost entirely symbolic would undermine those benefits, or even turn them into harms; for example, knowingly doing something with low direct benefits for symbolic reasons might be seen as hypocritical. Whether that is the case depends on the social context and doesn’t seem straightforward to determine.
I agree this is a non-obvious question. There is a good reason why consequentialists at least since Sidgwick have asked to what extent the correct moral theory might require keeping its own principles secret.
Yes, though it seems to me that EAs largely think one shouldn’t (cf. the fact that Integrity is one of “the guiding principles of effective altruism” as understood by a number of organisations). (Not that you would suggest otherwise.)
A tangentially related comment: what symbolic benefits or harms our actions have will depend on our norms, and these norms are at least to some extent malleable. Jason Brennan has argued that we should judge such symbolic norms by their consequences:
“If you’ve read Markets without Limits or ‘Markets without Symbolic Limits,’ you’ve seen one of the moves I end up making here. We imbue the right to vote with all sorts of symbolic value–we treat it as a metaphorical badge of equality and full membership. But we don’t have to do that. The rest of you could and should think of political power the way I do, that having the right to vote has no more inherent special status than a plumbing license. Further, I argue that we can judge semiotic/symbolic norms by their consequences. In this case, if it turns out that epistocracy produces more substantively just results than democracy, this would mean we’re obligated to change the semiotics we attach to the right to vote, not that we’re obligated to stick with democracy because the right to vote has special meaning. I push hard on the claim that it’s probably just a contingent social construction that we imbue the right to vote with symbolic value. At least, no one has successfully shown otherwise.”
So, we shouldn’t just take symbolic benefits into account when we prioritise what action to take, but we should also consider whether to change our symbolic norms, so that the symbolic benefits (which are a consequence of those norms) change. Brennan argues that if epistocracy produces greater direct benefits than democracy, then we should change our symbolic norms so that democracy doesn’t yield greater symbolic benefits than epistocracy. Similarly, one could argue that if some effective altruist intervention produces greater direct benefits than some other effective altruist intervention (say diet change), then we should change our symbolic norms so that the latter doesn’t yield greater symbolic benefits than the former.
[Edit: I realise now that the last paragraph in your above comment touches on these issues.]
Hi Max, I think the link is broken. Maybe it is this one?
I don’t remember, I’m afraid. I don’t recall having seen the article you link to, so I doubt it was that. Maybe it was this one.