I don’t see how you could make a general argument for cluelessness with respect to all decisions made by the community.
I agree. More specifically, I think the argument for cluelessness is defeatable, and tentatively think that we know of defeaters in some cases. Concretely, I think that we are justified in believing in the positive expected value of (i) avoiding human extinction and (ii) acquiring resources for longtermist goals. (Though I do think that for neither of these is it obvious that the expected value is positive, and that considering either to be obvious would be a serious epistemic error.)
[...] I don’t see how this could ever generalise to an argument that all of our decisions are clueless, since the level of uncertainty will always be almost entirely dependent on the facts about the particular case. Why would uncertainty about the effects of AMF have any bearing on uncertainty about the effects of MIRI or the Clean Air Task Force?
I think you overstate your case here. I agree in principle that “the level of uncertainty will always be almost entirely dependent on the facts about the particular case,” and so that whether we are clueless about any particular decision is a contingent question. However, I think that inspecting the arguments for cluelessness about, say, the effects of donations to AMF does suggest that cluelessness will be pervasive, for reasons we are in principle able to isolate. To name just one example, many actions will have a small but in expectation non-zero, highly uncertain effect on the pace of technological growth; this in turn will have an in expectation non-zero, highly uncertain net effect on the risk of human extinction, which in turn … I believe this line of reasoning alone could be fleshed out into a decisive argument for cluelessness about a wide range of decisions.
On the latter, yes, that is a good point: there are general features at play here, so I retract my previous comment. However, it still seems true that your rational credal state will always depend to a very significant extent on the particular facts.
I find the use of the longtermist point of view a bit weird as applied to the AMF example. AMF is not usually justified from a longtermist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.
AMF is not usually justified from a longtermist point of view, so it is not really surprising that its benefits seem less obvious when you consider it from that point of view.
I agree in principle. However, there are a few other reasons why I believe making this point is worthwhile:
1. GiveWell has in the past advanced an optimistic view about the long-term effects of economic development.
2. Anecdotally, I know many EAs who both endorse longtermism and donate to AMF. In fact, my guess is that a majority of longtermist EAs donate to organizations that have been selected for their short-term benefits. As I say in another comment, I’m not sure this is a mistake, because ‘symbolic’ considerations may outweigh attempts to directly maximize the impact of one’s donations. However, it at least suggests that a conversation about the longtermist benefits of organizations like AMF is relevant for many people.
3. More broadly, at the level of organizations and norms, various actors within EA seem to endorse the conjunction of longtermism and recommending donations to AMF over donations to the Make-A-Wish Foundation. It’s unclear whether this is some kind of political compromise, a marketing tool, or the result of a sincere belief that the two are compatible.
4. The point might serve as guidance for developing the ethical and epistemological foundations of EA. To explain: we might simply be unwilling to give up our intuitive commitments, and so insist that a satisfying ethical and epistemological basis would make longtermism and “AMF over Make-A-Wish” compatible. This would then be one criterion for rejecting proposed ethical or epistemological theories.
Concretely, I think that we are justified in believing in the positive expected value of (i) avoiding human extinction and (ii) acquiring resources for longtermist goals.
Hi Max,
I would be curious to know whether you still basically believe this, and whether you have since become convinced of the robustness of other actions.
(personal views only) In brief, yes, I still basically believe both of these things; and no, I don’t think I know of any other type of action that I’d consider ‘robustly positive’, at least from a strictly consequentialist perspective.
To be clear, my belief regarding (i) and (ii) is closer to “there exist actions of these types that are robustly positive”, as opposed to “any action that purports to be of one of these types is robustly positive”. E.g., it’s certainly possible to try to reduce the risk of human extinction but for that attempt to be ineffective or even counterproductive (i.e., to on net increase the risk of extinction, or to otherwise cause significant harms such that I’d consider the action impermissible); it’s possible for resources that were acquired for impartial welfarist purposes to eventually be misused; etc.
I made some nuanced updates about “acquiring resources for longtermist goals”, but these are mostly things like becoming more or less excited about particular examples/substrategies, or developing somewhat richer views on some pitfalls of that strategy (though I don’t think I became aware of qualitatively ‘new’ pitfalls), as opposed to sweeping updates about that whole class of actions and whether they can be robustly positive.
Thanks! I think I have converged towards a similar view.