effectively diverting attention and funding from more effective risk-reduction measures
Yeah, if you count "may distract from an even better intervention" as a reason why something is "not obviously good", then I think that basically nothing is obviously good. (Which might be true, just pointing out that this criticism seems pretty general.)
I agree, and in fact "nothing is obviously good" describes my (tentative) view reasonably well, at least if (i) the bar for "obviously" is sufficiently high and (ii) "good" is understood as roughly "maximizing long-term aggregate well-being."
Depending on the question one is trying to answer, this might not be a useful perspective. However, I think that when our goal is to actually select and carry out an altruistic action, this perspective is the right one: I'd want to simply compare the totality of the (wellbeing-relevant) consequences with the relevant counterfactual (e.g., no action, or another action), and it would seem arbitrary to me to exclude certain effects because they are due to a general or indirect mechanism.
(E.g., suppose for the sake of the argument that I'm going to die in a nuclear war that would not have happened in a world without seed banks. I'd think that my death makes the world worse, and I'd want someone deciding about seed banks today to take this disvalue into account; this does not depend on whether the mechanism is that nuclear bombs can be assembled from seeds, or that seed banks have crowded out nuclear non-proliferation efforts, or whatever.)
I think it's a better idea to first identify ideas that are better than doing nothing (which in itself can be difficult!) and then prioritize those.
I think there are talented people who could be convinced to work on the long-term future if they are given a task which is uncontroversially better than doing nothing. I agree it's better to prioritize among actions than to just work on the first one you think of, but starting with a bar of "optimal" seems too high.
I agree. However, your reply makes me think that I didn't explain my view well: I do, in fact, believe that it is not obvious that, say, setting up seed banks is "better than doing nothing", and more generally, that nothing is obviously better than doing nothing.
I suspect that my appeal to "diverting attention and funding" as a reason for this view might have been confusing. What I had in mind was not an argument about opportunity cost: it may well be true that an actor who set up a seed bank could have done better by doing something else instead (say, donating to ALLFED), but that was not my point.
Instead, I was thinking of effects on future decisions (potentially by other actors), as illustrated by the following example:
Compare the world in which, at some time t0, some actor A decides to set up a seed bank (say, world w1) with the world w2 in which A decides to do nothing at t0.
It could be the case that, in w2, at some later time t1, a different actor B makes a decision that:
1. Causes a reduction in the risk of extinction from nuclear war that is larger than the effect of setting up a seed bank at t0. (This could even be, say, the decision to set up two seed banks.)
2. Happened only because A did not set up a seed bank at t0, and so in particular does not occur in world w1. (Perhaps a journalist in w2 wrote a piece decrying the lack of seed banks, which inspired B, who until then was planning to become an astronaut, to devote her career to setting up seed banks.)
Of course, this particular example is highly unlikely, and worlds w1 and w2 would differ in many other respects. But I believe the example is sufficient to see that extinction risk from nuclear war might be lower in world w2 than in w1, and thus that setting up a seed bank is not obviously better than doing nothing.