I agree, and in fact “nothing is obviously good” describes my (tentative) view reasonably well, at least if (i) the bar for ‘obviously’ is sufficiently high and (ii) ‘good’ is to be understood as roughly ‘maximizing long-term aggregate well-being.’
Depending on the question one is trying to answer, this might not be a useful perspective. However, I think that when our goal is to actually select and carry out an altruistic action, this perspective is the right one: I’d want to simply compare the totality of the (wellbeing-relevant) consequences with the relevant counterfactual (e.g., no action, or another action), and it would seem arbitrary to me to exclude certain effects because they are due to a general or indirect mechanism.
(E.g., suppose for the sake of argument that I’m going to die in a nuclear war that would not have happened in a world without seed banks. I’d think that my death makes the world worse, and I’d want someone deciding about seed banks today to take this disvalue into account; this does not depend on whether the mechanism is that nuclear bombs can be assembled from seeds, or that seed banks have crowded out nuclear non-proliferation efforts, or whatever.)
I think it’s better to first identify ideas that are better than doing nothing (which in itself can be difficult!) and then prioritize those.
I think there are talented people who could be convinced to work on the long-term future if they are given a task that is uncontroversially better than doing nothing. I agree it’s better to prioritize actions than to just work on the first one you think of, but starting with a bar of ‘optimal’ seems too high.
I agree. However, your reply makes me think that I didn’t explain my view well: I do, in fact, believe that it is not obvious that, say, setting up seed banks is “better than doing nothing”—and more generally, that nothing is obviously better than doing nothing.
I suspect that my appeal to “diverting attention and funding” as a reason for this view might have been confusing. What I had in mind here was not an argument about opportunity cost: I did not mean to say that an actor who set up a seed bank could perhaps have done better by doing something else instead (say, donating to ALLFED), even though that may well be true.
Instead, I was thinking of effects on future decisions (potentially by other actors), as illustrated by the following example:
Compare the world in which, at some time t0, some actor A decides to set up a seed bank (say, world w1) with the world w2 in which A decides to do nothing at t0.
It could be the case that, in w2, at some later time t1, a different actor B makes a decision that:
(i) causes a reduction in the risk of extinction from nuclear war that is larger than the effect of setting up a seed bank at t0 (this could even be, say, the decision to set up two seed banks); and
(ii) happens only because A did not set up a seed bank at t0, and so in particular does not occur in world w1. (Perhaps a journalist in w2 wrote a piece decrying the lack of seed banks, which inspired B—who until then was planning to become an astronaut—to devote her career to setting up seed banks.)
Of course, this particular example is highly unlikely. And worlds w1 and w2 would differ in many other respects. But I believe considering the example is sufficient to see that extinction risk from nuclear war might be lower in world w2 than in w1, and thus that setting up a seed bank is not obviously better than doing nothing.
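To make the comparison concrete, here is a minimal numerical sketch of the w1/w2 argument. All quantities (baseline risk, risk reductions, the probability that B acts) are hypothetical placeholders chosen purely for illustration and are not part of the original argument; the point is only that the sign of the counterfactual difference depends on assumptions like these.

```python
# Toy comparison of two worlds (all numbers are made-up illustrations):
#   w1: actor A sets up a seed bank at t0; actor B's later decision never happens.
#   w2: A does nothing at t0; with some probability, B later makes a larger-impact decision.

BASELINE_RISK = 0.010        # hypothetical extinction risk from nuclear war with no intervention
SEED_BANK_REDUCTION = 0.001  # hypothetical risk reduction from A's single seed bank
B_REDUCTION = 0.003          # hypothetical risk reduction from B's later decision (e.g., two seed banks)
P_B_ACTS_IN_W2 = 0.5         # hypothetical probability that B's decision occurs, and only in w2

# Risk in w1: A's seed bank exists, B never acts.
risk_w1 = BASELINE_RISK - SEED_BANK_REDUCTION

# Risk in w2: no seed bank from A, but B acts with some probability.
risk_w2 = BASELINE_RISK - P_B_ACTS_IN_W2 * B_REDUCTION

print(f"risk in w1 (A acts):         {risk_w1:.4f}")
print(f"risk in w2 (A does nothing): {risk_w2:.4f}")
print("'Doing nothing' looks better" if risk_w2 < risk_w1 else "'Acting' looks better")
```

With these particular placeholder values, w2 ends up with the lower risk; nudging any of them (say, lowering P_B_ACTS_IN_W2) flips the conclusion, which is exactly why neither option is obviously better.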