There are two subquestions that didn’t feel important/commonly discussed enough to be worth including in the (already long!) post itself, but that felt important/commonly discussed enough to not simply delete. So I’ll add them here.
The first of these subquestions fits under “How will ‘leverage over the future’ change over time?” The second fits under “How effectively can we ‘punt to the future’?”
How has leverage changed over history?
This is relevant to MacAskill’s “inductive argument against HoH”.
Would punting be less likely to be effective in worlds where it’d be most useful?
Plausibly, resources that can be dedicated towards longtermist causes are especially valuable if a global catastrophe is likely to occur. But also plausibly, the likelier such a catastrophe is, the likelier it is that punting actions will turn out to fail. Punting could fail due to, for example, resources being wiped out, the rule of law being disrupted, or relevant social movements unravelling.
Likewise, plausibly, resources that can be dedicated towards longtermist causes are especially valuable if EA, longtermism, and/or related values are likely to become less widespread or disappear entirely. But also plausibly, the likelier that is to happen, the less likely it is that the people we’d be punting to would act in ways we’d endorse (reducing the effectiveness of our punting).
It seems possible that examples like these point towards a more general correlation between how valuable successful punting would be and how likely punting is to fail. In other words, punting may be least likely to work in precisely the worlds where it’d be most valuable. This would reduce the expected value of punting. (But this is all somewhat speculative.)
I believe Kit and Shulman discuss similar ideas, though I may be misinterpreting them.