Also note that whereas Holden rejected the Charity Doomsday Argument, clarifying that he was talking about the relative standing of charities including all flow-through effects (where a big future increases the impact of most interventions astronomically, though some more than others), Dickens embraces it:
> I don’t find it plausible that I should be indifferent between $1 to AI safety and $94,200,000,000,000,000 to GiveDirectly... This only considers GiveDirectly’s direct effects and not its flow-through effects, but I still find it implausible that GiveDirectly’s direct effects could matter so much less in expectation than [the flow-through effects of] AI safety work
The specific interventions are a red herring here; the argument amounts to saying the future won't be big and subject to any effect of our actions (like asteroid defense, or speeding up colonization by one day).
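To make the arithmetic behind that point concrete, here is a minimal sketch (all numbers are hypothetical placeholders, not Dickens's or GiveWell's actual estimates): if the future is astronomically large and our actions have any nonzero chance of affecting it, the expected flow-through value per dollar dwarfs any plausible direct effect, so rejecting the resulting ratio means denying one of those two premises.

```python
# Hedged sketch: why a "big future" makes flow-through effects dominate.
# All numbers below are hypothetical placeholders, not actual estimates.

big_future_value = 1e40               # expected value if the long-run future goes well (hypothetical)
p_shift_per_dollar = 1e-20            # chance that $1 of safety work changes the outcome (hypothetical)
direct_value_per_dollar = 1e-3        # direct value of $1 to GiveDirectly, same units (hypothetical)

flow_through_ev = p_shift_per_dollar * big_future_value   # expected flow-through value per $1
ratio = flow_through_ev / direct_value_per_dollar

print(f"Expected flow-through value per $1: {flow_through_ev:.2e}")
print(f"Implied ratio vs. direct effects:  {ratio:.2e}")

# With any astronomically large future and a nonzero chance of affecting it,
# the ratio comes out astronomical; avoiding that conclusion requires denying
# that the future is big, or that our actions have any effect on it.
```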
I’d suggest looking at Carl and Toby’s comments on this GiveWell post if you’re interested in formulating priors.
This post is also relevant.