Unsurprisingly I disagree with many of the estimates, but I very much like this approach. For any analysis of any action, one can divide the premises arbitrarily many times. You stop when you’re comfortable that the granularity of the priors you’re forming is high enough to outweigh the opportunity cost of further research, which is how any of us manages to take any action at all.
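As a minimal sketch of what that decomposition looks like in practice (every parameter name and value below is invented for illustration, not drawn from any real analysis):

```python
# A hypothetical cost-effectiveness estimate decomposed into sub-premises,
# each modelled as a distribution rather than a point estimate.
import random

N = 100_000  # Monte Carlo samples

def sample_cost_per_life_saved():
    # Each premise carries its own (made-up) uncertainty.
    cost_per_net = random.uniform(4.0, 6.0)                   # USD
    nets_per_death_averted = random.lognormvariate(6.8, 0.4)  # median ~900
    delivery_overhead = random.uniform(1.1, 1.5)              # multiplier
    return cost_per_net * nets_per_death_averted * delivery_overhead

samples = sorted(sample_cost_per_life_saved() for _ in range(N))
median = samples[N // 2]
p5, p95 = samples[int(0.05 * N)], samples[int(0.95 * N)]
print(f"median ~${median:,.0f} per life saved (90% CI ${p5:,.0f}-${p95:,.0f})")

# The stopping rule described above: if the resulting interval is too wide
# to act on, split a premise into finer sub-premises and research those;
# once it is tight enough, the opportunity cost of further research
# dominates and you act.
```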
In the case of ‘cluelessness’, it honestly seems better framed as ‘laziness’ to me. There’s no principled reason why we can’t throw a bunch of resources at refining and parameterising cost-effectiveness analyses like these. GiveWell, as far as I can tell, don’t do it because they like to deal in relatively granular priors, and longtermist organisations don’t do it because, post-‘Beware Surprising and Suspicious Convergence’, no one takes seriously the idea that global poverty research could be a good use of longtermist resources. I think that’s a shame, both because it doesn’t seem either surprising or suspicious to me that high-granularity interventions could be more effective long-term than low-granularity ones (e.g. ‘more AI safety research’), since IMO the planning fallacy gets much worse over longer periods, and because this...
Plausibly what we really need is more emphasis on geopolitical stability, well-being enhancing values, and resilient, well-being enhancing governance institutions. If that were the case, I’d expect the case for altruistically donating bednets to help the less well-off to be fairly straightforward.
… seems to me like it should be a much larger part of the conversation. The only case I’ve seen for disregarding it amounts to hard cluelessness: we ‘know’ extinction reduces value by a vast amount (assuming we think the future is +EV), whereas trajectory change is difficult to map out. But as above, that seems like lazy reasoning that we could radically improve if we put some resources into it.
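To make that concrete, here is a toy calculation, with every number an arbitrary placeholder, of how trajectory change competes with extinction risk reduction once you parameterise both on the same expected-value scale:

```python
# Toy EV comparison; all numbers are invented placeholders.
future_value = 1e15   # value of the future if it goes well (arbitrary units)
p_extinction = 0.10   # assumed baseline extinction probability

# Intervention A: shave a tiny sliver off extinction risk.
delta_p = 1e-6
ev_extinction_work = delta_p * future_value

# Intervention B: a small multiplicative improvement to the realised
# trajectory (e.g. slightly better institutions or values), which only
# pays off in the worlds where we survive.
trajectory_uplift = 1e-5
ev_trajectory_work = (1 - p_extinction) * trajectory_uplift * future_value

print(f"extinction-risk work: {ev_extinction_work:.2e}")  # 1.00e+09
print(f"trajectory work:      {ev_trajectory_work:.2e}")  # 9.00e+09
```

Whether those placeholder numbers bear any resemblance to reality is exactly the question that the research I’m describing would try to answer.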