On your specific points:
Given that you said “robustly” in your first point, it might be that you’re adopting something like risk-neutrality or another alternative to expected value theory. If so, I’d say that:
That in itself is a questionable assumption, and people could do more work on which decision theory we should use.
I personally lean more towards just expected value theory (but with this incorporating skeptical priors, adjusting for the optimiser’s curse, etc.), at least in situations that don’t involve “fanaticism”. But I acknowledge uncertainty on that front too.
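To make that concrete, here's a minimal sketch of one way to combine expected value theory with skeptical priors and an adjustment for the optimiser's curse: shrink noisy EV estimates toward a prior before comparing them. This is just an illustration under a simple normal–normal model; the function name, the candidate interventions, and all numbers are invented.

```python
# A minimal sketch (not anyone's official method) of adjusting naive
# expected-value estimates for the optimiser's curse by shrinking them
# toward a skeptical prior. All numbers are made up for illustration.

def shrunk_estimate(naive_ev, noise_sd, prior_mean=0.0, prior_sd=1.0):
    """Posterior mean of the true value, assuming a normal prior on the true
    value and normally distributed estimation error around it."""
    prior_precision = 1.0 / prior_sd**2
    noise_precision = 1.0 / noise_sd**2
    return (prior_mean * prior_precision + naive_ev * noise_precision) / (
        prior_precision + noise_precision
    )

# Three hypothetical interventions: (naive EV estimate, how noisy that estimate is).
candidates = {
    "short-termist A": (3.0, 1.0),   # modest EV, well-evidenced
    "long-termist B": (10.0, 8.0),   # huge EV, very speculative
    "long-termist C": (6.0, 3.0),
}

for name, (ev, sd) in candidates.items():
    print(f"{name}: naive EV = {ev:5.1f}, shrunk EV = {shrunk_estimate(ev, sd):5.2f}")
```

The noisier (more speculative) the estimate, the harder it gets pulled toward the skeptical prior, which is one way of seeing why the highest naive estimate isn't automatically the best bet.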
If you just meant “It might be too difficult to ever identify ahead of time a long-termist intervention as better in expectation than short-termist interventions”, then yeah, I think this might be true (at least if fanaticism in the philosophical sense is bad, which seems to be an open question). But I think we actually have extremely little evidence for this claim.
We know from Tetlock’s work that some people can do better than chance at forecasts over timescales of months to years.
We seem to have basically no evidence about how well people who are actually trying (and especially ones aware of Tetlock’s work) do on forecasts over much longer timescales (so we don’t have specific evidence that they’ll do well or that they’ll do badly).
We have a scrap of evidence suggesting that forecasting accuracy declines as the range increases, but relatively slowly (though this was comparing a few months to about a year).
So currently it seems to me that our best guess should be that forecasting accuracy continues to decline as the timescale increases, but doesn’t hit zero, though maybe it asymptotes towards zero eventually.
That decline might be sharp enough to offset the increased “scale” of the future, or might not, depending both on various empirical assumptions and on whether we accept or reject “fanaticism” (see Tarsney’s epistemic challenge paper).
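To illustrate that tradeoff, here's a toy calculation, loosely in the spirit of Tarsney's epistemic challenge paper but not his actual model: the chance that our intervention makes the intended difference decays with the time horizon but asymptotes to a small floor, and we compare the resulting expected value against a well-evidenced short-termist benefit normalised to 1. Every parameter here is made up purely for illustration.

```python
# A toy model (not Tarsney's actual model) of how declining forecasting
# accuracy trades off against the scale of the future. All parameters are
# invented for illustration.

import math

def longtermist_ev(value_at_stake, decay_rate, floor, horizon_years):
    """Expected value if our chance of making the intended difference
    decays exponentially with the horizon but asymptotes to `floor`."""
    p_difference = floor + (1 - floor) * math.exp(-decay_rate * horizon_years)
    return p_difference * value_at_stake

short_term_ev = 1.0  # normalise the well-evidenced near-term option to 1

for horizon in [10, 100, 1000, 10_000]:
    ev = longtermist_ev(value_at_stake=1e6, decay_rate=0.05, floor=1e-5,
                        horizon_years=horizon)
    print(f"horizon {horizon:>6} yr: long-termist EV = {ev:10.2f} "
          f"(vs short-termist {short_term_ev})")
```

In this toy setup, whether the long-termist option wins at long horizons comes down to whether the floor times the value at stake exceeds the short-termist benchmark, which is exactly the kind of empirical-plus-philosophical judgement call described above.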
I agree that basically all interventions have downside risks, and that one notable category of downside risks is the risk that resources/capital/knowledge/whatever end up being used for bad things by other people. (This could be because those people have bad goals, or because they have good goals but bad plans.) I think this will definitely mean we should deprioritise some otherwise plausible longtermist interventions. I also agree that it might undermine strong longtermism as a whole, but that seems very unlikely to me.
One reason is that similar points also apply to short-termist interventions.
Another is that it seems very likely that, if we try, we can make it more likely that the resources end up in the hands of people who will (in expectation) use them well, rather than in the hands of people who will (in expectation) use them poorly.
We can also model these downside risks explicitly (see the sketch after this list).
We haven’t done this in detail yet, as far as I’m aware.
But we have come up with a bunch of useful concepts and frameworks for that (e.g., information hazards, the unilateralist’s curse, this post of mine [hopefully that’s useful!]).
And there’s been some basic analysis and estimation for some relevant things, e.g. in relation to “punting to the future”.
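As a gesture at what such modelling could look like (this is my own toy sketch, not an established framework, and all numbers are invented): treat the resources as either ending up in good hands or being misused, and ask how much deliberate "steering" effort changes the expected value.

```python
# A minimal sketch of folding downside risk into an expected-value estimate
# for a longtermist intervention: the resources either end up used well or
# end up misused, and "steering" effort shifts that probability.
# All numbers are invented for illustration.

def ev_with_downside(p_good_hands, benefit_if_good, harm_if_bad):
    """Expected value when resources land in good hands with probability
    p_good_hands and are otherwise misused."""
    return p_good_hands * benefit_if_good - (1 - p_good_hands) * harm_if_bad

baseline = ev_with_downside(p_good_hands=0.6, benefit_if_good=100, harm_if_bad=80)
steered  = ev_with_downside(p_good_hands=0.8, benefit_if_good=100, harm_if_bad=80)

print(f"EV without steering effort: {baseline:6.1f}")
print(f"EV with steering effort:    {steered:6.1f}")
```

Even this crude version shows why the "steering" point above matters: raising the probability that resources land in good hands can turn a marginal intervention into a clearly positive one.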
(All that said, you did just say “Two possible objections”, and I do think pointing out possible objections is a useful part of the cause prioritisation project.)