On your specific points:
Given that you said 'robustly' in your first point, it might be that you're adopting something like risk aversion or another alternative to expected value theory. If so, I'd say that:
That in itself is a questionable assumption, and people could do more work on which decision theory we should use.
I personally lean more towards just expected value theory (though with this incorporating skeptical priors, adjustments for the optimiser's curse, etc.), at least in situations that don't involve 'fanaticism'. But I acknowledge uncertainty on that front too.
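To make that parenthetical about skeptical priors and the optimiser's curse concrete, here's a minimal sketch (my own illustration with made-up numbers, not a model from the literature) of the kind of Bayesian shrinkage I have in mind:

```python
# Minimal sketch: shrink a noisy cost-effectiveness estimate towards a skeptical
# prior before comparing options, which is one standard response to the
# optimiser's curse. All numbers are purely illustrative.

def posterior_mean(estimate, estimate_sd, prior_mean, prior_sd):
    """Normal-normal Bayesian update: a precision-weighted average of the
    naive estimate and the skeptical prior."""
    weight = (1 / estimate_sd**2) / (1 / estimate_sd**2 + 1 / prior_sd**2)
    return weight * estimate + (1 - weight) * prior_mean

# A speculative option with a huge but very uncertain naive estimate...
speculative = posterior_mean(estimate=1000, estimate_sd=900, prior_mean=10, prior_sd=30)
# ...versus a modest option with a well-measured estimate.
well_measured = posterior_mean(estimate=50, estimate_sd=10, prior_mean=10, prior_sd=30)

print(round(speculative, 1), round(well_measured, 1))  # roughly 11.1 and 46.0
# The speculative option gets shrunk much more, so the ranking can flip
# relative to naively comparing raw estimates.
```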
If you just meant 'It might be too difficult to ever identify ahead of time a longtermist intervention as better in expectation than short-termist interventions', then yeah, I think this might be true (at least if fanaticism in the philosophical sense is bad, which seems to be an open question). But I think we actually have extremely little evidence for this claim.
We know from Tetlock's work that some people can do better than chance at forecasts over horizons of months and years.
We seem to have basically no evidence about how well people who are actually trying (and especially ones aware of Tetlock's work) do on forecasts over much longer timescales (so we don't have specific evidence that they'll do well or that they'll do badly).
We have a scrap of evidence suggesting that forecasting accuracy declines as the range increases, but relatively slowly (though this was only comparing a few months to about a year).
So currently it seems to me that our best guess should be that forecasting accuracy continues to decline as the horizon lengthens, but doesn't hit zero, although maybe it asymptotes towards zero eventually.
That decline might be sharp enough to offset the increased 'scale' of the future, or might not, depending both on various empirical assumptions and on whether we accept or reject 'fanaticism' (see Tarsney's epistemic challenge paper).
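To make that last trade-off concrete, here's a toy illustration (my own construction with arbitrary numbers, not Tarsney's actual model) of how a given rate of decline in forecast accuracy interacts with an assumed 'scale' of the future:

```python
import math

# Toy model: expected long-term impact = (scale of the future) x (how predictably
# our actions influence it), with that influence decaying exponentially over the
# time horizon. All parameter values are arbitrary placeholders.

def longtermist_ev(scale, decay_rate, horizon_years):
    predictability = math.exp(-decay_rate * horizon_years)
    return scale * predictability

SHORT_TERMIST_EV = 1.0   # benchmark, arbitrary units
SCALE = 1e6              # assumed relative 'scale' of the long-term future
HORIZON = 1000           # years

for decay_rate in (0.005, 0.02, 0.05):
    ev = longtermist_ev(SCALE, decay_rate, HORIZON)
    print(f"decay rate {decay_rate}: long-term EV {ev:.2g} vs short-term {SHORT_TERMIST_EV}")

# With slow decay the scale term dominates; with fast decay it's swamped,
# which is why the empirical question about long-range forecasting matters so much.
```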
I agree that basically all interventions have downside risks, and that one notable category of downside risks is the risk that resources/capital/knowledge/whatever end up being used for bad things by other people. (This could be because those people have bad goals, or because they have good goals but bad plans.) I think this will definitely mean we should deprioritise some otherwise plausible longtermist interventions. I also agree that it might undermine strong longtermism as a whole, but that seems very unlikely to me.
One reason is that similar points also apply to short-termist interventions.
Another is that it seems very likely that, if we try, we can make it more likely that the resources end up in the hands of people who will (in expectation) use them well, rather than in the hands of people who will (in expectation) use them poorly.
We can also model these downside risks.
We haven't done this in detail yet, as far as I'm aware.
But we have come up with a bunch of useful concepts and frameworks for that (e.g., information hazards, the unilateralist's curse, this post of mine [hopefully that's useful!]).
And there's been some basic analysis and estimation for some relevant things, e.g. in relation to 'punting to the future'.
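As a very crude sketch (invented numbers, my own illustration rather than any published analysis), the most basic version of that kind of modelling could look something like this:

```python
# Toy model: expected value when the resources we build up might end up with
# actors who (in expectation) use them badly. All numbers are invented.

def ev_with_misuse_risk(value_if_used_well, harm_if_misused, p_misuse):
    return (1 - p_misuse) * value_if_used_well - p_misuse * harm_if_misused

baseline = ev_with_misuse_risk(value_if_used_well=100, harm_if_misused=80, p_misuse=0.3)
with_safeguards = ev_with_misuse_risk(value_if_used_well=100, harm_if_misused=80, p_misuse=0.1)

print(round(baseline, 1), round(with_safeguards, 1))  # roughly 46 and 82
# Even a crude model like this shows why working to reduce the chance that
# resources land in the wrong hands (the second reason above) matters so much,
# and gives the concepts mentioned above something concrete to plug into.
```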
(All that said, you did just say 'Two possible objections', and I do think pointing out possible objections is a useful part of the cause prioritisation project.)