Because the question is impossible to answer.
First, by definition, we have no actual evidence about outcomes in the long-term future—it is not as if we can run RCTs where we run Earth 1 for the next 1,000 years with one intervention and Earth 2 with a different intervention. Second, even where experts stand behind short-term treatments and swear that they can observe the outcomes happening right in front of them (everything from psychology to education to medicine), there are many cases where the experts are wrong—even many cases where we do harm while thinking we do good (see Prasad and Cifu’s book Ending Medical Reversal).
Given the lack of evidentiary feedback as well as any solid basis for considering people to be “experts” in the first place, there is a high likelihood that anything we think benefits the long-term future might do nothing or actually make things worse.
The main way to justify long-termist work (especially on AGI) is to claim that there’s a risk of everyone dying (leading to astronomically huge costs), and then claim that there’s a non-zero positive probability of affecting that outcome. There will never be any evidentiary confirmation of either claim, but you can justify any grant to anyone for anything by adjusting the estimated probabilities as needed.
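To make concrete what I mean by "adjusting the estimated probabilities as needed," here is a toy back-of-the-envelope sketch. Every number in it is invented purely for illustration and is not drawn from any actual grant analysis:

```python
# Toy illustration (all numbers invented): the expected-value argument
# for an extinction-risk grant, as described above.

future_lives = 1e35    # assumed value of the long-term future, in lives
p_extinction = 0.10    # guessed probability of extinction this century
p_grant_helps = 1e-9   # guessed probability that this grant averts extinction

# Expected lives saved = value at stake * chance the grant changes the outcome
expected_lives_saved = future_lives * p_extinction * p_grant_helps
print(f"{expected_lives_saved:.0e}")  # ~1e+25 lives, dwarfing any measurable cost

# Because none of these inputs can ever be checked against observed outcomes,
# nudging any one of them a few orders of magnitude in either direction can
# make the same grant look either essential or worthless.
```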
Is your last point meant to be AGI specific or not? I feel like it would be relatively easy to get non-zero evidence that there was a risk of everyone dying from a full nuclear exchange: you’d just need some really good modelling of the atmospheric effects that suggested a sufficiently bad nuclear winter, where the assumptions of the model themselves were ultimately traceable to good empirical evidence. Similarly for climate change being an X-risk. Sure, even good modelling can be wrong, but unless you reject climate modelling entirely, and are totally agnostic about what will happen to world temperature by 2100, I don’t see how there could be an in-principle barrier here. I’m not saying we in fact have evidence that there is a significant X-risk from nuclear war or climate change, just that we could; nothing about “the future is hard to predict” precludes it.
I generally agree, but I think that we are nowhere near being able to say, “The risk of future climate catastrophe was previously 29.5 percent, but thanks to my organization’s work, that risk has been reduced to 29.4 percent, thus justifying the money spent.” The whole idea of making grants on such a slender basis of unprovable speculation is radically different from the traditional EA approach of demanding multiple RCTs. Might be a great idea, but still a totally different thing. Shouldn’t even be mentioned in the same breath.
There are probably good proxies for climate effects though, i.e. reductions in more measurable stuff, so I think the situation is not that analogous to AI. And some global health and development work involves outcomes we actually care about that are hard to measure: e.g. deworming and its possible positive effects on later earnings, and presumably well-being. We know deworming gets rid of worms, but the literature on the benefits of this is famously contentious.
Although we could potentially derive probabilities of various sorts of nuclear incidents causing extinction, the probabilities of those events occurring in the first place are in the end guesswork. By definition, there can be no “evidentiary confirmation” of the guesswork, because once the event occurs, there is no one around to confirm it happened. Thus, the probabilities of event occurrence could be well-informed guesswork, but would still be guesswork.