Is your last point meant to be AGI-specific or not? I feel like it would be relatively easy to get non-zero evidence that there was a risk of everyone dying from a full nuclear exchange: you'd just need some really good modelling of the atmospheric effects that suggested a sufficiently bad nuclear winter, where the assumptions of the model themselves were ultimately traceable to good empirical evidence. Similarly for climate change being an X-risk. Sure, even good modelling can be wrong, but unless you reject climate modelling entirely, and are totally agnostic about what will happen to world temperature by 2100, I don't see how there could be an in-principle barrier here. I'm not saying we in fact have evidence that there is a significant X-risk from nuclear war or climate change, just that we could; nothing about "the future is hard to predict" precludes it.
I generally agree, but I think that we are nowhere near being able to say, "The risk of future climate catastrophe was previously 29.5 percent, but thanks to my organization's work, that risk has been reduced to 29.4 percent, thus justifying the money spent." The whole idea of making grants on such a slender basis of unprovable speculation is radically different from the traditional EA approach of demanding multiple RCTs. Might be a great idea, but still a totally different thing. Shouldn't even be mentioned in the same breath.
There are probably good proxies for climate effects, though (i.e. reductions in more measurable things), so I think the situation is not that analogous to AI. And some global health and development work involves outcomes that we actually care about but that are hard to measure: e.g. deworming and its possible positive effects on later earnings, and presumably well-being. We know deworming gets rid of worms, but the literature on the benefits of doing so is famously contentious.
Although we could potentially derive probabilities of various sorts of nuclear incidents causing extinction, the probabilities of those events occurring in the first place are in the end guesswork. By definition, there can be no "evidentiary confirmation" of the guesswork, because once the event occurs, there is no one around to confirm it happened. Thus, the probabilities of event occurrence could be well-informed guesswork, but would still be guesswork.