it seems extremely likely that specific countermeasures will be taken to prevent a catastrophe, at least indirectly.
This suggests that one of your cruxes with mainstream EA views is: “EAs believe there won’t be countermeasures, but countermeasures are very likely, and they mitigate AI risk significantly more than EAs expect.” (If that is not one of your cruxes, then you can ignore the rest of this!)
The confusing thing about that is, what if EA activities are a key reason why good countermeasures end up being taken against AI? In that case, EA arguments would be a “victim” of their own success (though no one would be complaining!). But that doesn’t seem like a reason to disagree right now, when there is the common ground of “specific countermeasures really need to be taken”.
The confusing thing about that is, what if EA activities are a key reason why good countermeasures end up being taken against AI?
I find that quite unlikely. I think EA activities contribute on the margin, but it seems very likely to me that people would eventually have taken measures against AI risk in the absence of any EA movement.
In general, while I agree we should not take this argument so far that EA ideas become “victims of their own success”, I also think neglectedness is a standard barometer EAs have used to judge the merits of their interventions. And I think AI risk mitigation will very likely not be a neglected field in the future. This should substantially downweight our evaluation of AI risk mitigation efforts.
As a trivial example, you’d surely concede that EAs should not work on, e.g., making sure that future spacecraft designs are safe? Advanced spacecraft could indeed play a very important role in the future; but it seems unlikely that society would neglect to work on spacecraft safety, making this a pretty unimportant problem to work on right now. To be clear, I definitely don’t think the case for working on AI risk mitigation is as bad as the case for working on spacecraft safety, but my point is that the same idea applies in both cases.