I would add that it’s not just extreme proposals to make “AI go well,” like Yudkowsky’s airstrike, that potentially have negative consequences beyond the counterfactual cost of not spending the money on other causes. Even ‘pausing AI’ through democratically enacted legislation, passed as a result of smart and well-reasoned lobbying, might be significantly negative in its direct impact, if the sort of ‘AI’ restricted would never have become a malign superintelligence but would have been very helpful to economic growth generally, and perhaps to medical researchers specifically.
This applies if the imminent-AGI hypothesis is false, and probably to an even greater extent if it is true.
(The simplest argument for why it’s hard to justify all EA efforts to make AI go well purely on the neglectedness of the cause is that some EA theories about what is needed for AI to go well directly conflict with others; to justify a course of action, one needs some confidence not only that AGI is possibly a threat, but also that the proposed approach at least doesn’t increase the threat. It is possible that donations to a “charity” that became a commercial AI accelerationist and donations to lobbyists attempting to pause AI altogether were both mistakes, but it seems implausible that they were both good causes.)