To be honest, Nate’s analysis of the hope for government action sometimes comes across as if it’s from someone who never studied political economy or other poli-sci topics, assumed governments would act rationally, and then concluded that governments are hopeless once that assumption turned out to be flawed.
If you started out assuming that governments are naturally rational and fast-acting, then yes, you obviously need to update away from that in light of the inadequate COVID response (and many other examples predating COVID). And yes, you do need to have some plausible causal chains for success.
But COVID definitely shouldn’t be taken as proof of hopelessness (or of other pessimistic claims like “Warning Shots Probably Wouldn’t Change The Picture Much”), if only because there are examples where governments did take sensible, or at least strong, action in response to threats (sometimes before those threats even materialized). See for example: 9/11, Y2K, the Nunn-Lugar Cooperative Threat Reduction program, the asteroid tracking system, etc. Some of these have even been analyzed in posts here on the EA Forum! (It’s plausibly also worth learning about MADD: Mothers Against Drunk Driving.)
Ultimately, factors such as “probability of affecting the decision-makers personally,” “existing options for response,” “temporal sharpness of harm,” “good political narratives,” “influential advisors,” etc. matter. Are those things guaranteed for AI? No, but I don’t think the COVID situation is a sufficient reason to assume that we can’t act rationally in response to a clear warning shot for AI.
Models of success should reflect this uncertainty clearly in their reasoning/modeling, in case later analysis shows it to be false (i.e., shows that warning shots for AI probably won’t matter).
To be honest, Nate’s analysis of the hope for government action sometimes comes across as if it’s from someone who never studied political economy or other poli-sci topics, assumed governments would act rationally, and then concluded that governments are hopeless once that assumption turned out to be flawed.
Nate thinks we should place less of our hope and focus on governments, and more of it on corporations; but corporations obviously aren’t perfect rational actors either.
This isn’t well predicted by “perfect rational actor or bust”, but it’s well predicted by “Nate thinks the problem is at a certain (high) level of difficulty, and the best major governments are a lot further away from clearing that difficulty bar than the best corporations are”.
From Nate’s perspective, AGI is a much harder problem than anything governments have achieved in the past (including the good aspects of our response to nuclear, Y2K, 9/11, and asteroids). In order to put a lot of our hope in sane government response, there should be clear signs that EA intervention can cause at least one government to perform better than any government ever has in history.
COVID’s relevance here isn’t “a-ha, governments failing on COVID proves that they never do anything right, and therefore won’t do AGI right”; it’s “we plausibly won’t get any more opportunities (that are at least this analogous to AGI risk) to test the claim that EAs can make a government perform dramatically better than any government ever has before; so we should update on what data we have (insofar as we even need more data for such an overdetermined claim), and pin less of our hopes on government outperformance”.
If EAs can’t even get governments to perform as well as they have on other problems, in the face of a biorisk warning shot, then we’ve failed much more dramatically than if we’d merely succeeded in making a government’s response to COVID as sane as its response to the Y2K bug or the collapse of the Soviet Union.
(This doesn’t mean that we should totally give up on trying to improve government responses — marginal gains might help in some ways, and unprecedented things do happen sometimes. But we should pin less of our hope on it, and treat it as a larger advantage of a plan if the plan doesn’t require gov sanity as a point of failure.)
Are there other things you think show Nate is misunderstanding relevant facts about gov, that aren’t explained by disagreements like “Nate thinks the problem is harder than you do”?
Re “AGI is a harder problem”, see Eliezer’s description:
[...] I think there’s a valid argument about it maybe being more possible to control the supply chain for AI training processors if the global chip supply chain is narrow (also per Carl).
It is in fact a big deal about nuclear tech that uranium can’t be mined in every country, as I understand it, and that centrifuges stayed at the frontier of technology and were harder to build outside the well-developed countries, and that the world ended up revolving around a few Great Powers that had no interest in nuclear tech proliferating any further.
Unfortunately, before you let that encourage you too much, I would also note that it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high, with the actual thresholds involved being unpredictable.
[...]
I would be a lot more cheerful about a few Great Powers controlling AGI if AGI produced wealth, but more powerful AGI produced no more wealth; if AGI was made entirely out of hardware, with no software component that could keep getting orders of magnitude more efficient using hardware-independent ideas; and if the button on AGIs that destroyed the world was clearly labeled.
That does take AGI to somewhere in the realm of nukes.