And you’ve already agreed that it’s implausible that these efforts would lead to tyranny; you think they will just fail.
I think that conditional on the efforts working, the chance of tyranny is quite high (ballpark 30-40%). I don’t think they’ll work, but if they do, it seems quite bad.
And since I think x-risk from technical AI alignment failure is in the 1-2% range, the risk of tyranny is the dominant effect of an “actually enforced global AI pause” in my EV calculation, followed by the extra fast-takeoff risks, and then by “maybe we get net-positive alignment research.”
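To make that comparison concrete, here is a minimal sketch of how those two numbers trade off, conditional on the pause actually being enforced. Only the 30-40% conditional tyranny figure and the 1-2% alignment x-risk figure come from this exchange; treating the two outcomes as comparably bad is a simplifying assumption made purely for illustration.

```python
# Minimal EV sketch, conditional on an "actually enforced global AI pause".
# Only the two probabilities below come from the discussion; treating
# tyranny and alignment-failure x-risk as equally bad outcomes is a
# simplifying assumption for illustration.

p_tyranny_given_enforced_pause = 0.35  # stated ballpark: 30-40%
p_xrisk_alignment_failure = 0.015      # stated range: 1-2%

# Under the equal-badness assumption, the enforced pause trades away at
# most ~2% of alignment-failure x-risk for a ~35% tyranny risk:
net_added_risk = p_tyranny_given_enforced_pause - p_xrisk_alignment_failure
print(f"net added risk from enforced pause: {net_added_risk:.3f}")  # ~0.335
```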
Conditioning on “the efforts” working is horribly underspecified. Do you mean a global governance mechanism run by a new supranational body with military powers, monitoring and stopping the production of GPUs, or a standard treaty with a multi-party inspection regime?
I’m not conditioning on the global governance mechanism; I assign nonzero probability mass to the “standard treaty” scenario. But I think you would in fact very likely need global governance, so that is the main causal mechanism through which tyranny happens in my model.