I think this is the best policy I’ve seen so far, in that it reduces both near-term and far-term risks, and has a nonzero chance of actually being implemented in some form.
I think the most plausible paths to AI catastrophe involve AIs taking over large corporations, so anything that limits corporate power probably helps.