(Comments from skimming the piece and general thoughts from the current state of AI legislation)
-> Even if there is agreement that there should be a pause, building international trust for one is crucial, and current verification mechanisms are rather weak.
-> Current policy discourse rarely includes X-risks (judging by the legislative drafts, frameworks, and national strategies countries are releasing). Only a very small minority of people in the broader civil society organization (CSO) space seem concerned about X-risks. The recent UN AI Advisory Body report on AI also doesn't really home in on X-risks.
-> There might be strange observer effects wherein proposing a pause makes the proposing party look weak while making the technology seem even more important.
-> Personally, I am not sure the alignment problem has a well-defined endpoint. Any argument for a pause should specify what the "resume" conditions would be. In the current paradigm, there is no definition of acceptable/aligned behavior accepted across stakeholders.
Now,
-> Pausing is a really bad look for people in office. With little precedent to point to, they would be stepping directly into the path of innovation while also angering the tech lobby. They need a good reason to show their constituents why they want to take such an extreme step as pausing progress in a hot area (this is why trigger events matter). It also sets a bad precedent and spooks other sectors (especially in the US, where it would be painted as a Big Government move). Remember, policymakers have a much broader portfolio than just AI, and they do not necessarily consider this the most pressing problem.
-> Pausing hurts the countries that stand to gain the most from AI, or believe they do (typically the Global South and the AI for Good/SDGs community).
-> Any argument for a pause will also have to weigh the opportunity cost of delaying more capable AI.
-> Personally, I don't update much on surveys of the US public, given potential framing biases, the low cost of agreeing, and so on. I also don't think the broader public understands the alignment problem well.