I wonder about the implicit assumption that AI is neutral with respect to advancing progressive policy.
For example, you could probably use LLMs today to mass-produce letters of support for specific developments. Very soon, those letters will be eloquent and persuasive as well.
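To make that concrete, here is a minimal sketch of what "mass-producing letters" could look like today, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name, prompt wording, and list of developments are all placeholders, not a recommendation:

```python
# Hypothetical sketch: drafting one letter of support per proposed development with an LLM.
# Assumes the OpenAI Python SDK (>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative placeholders, not real projects.
developments = [
    "the proposed 120-unit apartment building at 5th and Main",
    "the rezoning of the Elm Street corridor for mixed use",
]

def draft_letter(development: str) -> str:
    # One API call per letter; a real campaign would vary tone and personal details.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user",
             "content": f"Write a short, polite letter to the city council supporting {development}."},
        ],
    )
    return response.choices[0].message.content

for d in developments:
    print(draft_letter(d))
    print("---")
```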
You’ll be able to do the same to argue for better legislation, and eventually to automate CBO-style (Congressional Budget Office) scores that are both more specific and more robust.
Of course, this could become an arms race, but I would hope that the stronger factual case for progressive policies would make it an uneven one.