Very interesting stuff. I’d be wary of the Streisand Effect, that calling attention to the danger of AI-powered corporate lobbying might cause someone to build AI for corporate lobbying. Your third section clearly explains the risks of such a plan, but might not be heeded by those excited by AI lobbying.
Unfortunately, I think the upside of considering amendments to lobbying disclosure laws to address the implications of this outweighs the downsides of people learning more about it.
Also, well-funded special-interest groups are more likely to independently discover and advance AI-driven lobbying than the less well-funded, more diffuse interests of average citizens.
That’s a good argument, I think I agree.
What kinds of amendments to lobbying disclosure laws could be made? Is it practical to require disclosure of LLM use in lobbying when detection is not yet reliable? Is disclosure even enough, or is it necessary to ban LLM lobbying entirely? I assume this would need to be a new law passed by Congress rather than an FEC rule — would you know if there is or has been any consideration of similar legislation?
I agree this would require new legislation to fully address (rather than merely a change to a rule under existing statute).
As far as I’m aware there has not been any consideration of relevant legislation, but I would love to learn about anything that others have seen that may be relevant.
Op-eds in the NYT and WaPo discuss threats to discourse and democracy from ChatGPT. Both cite your example, though neither links your paper, perhaps out of infohazard concerns. Looks like your concerns are gaining traction.
https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html?smid=nytcore-ios-share&referringSource=articleShare
https://www.washingtonpost.com/business/chatgpt-could-makedemocracy-even-more-messy/2022/12/06/e613edf8-756a-11ed-a199-927b334b939f_story.html