Ride the current wave of AI skepticism from people worried about it being racist, or about being replaced and left unemployed. Lobby for significantly more government involvement to slow down progress (like the FDA in medicine).
I agree! In recent days, I’ve been soundboarding an idea of mine:
Idea: AI Generated Content (AIGC) Policy Consultancy
Current Gaps:
1. Policy around services provided by AIGC is unlikely to be good within the next decade, despite the speed with which AI will begin automating tasks and industries. See: social media and crypto policy.
2. The AI Safety community currently struggles to present strong, compelling value propositions or near-term inroads into policymaking circles. This is consistent with other x-risk topics. See: climate and pandemic risk.
Proposition: The EA community gathers law and tech people to formulate an AIGC policy framework. This will require ~10 tech/law people, which is quite feasible as an EA project.
Benefits:
1. Formulating AIGC policy will establish credibility and political capital to tackle alignment problems
2. AIGC is the most accessible framing of AI risk for the public, allowing AIS to reach mainstream appeal
3. Plays into EA’s core competency of overanalysing problems
4. Likely a high first-mover advantage: if EA can set the tone for AI policy discourse, it can head off misconceptions about AI as a new technology, which benefits AIS in the long run
Further Thoughts
Coming from a climate advocacy background, I think this is the least low-probability way for EA to engage the public and policymakers on AIS. It seeks to answer: “How do we get politicians to take EA’s AIS stances seriously?”
Some AIS people I’ve talked to don’t immediately see the value of this idea. However, having been a climate advocate, I learned of an incredibly long history of scientists’ input being ignored simply because the public and policymakers did not prioritise climate risk work.
It was ultimately advocacy, predominantly by youth, that mobilised institutional resources and demand to the level required. I strongly suspect the same will hold true for AI Safety, and I hope that this time the x-risk community doesn’t make the same mistake of undervaluing external support. So this plan is meant to provide a value proposition for AI Safety that non-AIS people understand better.
So far, I haven’t been able to make much progress on this idea. The problem is that I am in neither the law field nor the technical AIS field (something I hope to work on next year), so if it happens, I essentially need to find someone else to spearhead it.
Anyway, I posted this idea publicly because I’ve been procrastinating on developing it for ~1 week, so I figured it was better to send it out into the ether and see if anyone feels inspired, rather than just let it sit in my Drafts. Do reach out if you or anyone you know might be interested!