Strongly agreed about more outreach there. What specifically do you imagine might be best?
I’m extremely concerned about AI safety becoming negatively polarized. I’ve spent the past week in DC meeting Republican staffers and members, who, when approached in the right frame (which most EAs cannot manage), are surprisingly open to learning about AI x-risk and are by default extremely concerned about it.
I’m particularly concerned about a scenario in which Kamala wins and Republicans become anti AI safety as a partisan thing. This doesn’t have to happen, but there’s a decent chance it does. If Trump had won the last election, anti-vaxxers wouldn’t have been as much of a thing–it’d have been “Trump’s vaccine.”
I think if Trump wins, there’s a good chance we see his administration exert leadership on AI (among other things, see Ivanka’s two recent tweets and the site she seems to have created herself to educate people about AI safety), and then Republicans will fall in line.
If Kamala wins, I think there’s a decent chance Republicans react negatively to AI safety because it’s grouped in with what’s perceived as woke bs–which is just unacceptable to the right. It’s essential that AI safety be understood as a totally distinct thing. I don’t think left-leaning AI safety people sufficiently understand just how unacceptable that association is. A good thought experiment might be to consider whether Democrats would be into AI safety if it also meant banning gay marriage.
I’m fairly confident that most EAs simply cannot model the mind of a Republican (though they often think they can). This leads to planning and strategies that are less effective than they could be. In contrast, to be a right-of-center EA, you also need to effectively model the mind of a left-of-center EA/person (and find a lot of common ground), or you’d simply not be able to exist in this community. So the few right-of-center EAs (or EAs with previous right-of-center backgrounds) I know are able to think far more effectively about the best strategies to accomplish optimal bipartisan end results for AI safety.
Things do tend to become partisan eventually. An ideal outcome might be that what becomes partisan is just how much AI safety is paired with “woke” stuff, with Democrats encouraging the pairing and Republicans opposing it. The worst outcome might be that the two are conflated, and then Republicans–who would ideally exert great leadership on AI x-risk and drive forward a reasonable conservative agenda on it–wind up falling for the Ted Cruz narrative and blocking everything.