Update: I’ve received feedback from the SFF round. We got positive evaluations from two recommenders (so my understanding is that the funding allocated to us in the s-process was lower than the speculation grant) and one piece of negative feedback. The negative feedback mentioned that our project might lead to EA getting swamped by normies with high inferential distances, which could have negative consequences, and that because of that risk, “This initiative may be worthy of some support, but unfortunately other orgs in this rather impressive lineup must take priority”.
If you’re considering donating to AIGSI/AISGF, please reach out! My email is ms@contact.ms.
“EA getting swamped by normies with high inferential distances”
This seems like completely the wrong focus! We need huge numbers of normies involved to generate the political pressure necessary to act on AI x-risk before it’s too late. We’ve already tried the “EA’s lobbying behind closed doors” approach, and it has failed (or rather, been co-opted by the big AGI companies).
I wouldn’t include OpenAI/Anthropic’s lobbying efforts in the “EA’s lobbying behind closed doors” category. What evidence do you have for movement in that direction among actual EA orgs?
I do think there’s a concern that a popular movement will move in a direction you didn’t want, but empirically this has already happened with “behind closed doors” lobbying, so I don’t think a popular movement can do worse.
There’s also an argument that a popular movement would be too anti-AI and end up excessively delaying a post-AGI utopia, but I discussed in my post why I don’t think that’s a sufficiently big concern.
(I agree with you, I’m just anticipating some likely counter-arguments)