As part of an AMA I put on X, I was asked for my “top five EA hot takes”. If you’ll excuse the more X-suited tone and spiciness, here they are:
1. OpenAI, Anthropic (and to a lesser extent DeepMind) were the worst cases of the Unilateralist's Curse of all time. EAs love to discourage enthusiastic newcomers by warning them not to take “net negative” unilateralist actions (i.e. don’t start new projects in case they crowd out better, more “well thought through” projects in future, with “more competent” people doing them), but nothing will ever top the monumental unilateralist curse fuck-up that was supporting Big AGI in its beginnings.
2. AI Safety is nothing without a Pause. Too many EAs are stuck in the pre-GPT-4 paradigm of maxing research, when it’ll all be for nothing unless we get a Pause first. More EAs should switch to Notkilleveryoneism/PauseAI/StopAGI.
3. EA is too elitist. We should be triaging the world’s problems like crazy, and the top 1-2% of people are more than capable of that (most jobs that need doing in EA don’t require the top 0.1%).
4. EA is too PR focused—to the point where it actually backfires spectacularly and now there is lots of bad press [big example: SBF’s bad character being known about but not addressed].
5. Despite all its flaws, EA is good (and much better than the alternatives in most cases).
Regarding 2 - Hammers love Nails. EAs, as Hammers, love research, so they are biased towards seeing the need for more research (after all, it is what smart people do). Conversely, EAs are less likely (personality-wise) to be comfortable with advocacy and protests (smart people don’t do this). Advocacy is the wrong type of nail for their hammer.
Although what you said might be part of the explanation for why many EAs focus on alignment or governance research rather than pause advocacy, I think the bigger part is that many EAs think that pause advocacy isn’t as good as research. See, e.g., some of these posts.
Yes, my guess is they (like most people!) are motivated by things they (1) are good at and (2) see as high status.
My guess is that many EAs would find protesting cringy and/or awkward!
See all my comments and replies on the anti-pause posts. I don’t think any of the anti-pause arguments stand up if you put significant weight on timelines being short and p(doom) high (and viscerally grasp that yes, that means your own life is in danger, and those of your friends and family too, in the short term! It’s no longer just an abstract concern!).
These all seem good topics to flesh out further! Is 1 still a “hot take” though? I thought this was pretty much the consensus here at this point?
Maybe half the community sees it that way. But not the half with all the money and power, it seems. There aren’t (yet) large resources being put into playing the “outside game”. And there hasn’t been anything in the way of EA leadership (OpenPhil, 80k) admitting the error, afaik.
Seems pretty dependent on how seriously you take some combination of: AI x-risk in general, the likelihood of the naïve scaling hypothesis holding (if it even holds at all), and what the trade-off between empirical and theoretical work on AI Safety is, no?
Do you think there is tension between 2 and 4 insofar as mechanisms to get a pause done may rely strongly on public support?
(Sorry I missed this before.) There is strong public support for a Pause already. Arguably all that’s needed is galvanising a critical mass of the public into taking action.
Could you explain why you think ‘too much focus being placed on PR’ resulted in bad press?
Perhaps something like: because people were worried about harming SBF’s public reputation they didn’t share their concerns with others, and thus the community as a whole wasn’t able to accurately model his character and act appropriately?
More like, some people did share their concerns, but those they shared them with didn’t do anything about it (because of worries about bad PR, but also maybe just as a kind of “ends justify the means” thing re his money going to EA; the latter might actually have been the larger effect).
Ah ok—I guess I would phrase it as ‘not doing anything about concerns because they were too focused on short-term PR’.
I would phrase it this way because, in a world where EA had been more focused on PR, I think we would have been less likely to end up with a situation like SBF (because a greater focus on PR would have meant doing a better job of PR).