Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention and think they're worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:
Anti-claims
(I.e. claims I am not trying to make and actively disagree with)
No one should be doing EA-qua-EA talent pipeline work
I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of great people. However, my guess is that a medium-sized reallocation away from it would be good to try for a few years.
The terms EA and longtermism aren’t useful and we should stop using them
I think they are useful for the specific things they refer to, and we should keep using them in situations where they are relevant and ~the best terms to use (many such situations exist). I just think we are over-extending them to a moderate degree.
It’s implausible that existential risk reduction will come apart from EA/LT goals
E.g. it might come to seem (I don't know if it will, but it is at least imaginable) that attending to the wellbeing of digital minds is more important from an EA perspective than reducing misalignment risk, and that those things are indeed in tension with one another.
This seems like a reason that, all else equal, people who aren't EA and just prioritize existential risk reduction are less helpful from an EA perspective than people who also share EA values, and like something to watch out for, but I don't think it outweighs the arguments in favor of more existential-risk-centric outreach work.
This isn't mostly a PR thing for me. Like I mentioned in the post, I actually drafted and shared an earlier version of that post in summer 2022 (though I didn't decide to publish it for quite a while), which I think is evidence against it being mostly a PR thing. I think the post pretty accurately captures my reasoning at the time: often, people doing this outreach work on the ground were actually focused on GCRs or AI risk and trying to get others to engage on that, and it felt like, for path-dependent reasons, they were ending up using terms that pointed less well at what they were interested in. Further updates towards shorter AI timelines moved me substantially towards favoring the term "GCR" over "longtermism", since I think shorter timelines increase the degree to which a lot of people mostly want to engage others about GCRs or AI risk in particular.