I think it is pretty important that, by its own internal logic, longtermism has had negative impact. The AI safety community probably accelerated AI progress. OpenAI is still pretty connected to the EA community and has been starting arms races (at least until recently, the 80K jobs board listed jobs at OpenAI). This is well known but true. Longtermism has also been connected to all sorts of scandals.
As far as I can tell, neartermist EA has been reasonably successful. So it's kind of concerning that institutional EA is dominated by longtermists. It would be nice to have institutions run by people who genuinely prioritize neartermism.
I think the problem here is that this makes a category mistake about how the move to longtermism happened. It wasn't driven by any success or failure metric; it was the underlying arguments becoming convincing to people. For example, Holden Karnofsky moved from founding GiveWell to heading the longtermist side of Open Phil and focusing on AI.
The people who made neartermist causes successful chose of their own accord to move to longtermism. They aren't being coerced away. GHW donations are growing in absolute terms. The feeling that there isn't enough institutional support isn't a funding problem; it's a vibes problem.
Additionally, I don't even know whether anyone outside the doomiest people would say longtermism has had a negative impact, given that it also accelerated alignment organisations (obviously contingent on your optimism about solving alignment). Most people think there's decent headway insofar as Greg Brockman is talking about alignment seriously and this salience doesn't spiral into a race dynamic.
Is the idea of an EA split to force Holden back to GiveWell? Is it to make Ord and MacAskill go back to GWWC? I just find these posts kind of weird in that they imagine people being pushed into longtermism, forgetting that a lot of longtermists were neartermists at one point and made the choice to switch.
I think OP's idea is not to get longtermists to switch back, but to insulate neartermists from the harms that one might argue come from sharing a broader movement name with the longtermist movement.
Fwiw, my guess is that longtermism hasn't had net negative impact by its own standards. I don't think the negative effects from speeding up AI outweigh the various positive impacts (e.g. promotion of alignment concerns, setting up alignment research, and non-AI work).
One issue for me is just that EA has radically different standards for what constitutes “impact.” If near-term: lots of rigorous RCTs showing positive effect sizes.
If long-term: literally zero evidence that any longtermist efforts have been positive rather than negative in value, which is a hard enough question to settle even for current-day interventions where we see the results immediately . . . BUT if you take the enormous liberty of assuming a positive impact (even just slightly above zero), and then assume lots of people in the future, everything has a huge positive impact.
Also: https://twitter.com/moskov/status/1624058113119645699