Hi Ollie, thanks for sharing your thoughts here. A lot has already been covered in the comments, so here are a few points that may still be unexplored:
Most moral and political ideologies at some point imply power-seeking? Highly revolutionary leftist ideologies imply, well… revolution. Conservative ideologies at some level imply gaining enough power to conserve the parts of society that are valued. After some reflection, I agree that longtermism doesn't necessarily imply power-seeking; at least, I'm not sure it's in a different class here from other political theories.
So I think what does seem to be causing this is not the philosophy but the practical, on-the-ground situation: a) longtermism growing rapidly in prominence within EA and gaining funding,[1] and b) EA increasing in visibility to the wider world and gaining wider influence (e.g. Toby Ord talking to the UN, EA being associated with the recent wave of concern about AIXR policy, the significant amount of promotion for WWOTF). From the outside, making a naive extrapolation,[2] it could look like longtermism is on its way to becoming a lot more influential in the near future.
The best examples of where this might actually be true come from some highly speculative/ambitious/dangerous ideas present in Bay-Area Rationalist orgs, though I think these happened before ‘longtermism’ was actually a thing:[3]
There are suggestions that Leverage Research had a half-baked grand plan to convert their hoped-for breakthroughs in human behaviour/psychology into the ability to take over the US Government (source1, source2 - ctrl+F “take over”)
There are also hints of a plan from MIRI involving a “pivotal act”, which seems to cash out as getting researchers to develop a somewhat-aligned proto-AGI and using it to take over the world, preventing any unaligned AGIs from being built (source1, source2 - I'm not as sure what the truth is here, but I think there is some smoke)
Finally, I think a big thing that contributes to this idea of EA and longtermism as inherently and/or dangerously power-seeking is framing by its ideological enemies. For instance, I don’t think Crary or Torres[4] are coming at this from a ‘mistake theory’ perspective - it’s ‘conflict theory’ all the way for them. I don’t think they’re misreading longtermist works by accident; I think they view longtermism as inherently dangerous because of their ideological differences with it, and they’re shouting loudly about it. This is a lot of people’s first impression of longtermism/EA—and unfortunately I think it often sticks. Critically, prominent longtermists seem to have ceded this ground to their critics and don’t prominently push back against it, which I think is a mistaken strategy.
Would be really interested to hear what you (or others) think about this.
[1] Though again, to the best of my knowledge, GH&D is still the number 1 cause area by funding.
[2] Especially pre-FTX collapse.
[3] IMPORTANT NOTE: I don’t claim to know fully what the truth behind these claims is, but they did stick in my mind while thinking about the post, and I’m happy to amend/retract if provided clearer evidence by those in the know. I don’t think any of these plans had a realistic chance of succeeding, but if true they still point to a concerning background trend.
[4] Especially Torres, who seems to be fighting a personal holy war against longtermism. I remain perplexed about what happened here, since he seems to have been an active EA concerned about xRisk for some time.