Thanks for writing this, Will. I feel a bit torn on this so will lay out some places where I agree and some where I disagree:
I agree that some of these AI-related cause areas beyond takeover risk deserve to be seen as cause areas in their own right, and that lumping them all under “AI” risks being a bit inaccurate.
That said, I think the same could be said of some areas of animal work—wild animal welfare, invertebrate welfare, and farmed vertebrate welfare should perhaps get their own billing. And then this can keep expanding—see, e.g., OP’s focus areas, which list several things under the global poverty bracket.
Perhaps on balance I’d vote for poverty, animals, AI takeover, and post-AGI governance or something like that.
I also very much agree that “making the transition to a post-AGI society go well” beyond AI takeover is highly neglected given its importance.
I’m not convinced EA is intellectually adrift and tend to agree with Nick Laing’s comment. My quick take is that it feels boring to people who’ve been in it a while but still is pretty incisive for people who are new to it, which describes most of the world.
I think principles-first EA goes better with a breadth of focuses and cause areas, because it shows the flexibility of the principles and the room for disagreement within them. I tend to think that too much focus on AI can take away from this, so it would concern me if >50–60% of the discussion were around AI.
I very much agree with the PR mentality comments—in particular, I find many uses of the “EA adjacent” term to be farcical. I added effective altruism back into my Twitter bio inspired by this post and @Alix Pham’s.
I agree it would be good for the EA Forum to be a place where more of the AI discussion happens, and I think it’s particularly suited for post-AGI society—it’s been a good place for digital minds conversations, for example.
So I guess I come down on the side of thinking (a) members of the EA community should recognize that there’s a lot more to discuss around AI than takeover, and it merits a rich and varied conversation, but (b) I would be wary of centering “making the transition to a post-AGI society go well” at the expense of other cause areas.
Thank you for acting on this! It’s a team effort :)