It seems bad in a few ways, including the ones you mentioned. I expect it to make longtermist groupthink worse, if (say) Kirsten stops asking awkward questions under (say) weak AI posts. I expect it to make neartermism more like average NGO work. Both near and far work need conceptual bravery and empirical rigour, and a schism would hugely sap the pool of complements. And so on.
Yeah, the information cascades and naive optimisation are bad. I have a post coming on a solution (or, more properly, some vocabulary for understanding how people are already solving it).
I’m the author of a (reasonably highly upvoted) post that called out some problems I see with all of EA’s different cause areas sitting under the single umbrella of effective altruism. I’m guessing this is one of the schism posts being referred to here, so I’d be interested in reading more fleshed-out rebuttals.
The comments section contained some good discussion with a variety of perspectives—some supporting my arguments, some opposing, some mixed—so it seems to have struck a chord with at least some readers. I do plan to continue making my case for why I think these problems should be taken seriously, though I’m still unsure what the right solution is.
I doubt I have anything original to say. There is already cause-specific non-EA outreach (not least a little thing called LessWrong!); it’s great, and there should be more. X-risk work is at least half altruistic for a lot of people, at least on the conscious level. We have managed the high-pay tension alright so far (though not without cost). I don’t see an issue with some EA work happening without the EA name; there are plenty of high-impact roles where it would be unwise to broadcast any such social-movement allegiance. The name is indeed not ideal, but I’ve never seen a less bad one, and the switching costs seem far higher than the mild arrogance and very mild philosophical misconnotations of the current one.
Overall, I see schism as solving (at a really high expected cost) some social problems we could instead solve with talking and trade.
DMed examples.
Good post!