I think I'm not following the first stage of your argument. Why would the FTX fiasco imply that community building specifically (rather than EA generally) might be net-negative?
I think the idea is that EA institutions look much worse after FTX but EA causes do not. SBF being a fraud may cause you to update about whether (e.g.) CEA is a good organization but should not cause you to update on bednets/AI.
Reading the first paragraph of the OP, here's me trying to excavate the argument:
1. Just like positive impact is likely "heavy-tailed," so is negative impact (see also this paper; a toy simulation after this list makes the point concrete)
2. Introducing people to EA ideas increases their agentiness and "attempts to optimize"
3. Sometimes when people try to optimize something, things go badly wrong (e.g., FTX)
4. It's conceivable, therefore, that EA community building has net negative impact
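To make step 1 concrete, here is a minimal Monte Carlo sketch of how heavy-tailed impact can let a small minority of harmful optimizers flip the sign of a group's total impact. It is not from the OP: the group size, the share of harmful actors, and the tail index are all made-up assumptions chosen purely for illustration.

```python
import numpy as np

# Toy model of "heavy tails cut both ways." Every parameter below is a
# made-up assumption for illustration, not a claim from the thread.
rng = np.random.default_rng(0)

N = 1_000        # hypothetical people reached by community building
P_BAD = 0.05     # assumed share whose optimizing backfires
ALPHA = 0.8      # Pareto tail index; < 1 is a deliberately extreme tail
TRIALS = 2_000

net_negative = 0
for _ in range(TRIALS):
    # Heavy-tailed impact magnitudes (classical Pareto, all >= 1).
    magnitude = rng.pareto(ALPHA, size=N) + 1.0
    # A small random minority causes harm instead of good.
    harmful = rng.random(N) < P_BAD
    signed = np.where(harmful, -magnitude, magnitude)
    net_negative += signed.sum() < 0

print(f"runs with net-negative total impact: {net_negative / TRIALS:.1%}")
```

With a tail this heavy, the total is dominated by its few largest draws, so the total can come out negative roughly whenever the single most extreme actor happens to be a harmful one; with thinner tails (say ALPHA = 3), net-negative runs essentially disappear. The particular numbers don't matter; the qualitative point is that under heavy tails the sign of the aggregate can hinge on a handful of outliers.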
I think the argument is incomplete. Other things to think about:
Are there any reasons why it might be systematically easier to destroy value than to create it? Seems plausible.
But: What's the alternative, what's the default trajectory without an EA "movement" of some sort? Doesn't seem like much value?
Beware of false dichotomies: Instead of movement building vs. no movement building, are there ways to increase the robustness of movement building? E.g., not promoting individuals with a particular psychology who may be disproportionately likely to end up with outsized negative impact?
Edit: worth saying that the OP does provide constructive suggestions!
Having read the other parts of the post as well, I think the OP further claims that the best way to counteract the risks of unintended negative impact is via "institutional reforms" and "democratization."
I'm not convinced that this is the best response. I think overdoing it with institutional reforms would add a bunch of governance overhead that unnecessarily* slows down good actors and can easily be exploited/weaponized (or simply sidestepped) by bad actors. Also, "democratization" sounds virtuous in theory, but large groups of people collectively tend to have messed-up epistemics: the discourse amplifies applause lights, or quickly turns toxic through dynamics where we mostly hear from the most vocal skeptics (who often have a personal grudge or some other problem) and from armchair quarterbacks who don't have a clue what they're missing. There comes a point where you'll get scrutinized far more for bad actions than for bad omissions (or for other things that somewhat randomly and unjustifiably evoke moral outrage in specific people; see this comment by Jonas Vollmer).
Maybe I'm strawmanning the calls for reform, and people who want governance safeguards mostly mean things I would also agree with. To be clear, there are probably quite a few suggestions in the spirit of "institutional reforms" that I'd be in favor of. It's not that I think all governance overhead is bad; e.g., I think boards are quite essential if the board members are actually engaged and committed to an org's mission. Also, I think EA orgs should give regular updates in which leadership transparently communicates the reasons they did what they did, speaking real talk instead of putting up a PR front. (I think there's a lot of room for improvement here!)
*It's not like governance safeguards will magically turn people who lack the required qualities into competent good actors. I concede that there's a lot of truth to "power (without accountability) corrupts." So one might argue that even good leaders may turn bad if they aren't accountable to anyone. However, that seems like a definitional dispute. As a "good leader," you'd be terribly scared of mission drift and becoming corrupted, so you'd seek out a way to stay accountable to people whose judgment you respect. If you're not terribly scared of these things, or if you're the only person in the world whose judgment you respect, then you're not a good leader in the first place. Processes work well when they're designed by a founder (or CEO) who is highly committed to the org's mission and has a vision. Structures externally imposed on people rarely have the desired effect. If we want to reap the fruits of highly impactful organizations and institutions, we have to be prepared to give some founders or CEOs (the right ones!) a cushion of initial trust. (And then keep watching them carefully so they don't use it all up and go negative.)
The problem is that most calls for reform lack specifics, and it is very difficult to meaningfully assess most reform proposals without them.
However, that is not necessarily the reformers' fault. In my view, it's not generally appropriate to deduct points for not offering more specific proposals if the would-be reformer has good reason to believe that reasonable proposals would be summarily sent to the refuse bin.
If Cremer's proposals in particular are getting a lot of glowing media attention, it seems worthwhile for the community to explain more clearly why her specific proposals, in their summary form, lack enough promise to warrant further investigation, and to make an attempt to operationalize the ideas that might be warranted and feasible. Even if the ideas were ultimately rejected, "the community discussed the ideas, fleshed some of them out, and decided that the benefits did not exceed the costs" is a much more convincing response from an optics perspective than a blanket dismissal.
My own tentative view is that her specific ideas range from the fanciful (and thus unworthy of further investigation/elaboration) to the definitely-plausible-if-fleshed-out, so I think it's important to take each on its own merits. That is, of course, not a suggestion that any individual poster here has an obligation to do that, only that it would be a good thing if done in some manner. On average, the ideas I've seen described on the forum are better because they are less grand and more targeted/specific.
Yeah, makes sense. I just don't know why it's not just: "It's conceivable, therefore, that EA community building has net negative impact."

If you think that EA is / EAs are net negative in value, then surely the more important point is that we should disband EA entirely / collectively rid ourselves of the foolish notion that we should ever try to optimise anything / commit seppuku for the greater good, rather than ease up on the community building.
...because we have object-level data on the impact of many things, but very little on the net impact of community building on the object-level outcomes we care about. And community building has a very indirect impact, so on priors we should be less certain of how useful it is.
I think I did a poor job of distinguishing what I call "institutional EA" (or "EA community building") from EA (or "EA as an idea"). But basically, there's a difference between the idea of attempting to do good using evidence (or whatever your definition of EA might be) and particular efforts to expand the circle of people who identify as/affiliate with effective altruists. The former is what I'm calling EA/the idea of EA, and the latter is community building.
As might be obvious from this description, there are many possible ways to do EA community building, some with better and some with worse effects (and one could think that community building efforts on average have positive or negative effects). My claim is that the set of EA community building efforts conducted to date may plausibly have had net negative effects.