After also reading the other parts of the post, I think the OP makes the further claim that the best way to counteract the risks of unintended negative impact is via “institutional reforms” and “democratization.”
I’m not convinced that this is the best response. I think overdoing it with institutional reforms would add a lot of governance overhead that unnecessarily* slows down good actors and can easily be exploited/weaponized (or simply sidestepped) by bad actors. And “democratization” sounds virtuous in theory, but large groups of people tend to have poor collective epistemics: the discourse amplifies applause lights, or quickly turns toxic through dynamics where we mostly hear from the most vocal skeptics (who often have a personal grudge or some other problem) and from armchair quarterbacks who have no clue what they’re missing. There comes a point where bad actions get scrutinized far more than bad omissions (or than other things that somewhat randomly and unjustifiably evoke moral outrage in specific people – see this comment by Jonas Vollmer).
Maybe I’m strawmanning the calls for reform, and people who want governance safeguards mostly mean things I would also agree with. To be clear, there are probably quite a few suggestions in the spirit of “institutional reforms” that I’d be in favor of. It’s not that I think all governance overhead is bad – e.g., I think boards are essential if the board members are actually engaged and committed to an org’s mission. I also think EA orgs should give regular updates in which leadership transparently explains the reasons behind its decisions, speaking real talk instead of putting up a PR front. (There’s a lot of room for improvement here!)
*It’s not like governance safeguards will magically turn people who lack the required qualities into competent good actors. I concede there’s a lot of truth to “power (without accountability) corrupts.” So one might argue, “even good leaders may turn bad if they aren’t accountable to anyone.” But that seems like a definitional dispute. As a “good leader,” you’d be terribly scared of mission drift and of becoming corrupted, so you’d seek out ways to stay accountable to people whose judgment you respect. If you’re not terribly scared of these things, or if you’re the only person in the world whose judgment you respect, then you weren’t a good leader in the first place. Processes work well when they’re designed by a founder (or CEO) who is highly committed to the org’s mission and has a vision; processes imposed from the outside rarely have their desired effects. If we want to reap the fruits of highly impactful organizations or institutions, we have to be prepared to give some founders or CEOs (the right ones!) a cushion of initial trust. (And then keep watching them carefully so they don’t use it all up and go negative.)
The problem is that most calls for reform lack specifics, and it is very difficult to meaningfully assess most reform proposals without them.
However, that is not necessarily the reformers’ fault. In my view, it’s not generally appropriate to deduct points for not offering more specific proposals if the would-be reformer has good reason to believe that reasonable proposals would be summarily sent to the refuse bin.
If Cremer’s proposals in particular are getting a lot of glowing media attention, it seems worthwhile for the community to explain more clearly why her specific proposals, in their summary form, lack enough promise to warrant further investigation, and to make an attempt to operationalize those ideas that might be warranted and feasible. Even if the ideas were ultimately rejected, “the community discussed the ideas, fleshed some of them out, and decided that the benefits did not exceed the costs” is a much more convincing response from an optics perspective than blanket dismissals.
My own tentative view is that her specific ideas range from the fanciful (and thus unworthy of further investigation/elaboration) to the definitely-plausible-if-fleshed-out, so I think it’s important to take each on its own merits. That is, of course, not a suggestion that any individual poster here has an obligation to do so, only that it would be a good thing if done in some manner. On average, the ideas I’ve seen described on the forum are better because they are less grand and more targeted and specific.
Yeah, makes sense. I just don’t know why it’s not just: “It’s conceivable, therefore, that EA community building has net negative impact.”
If you think that EA is / EAs are net negative in value, then surely the more important point is that we should disband EA totally / collectively rid ourselves of the foolish notion that we should ever try to optimise anything / commit seppuku for the greater good, rather than just ease up on the community building.
...because we have object-level data on the impact of many things, but very little on the net impact of community building on the object-level outcomes we care about. And community building’s impact is very indirect, so on priors we should be less certain of how useful it is.
I think I did a poor job of distinguishing what I call “institutional EA” (or “EA community building”) from EA (or “EA as an idea”). Basically, there’s a difference between the idea of attempting to do good using evidence (or whatever your definition of EA might be) and particular efforts to expand the circle of people who identify as / affiliate with effective altruists. The former is what I’m calling EA (the idea of EA), and the latter is community building.
As might be obvious from this description, there are many possible ways to do EA community building, which might have better or worse effects (and one could think that community building efforts on average have positive or negative effects). My claim is that it is plausible that the set of EA community building efforts conducted to date has had net negative effects.
Reading the first paragraph of the OP, here’s me trying to excavate the argument:
1. Just like positive impact is likely “heavy-tailed,” so is negative impact (see also this paper).
2. Introducing people to EA ideas increases their agentiness and “attempts to optimize.”
3. Sometimes when people try to optimize something, things go badly wrong (e.g., FTX).
4. It’s conceivable, therefore, that EA community building has net negative impact.
I think the argument is incomplete. Other things to think about:
Are there any reasons why it might be systematically easier to destroy value than to create it?
Seems plausible.
But: What’s the alternative, what’s the default trajectory without an EA “movement” of some sort?
Doesn’t seem like much value?
Beware of false dichotomies: Instead of movement building vs. no movement building, are there ways to increase the robustness of movement building?
E.g., not promoting individuals with a particular psychology who may be disproportionately likely to end up with outsized negative impact?
Edit: worth saying that the OP does provide constructive suggestions!