I think Lark’s response is reasonably close to my object-level position.
My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal “OpenPhil should diversify its grantmaking by giving half its money to a randomly chosen Frenchman”. This probably reduces echo chamber problems in EA, but it also seems to me like a terrible idea.
I don’t think the post properly engages with the question “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”. I think this question is very important, and I think about it a fair bit, but I think that this post is a pretty shallow discussion of it that doesn’t contribute much novel insight.
I encourage people to write posts on the topic of “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”; perhaps such posts could look at historical examples, or mechanisms via which powerful people can get the echo-chamber-reduction effects without the random-people-now-use-your-resources-to-do-their-random-goals effect.
Practicing reasoning transparency would, I think, yield echo-chamber-reduction effects, and would also inform the powerful person practicing it about how to weigh the pros and cons of transferring power (for themselves, to begin with). Moreover, without it, I don’t see “using reason and evidence to do the most good” being practiced, nor, with that, a license to be a powerful EA as opposed to simply powerful. And if EA membership were instead just about trying to do the most good, that would include all of humanity minus a few deviants.
I appreciate your point that people who donate are under no obligation. As such, an advisory (rather than instructing) role toward them seems fitting. On the other hand, the intellectual EA community should also have the freedom to decline certain money, to decline certain money coupled to certain actions, or to disassociate from people, e.g. when accepting would otherwise put the community’s (intellectual) integrity at risk, such as its reasoning transparency. (And even chosen non-transparency is something one can be transparent about at a higher level.) Given that much of EA charity work is research-based, an analogy to the scientific community, where such integrity risks are similarly paramount, seems quite fitting.
All in all, I think there should be some balance of power between both ends, including on the burden of proof, rather than this being fully one-sided; take FTX, perhaps, as another (historical) example. And ideally both sides practice reasoning transparency and keep getting better at letting reason and evidence inform how to do good better. Perhaps this identifies (and resolves?) some (but not all) cruxes, fleshes out new ones, and responds to some of your encouragements, moving the conversation (and reasoning transparency) forward.