EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that.
There is a lot packed into that “the best”. An org, a movement, or even a person can only be “the best” on so many measures before you’re asking for a solution to an optimization problem with too many constraints, or, to put it more simply, for superpowers.
Should EA be ethical? Sure. Wholesome? Maybe, why not. A place for intellectual and epistemic humility? Very important. Freedom of intellectual debate? Very useful for arriving at the truth. A safe space? Depending on what you mean by that. A place completely free from interpersonal drama? That would be nice, certainly. Can all of this be achieved at the same time? No, I don’t think so. Some values do funge against others to some extent. I hope I don’t need to offer specific examples.
I’m worried about this (and other recent developments) in EA. Calling for a more perfect world is, by itself, good. But asking for optimization on one front frequently means, implicitly, de-prioritizing other causes, if only because the proposed optimizations take a good chunk of the limited collective time, attention, and ability to communicate and discuss intelligently.
Do I think changes can be made to EA to make it more ethical, with less misconduct (including sexual misconduct), etc.? Yes, certainly. Do I think this will have a cost? Yes; there is no such thing as a free lunch. Do I think this will cause, all things considered, more or less suffering in the world? I’m not sure. What EA is unquestionably “the best” at is identifying opportunities to do the most good at the margin, and while all improvements are changes, not all changes are improvements. I think that any changes (the more sweeping, the worse in expectation) to community composition, governance, etc. will be, in the best of possible worlds, neutral to distracting for the main goal of doing the most good with the available resources, and in the worst, actively harmful. Thus, my proposal is that any proposed change should pass the bar not only of improving the situation it purports to improve (and articles with practical examples of what to do, how to do it, and how it all turned out are certainly useful for that), but also of making a reasonable case that it is at least neutral to the main mission (doing good better, if I may quote).
So, am I advocating for an “abandon all hope, everyone for themselves” policy? Not at all. I’m merely stating, if in a roundabout way, that an “average” ability, as an organization, to keep its members safe, sane, and wholesome is good. Quite probably, good enough. And this is key: since you cannot optimize for everything at the same time, one must find a compromise on most things that are not the main mission. I think sexual misconduct is one of those many, many things.
(Edit: Vasco Grilo’s comment says it better, and in fewer words.)