I’d like to respond to your description of some people’s worries about your previous proposal, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious the initial proposal was. Here goes:
the risk of losing flexibility by enforcing what is an “EA view” or not
It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on individual EAs. I suspect that enumerating what sorts of things “count” as EA endeavors isn’t a strictly necessary prerequisite for forming such a panel.
I can see why some people held this concern, partly because “defining what does and doesn’t count as an EA endeavor” clusters in thing-space with “keeping an eye out for people acting in untrustworthy and non-cooperative ways towards EAs”, but these two things don’t have to go hand in hand.
the risk of consolidating too much influence over EA in any one organisation or panel
Fair enough. As with the last point, the panel would likely consolidate less unwanted influence over EA if it focused solely on calling out sufficiently dishonest and harmful behavior by anyone who self-identified as an EA, and made no claims as to whether any individuals or organizations “counted” as EAs.
the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling
This concern seems like a good one, in that it’s a bit harder for me to address satisfactorily. Hopefully, though, there would be some clear-cut cases the panel could choose to consider, too; the case of Intentional Insights’ poor behavior was eventually quite clear, for one. I would guess that the less clear cases would tend to be the ones where a clear resolution would be less impactful.
In response, we toned back the ambitions of the proposed ideas.
I’d likely have done the same. But that was the wrong thing to do.
In this case, the counterfactual to having some sort of panel to call out behavior which causes unreasonable amounts of harm to EAs is relying on the initiative of individuals to call out such behavior. This is not a sustainable solution. Your summary of your previous post puts it well:
There’s very little to deal with people representing EA in ways that seem to be harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process.
Community action is all that we had before the Intentional Insights fiasco, and community action is all that we’re back to having now.
I didn’t get to watch the formation of the panel you discuss, but it seems like a nontrivial amount of momentum, stirred up by the harm Intentional Insights caused EA, went into its creation. To the extent that that momentum is no longer available because some of it was channeled into the creation of this panel, we’ve lost a chance at building a tool to protect ourselves against agents and organizations who would impose costs on individual EAs and harm EA overall. Pending further developments, I have lowered my opinion of everyone directly involved accordingly.
FWIW, as someone who contributed to the InIn document, I approve of (and recommended during discussion) the less ambitious project this represents.