In general, doing small-scale experiments seems like a good idea. However, in this case there are potentially large costs even to a small-scale experiment, if that experiment already attempts to tackle the boundary-drawing question.
If we decide on rules and boundaries for who has voting rights (or participates in sortition) and who does not, it has the potential to create lots of drama and politics (e.g. discussions about whether we should exclude right-wing people, whether SBF should have voting rights while he is in prison, whether we should exclude AI capabilities people, which organizations count as EA orgs, etc.). This is especially true if there is “constant monitoring & evaluation”. It would also lead to more centralization and bureaucracy.
And I think it’s likely that such rules would be understood as EA membership: either you are EA and have voting rights, or you are not EA and do not have voting rights. At least with “EAG acceptance”, people generally understand that it does not constitute EA membership.
I think it would probably be bad if we had anything like an official EA membership.
My decision criterion would be whether the chosen grants look likely to be better than OP’s own grants in expectation. (N.b. I don’t think comparing against the grants people like least ex post is a good way to do this.)
So ultimately, I wouldn’t be willing to pre-commit large dollars to such an experiment. I’m open-minded that it could turn out better, but I don’t expect it to, so committing anyway would violate the key principle of our giving.
Re: large costs to small-scale experiments, it seems notable that those are all costs incurred by the community rather than dollar costs. So if the community believes in the ROI, perhaps they are worth the risk?