Hi Dustin,
We’re very happy to hear that you have seriously considered these issues.
If the who-gets-to-vote problem were solved, would your opinion change?
We concur that corrupt intent/vote-brigading is a potential drawback, but not an unsolvable one.
We discuss some of these issues in our response to Halstead on Doing EA Better:
There are several possible factors that could be used to draw a hypothetical boundary, e.g.:
Committing to and fulfilling the Giving Pledge for a certain length of time
Working at an EA org
Doing community-building work
Donating a certain amount/fraction of your income
Active participation at an EAG
Etc.
These and others could be combined to define some sort of boundary, though of course it would need to be kept under constant monitoring & evaluation.
Given a somewhat costly signal of alignment, it seems very unlikely that someone would dedicate a significant portion of their life to going “deep cover” in EA just for a very small chance of being randomly selected as one of multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of some proportion of one EA-related fund or another.
In any case, it seems like something at least worth investigating seriously; it may eventually become suitable for exploration through a consensus-building tool, e.g. pol.is.
What would your reaction be to an investigation of the boundary-drawing question, as well as small-scale experimentation such as we suggest in Doing EA Better?
What would your criteria for “success” be, and would you be likely to change your mind if those were met?
Given that your proposal is to start small, why do you need my blessing? If this is a good idea, then you should be able to fund it and pursue it with other EA donors and effectively end up with a competitor to the MIF. And if the grants look good, it would become a target for OP funds. I don’t think OP feels their own grants are the best possible, but rather the best possible within their local specialization. Hence the regranting program.
Speaking for myself, I think your list of criteria makes sense but is pretty far from a democracy. And the smaller you make the community of eligible deciders, the higher the chance they will be called for duty, which they may not actually want. How is this the same as or different from donor lotteries, and what can be learned from that? (To round this out a little, I think your list is effectively skin in the game in the form of invested time rather than dollars.)
Because the donor lottery weights by donation size, the Benefactor or a large earning-to-give donor is much more likely to win than someone doing object-level work who can only afford a smaller donation. Preferences will still get funded in proportion to the financial resources of each donor, so the preferences of those with little money remain almost unaccounted for (even though there is little reason to think they wouldn’t do as well as the more likely winners). Psychologically, I can understand why the current donor lottery would be unappealing to most smaller donors.
Weighting by size is necessary if you want to make the donor lottery trustless—because a donor’s EV is the same as if they donated to their preferred causes directly, adding someone who secretly wants to give to a cat rescue doesn’t harm other donors. But if you employ methods of verifying trustworthiness, a donor lottery doesn’t have to be trustless. Turning the pot over to a committee of lottery winners, rather than a single winner, would further increase confidence that the winners would make reasonable choices.
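A minimal sketch of that EV-neutrality claim, assuming the winner directs the entire pot: a donor who contributes $d$ to a pot totalling $T$ wins with probability $d/T$ and then allocates the whole pot, so the expected amount flowing to their preferred causes is

$$\frac{d}{T}\cdot T = d,$$

exactly what they would have directed by giving on their own. Adding another donor with different (even secret) preferences changes $T$ but not this product, which is why no trust between donors is required.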
Thus, one moderate step toward amplifying the preferences of those with less money would be a weighted donor lottery—donors would get a multiplier on their monetary donation amount based on how much time-commitment skin in the game they had. Of course, this would require other donors to accept a lower percentage of tickets than their financial contribution percentage, which is where people or organizations with a lot of money would come in. The amount of funding directed by Open Phil (and formerly, FTX) has caused people to move away from earning-to-give, which reduced the supply of potential entrants who would be willing to accept a significantly lower share of tickets per dollar than smaller donors. So I would support large donors providing some funds to a weighted donor lottery in a way that boosts the winning odds—either solo or as part of a committee—for donors who can demonstrate time-commitment skin in the game.[1]
Contributing a smaller amount to the pot without taking any tickets is mostly equivalent—and perhaps optically superior—to taking tickets on a somewhat larger contribution.
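To make the mechanism concrete, here is a minimal sketch of how ticket allocation in such a weighted lottery might work; the donor names, dollar amounts, and multiplier values are hypothetical and chosen purely for illustration, not a proposal for specific parameters.

```python
import random

# Each entry: dollars contributed plus a multiplier earned through
# time-commitment "skin in the game" (1.0 = no boost). All names and
# numbers here are hypothetical, purely for illustration.
donors = [
    {"name": "large_donor",       "dollars": 50_000, "time_multiplier": 1.0},
    {"name": "org_staffer",       "dollars": 2_000,  "time_multiplier": 5.0},
    {"name": "community_builder", "dollars": 1_000,  "time_multiplier": 5.0},
]

# Tickets = dollars x multiplier, so invested time boosts a donor's
# share of tickets above their share of the money.
for d in donors:
    d["tickets"] = d["dollars"] * d["time_multiplier"]

pot = sum(d["dollars"] for d in donors)            # money actually granted
total_tickets = sum(d["tickets"] for d in donors)

for d in donors:
    print(f'{d["name"]}: {d["dollars"] / pot:.1%} of money, '
          f'{d["tickets"] / total_tickets:.1%} of tickets')

# Draw a single winner (a committee variant would draw several).
winner = random.choices(donors, weights=[d["tickets"] for d in donors])[0]
print(f'winner: {winner["name"]} directs the ${pot:,} pot')
```

The printout makes the trade-off explicit: the large donor holds a ticket share below their money share, which is the kind of subsidy suggested above that large funders could provide.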
In general, doing small-scale experiments seems like a good idea. However, in this case, there are potentially large costs even to small-scale experiments, if the small-scale experiment already attempts to tackle the boundary-drawing question.
If we decide on rules and boundaries for who has voting rights (or participates in sortition) and who does not, it has the potential to create lots of drama and politics (e.g. discussions about whether we should exclude right-wing people, whether SBF should have voting rights if he is in prison, whether we should exclude AI capabilities people, which organizations count as EA orgs, etc.), especially if there is “constant monitoring & evaluation”. And it would lead to more centralization and bureaucracy.
And I think it’s likely that such rules would be understood as EA membership, where you either are EA and have voting rights, or you are not EA and do not have voting rights. At least for “EAG acceptance”, people generally understand that this does not constitute EA membership.
I think it would probably be bad if we had anything like an official EA membership.
My decision criterion would be whether the chosen grants look likely to be better than OP’s own grants in expectation. (n.b. I don’t think comparing to the grants people like least ex post is a good way to do this.)
So ultimately, I wouldn’t be willing to pre-commit large dollars to such an experiment. I’m open-minded that it could be better, but I don’t expect it to be, so pre-committing would violate the key principle of our giving.
Re: large costs to small-scale experiments, it seems notable that those are all costs incurred by the community rather than $ costs. So if the community believes in the ROI, perhaps they are worth the risk?