If folks don’t mind, a brief word from our sponsors...
I saw Cremer’s post and seriously considered this proposal. Unfortunately I came to the conclusion that the parenthetical point about who comprises the “EA community” is, as far as I can tell, a complete non-starter.
My co-founder from Asana, Justin Rosenstein, left a few years ago to start oneproject.org, and that group came to believe sortition (lottery-based democracy) was the best form of governance. So I came to him with the question of how you might define the electorate in the case of a group like EA. He suggests it’s effectively not possible to do well other than in the case of geographic fencing (i.e. where people have invested in living) or by alternatively using the entire world population.
I have not myself come up with a non-geographic strategy that doesn’t seem highly vulnerable to corrupt intent or vote brigading. Given that the stakes are the ability to control large sums of money, having people stake some of their own (i.e. become “dues-paying” members of some kind) does not seem like a strong enough mitigation. For example, a hostile takeover almost happened to the Sierra Club in SF in 2015 (albeit for reasons I support!).
There is a serious, live question of what defines an EA right now. Are they longtermists? Do they include animals in the circle of moral concern? Shrimp? I’m not sure how you could establish clear membership criteria without first answering these questions, and that feels backwards. I do think you could have separate pools of money based on separate worldviews, but you’d probably have to cut pretty narrowly, which defeats the point.
As an example, the top-rated fund at GWWC is the one for Climate Change: https://www.givingwhatwecan.org/charities/founders-pledge-climate-change-fund. Working on climate change is certainly important, but I see that as fairly suggestive evidence that a more democratic approach would be dilutive to EA principles (i.e. neglectedness in this case) and result in more popular cause selection.
It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn’t compare OP against the rest but against the ideal.
One way in which you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn’t vulnerable to brigading because having more influence requires putting in proportionally more money, but at the same time this makes it less democratic.
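To make that concrete, here is a toy parimutuel sketch of a market on a single grant outcome (my own illustration; the names and amounts are made up, and real prediction-market platforms or any OP process would differ): the money-weighted split of stakes gives an implied probability a decision-maker could consult, and at resolution the winning side splits the losing side's pool pro rata.

```python
# Toy parimutuel market on "grant X will be judged successful by date D".
# Purely illustrative; participants and stakes are invented.

def implied_probability(yes_stakes, no_stakes):
    """Fraction of money staked on YES, a crude money-weighted crowd estimate."""
    yes_total = sum(yes_stakes.values())
    no_total = sum(no_stakes.values())
    return yes_total / (yes_total + no_total)

def settle(yes_stakes, no_stakes, outcome_yes):
    """Winners get their stake back plus a pro-rata share of the losing pool."""
    winners, losers = (yes_stakes, no_stakes) if outcome_yes else (no_stakes, yes_stakes)
    losing_pool = sum(losers.values())
    winning_pool = sum(winners.values())
    payouts = {p: stake + losing_pool * stake / winning_pool
               for p, stake in winners.items()}
    payouts.update({p: 0.0 for p in losers})
    return payouts

# One supporter and two sceptics of the grant:
yes = {"supporter": 600.0}
no = {"sceptic_a": 300.0, "sceptic_b": 100.0}
print(implied_probability(yes, no))        # 0.6 -- larger stakes move this more
print(settle(yes, no, outcome_yes=False))  # sceptics split the 600 pro rata
```

The "more money, more influence" point shows up directly: the implied probability is money-weighted, which is exactly what makes it brigade-resistant and less democratic at the same time.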
More realistically, some proposals in that broad direction which I think could actually be implementable could be:
allowing people to bet against particular OpenPhilanthropy grants producing successful outcomes.
allowing people to bet against OP’s strategic decisions (e.g., against worldview diversification)
I’d love to see bets between OP and other organizations about whose funding is more effective, e.g., I’d love to see a bet between you and Jaan Tallinn on whose approach is better, where the winner gets some large amount (e.g., $200M) towards their philanthropic approach
I’m particularly attracted to bets which have the shape of “you will change your mind about this in the future”.
At various points in the past, I think I would have personally appreciated having the option to bet...
against hypothetically continued funding towards Just Impact beating GiveDirectly
against your $8M towards INFER having been efficiently spent
that the marginal $5M given out as grants in an ACX grants-type process would be better than your marginal $5M to forecasting (you are giving more than $5M/year to forecasting, cf. your $8M grant to INFER).
against worldview diversification being evaluated positively by a neutral third party.
for closer or later AI timelines.
on more abstract topics, e.g., “your forecasting grantmaking is understaffed/underrated”, or “your forecasting grantmaking is too institutional”, “OP finds it too hard to exercise trust and would obtain better results by having more grant officers”.
at the odds implied by some of your public forecasts.
Note that individual people inside OP may agree with some of the above propositions, even though “OP as a whole” may act as if they believe the opposite.
I have not myself come up with a non-geographic strategy that doesn’t seem highly vulnerable to corrupt intent or vote brigading.
You could also delegate research into a strategy for democratic participation to other researchers, rather than doing it yourself; e.g., Robin Hanson’s time is probably buy-able with money. It would really surprise me if he (or other researchers) weren’t able to come up with a few futarchy-adjacent ideas that were at least worth considering.
More broadly, I think that there is a spectrum between:
OpenPhilanthropy makes all decisions democratically and we all sing Kumbaya
Influencing OP decisions requires people to move to the Bay area and become chummy friends with its grants officers. Karnofsky writes tens of thousands of words in blogposts but does not answer comments. At the same time OP ultimately makes decisions which steer the EA community and reverberate across many lives.
Both extremes are caricatures, but we are closer to the second. Contrast with the Survival and Flourishing Fund, which has a number of regrantors with pots which grow proportionally to their estimated success.
I also think that the comparison with FTX’s FF is instructive, because it was willing to trust a larger number of regrantors much earlier, and I think it was able to produce a number of more experimental, ambitious and innovative grants as a result. For what it’s worth, my impression here is that Beckstead, MacAskill and the others on the FF team did a great job that was pretty much independent of FTX’s fraud.
So anyways, I’ve brought up some mechanisms here:
Allowing people to bet against the success of your grants
Allowing people to bet against the success of your strategic decisions
Allowing people to bet that they are better at giving out grants than OP is
Or generally trying out systems other than grants officers.
Using a wide number of regrantors rather than a small number of grant officers.
which perhaps get some of the same benefits that democratization could produce for decision-making, namely information aggregation from a wider pool and distribution of trust.
My sense is that OP could take these and other steps, and they could have some value of information, while perhaps not being all that risky if tried out at a small scale. It’s unclear though whether the managerial effort would be worth it.
PS: I liked the idea behind the Cause Exploration prizes, though I think that they did fail to produce a mechanism for addressing the above points, since the cause proposals were limited to Global Health & Wellbeing, and the worldview questions were too specific, whereas I think that the most important decisions are at the strategic level.
Strongly disagree about betting and prediction markets being useful for this; strongly agree about there being a spectrum here, where at different points the question “how do we decide who’s an EA” is less critical and can be experimented with.
One point on the spectrum could be, for example, that the organisation is mostly democratically run but the board still has veto power (over all decisions, or ones above some sum of money, or something).
I’d like to highlight this paragraph some more:
Both extremes are caricatures, but we are closer to the second. Contrast with the Survival and Flourishing Fund, which has a number of regrantors with pots which grow proportionally to their estimated success.
We’re all interested in mostly agent-neutral goals, so these should be much more aligned by default than agent-relative goals such as profit. That’s a huge advantage that we’re not using sufficiently (I think). Impact markets such as ours or that of the SFF make use of the alignment with regrantors and that between funders (through the S-Process).
The upshot is that there are plenty of mechanisms that promise to solve problems for funders while (almost as a side-effect) democratizing funding.
With impact markets in particular we want to enable funders to find more funding opportunities and fund more projects that would otherwise be too small for them to review. On the flip side that means that a much more diverse set of fledgling projects gets funded. It’s a win-win.
FWIW I also don’t particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn’t necessarily look like “democracy” per se and might look more like more regranting, forecasting tournaments, etc.
Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.
It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I’m not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.
This argument seems fair to apply to CEA’s funding decisions, since they influence the community, but I do not think that I, as a self-described EA, have more justification to decide over bed net distribution than the people of Kenya who are directly affected.
That argument would be seen as too weak in the political theory context. Then powerful states would have to enfranchise everyone in the world and form a global democracy. It also is too strong in this context, since it implies global democratic control of EA funds, not community control.
I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it’s practical and instrumental considerations (which, anyway, are all the considerations in my view) that bite against it.
This is a great point, Alexander. I suspect some people, like ConcernedEAs, believe the specific ideas are superior in some way to what we do now, and it’s just convenient to give them a broad label like “democratizing”. (At Asana, we’re similarly “democratizing” project management!)
Others seem to believe democracy is intrinsically superior to other forms of governance; I’m quite skeptical of that, though agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion. Perhaps in our context there might be more specific solutions along those lines, like an appeals board for COI or retaliation claims. The formal power might still lie with OP, but we would have strong soft reasons for wanting to defer.
In the meantime, I think the forum serves that role, and from my POV we seem reasonably responsive to it? Esp. the folks with high karma.
I probably should have been clearer in my first comment that my interest in democratizing the decisions more was quite selfish: I don’t like having the responsibility, even when I’m largely deferring it to you (which itself is a decision).
Others seem to believe democracy is intrinsically superior to other forms of governance; I’m quite skeptical of that, though agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion.
My guess is that the current non-democratic EA institutions have serious flaws, and democratic replacement institutions would have even more serious flaws, and it’s still worth trying the democratic institutions (in parallel to the current ones) because 2 flawed structures are better than 1. (For example, because the democratic institutions fund important critical work that the current institutions do not.)
I think this likely depends on who else is funding work in a given area, and what the other funders’ flaws/blind spots are. For instance, if the democratic EA alternative has many of the same flaws/blind spots of larger funders in a cause area, diverting resources from current EA efforts would likely lead to worse outcomes in the cause area as a whole.
An idea I’ve been kicking around in my head for a while is ‘someone should found an organization that investigates what existing humans’ moral priorities are’ - like, if there were a world democracy, what would it vote for?
An idea for a limited version of this within EA could be representatives for interest groups or nations. E.g., the Future Design movement suggests that in decision-making bodies, there should be some people whose role is to advocate for the interests of future generations. There could similarly be a mechanism where (eg) animals got a certain number of votes through human advocates.
GiveWell did some of this research in 2019 (summary, details):
We provided funding and guidance to IDinsight, a data analytics, research, and advisory organization, to survey about 2,000 people living in extreme poverty in Kenya and Ghana in 2019 about how they value different outcomes.
I think there are two aspects that make “the EA community” a good candidate for who should make decisions:
1. The need to balance between “getting all perspectives by involving the entire world” and “making sure it’s still about doing the most good possible”. It’s much less vetting for value alignment than the current state, but still some. I’m not sure it’s the best point on the scale, but I think it might be better than where we are currently.
1.1. Another thought about this is that maybe we ought to fix the problem where “value alignment” is, as the other post argues, actually taken much more narrowly than agreeing about “doing the most good”.
2. The fact that EA is, in the end, a collaborative project and not a corporation. It seems wrong and demotivating to me that EAs have to compete and take big risks on themselves individually to try to have a say about the project they’re still expected to participate in.
2.1. Maybe a way for funders to test this is to ask yourselves: if there weren’t an EA community, would your plans still work as you expect them to? If not, then I think the community ought to also have some say in making decisions.
A couple replies imply that my research on the topic was far too shallow and, sure, I agree.
But I do think that shallow research hits different from my POV, where the one person I have worked most closely with across nearly two decades happens to be personally well researched on the topic. What a fortuitous coincidence! So the fact that he said “yea, that’s a real problem” rather than “it’s probably something you can figure out with some work” was a meaningful update for me, given how many other times we’ve faced problems together.
I can absolutely believe that a different person, or further investigation generally, would yield a better answer, but I consider this a fairly strong prior rather than an arbitrary one. I also can’t point at any clear reference examples of non-geographic democracies that appear to function well and have strong positive impact. A priori, it seems like a great idea, so why is that?
The variations I’ve seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.
“The variations I’ve seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.”
Agree, but I think we should explore what decision making looks like at different points of that path, instead of only looking at the ends.
I think we’re already along the path, rather than at one end, and thus am inclined to evaluate the merits of specific ideas for change rather than try to weigh the philosophical stance.
https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money?commentId=PP7dbfkQQRsXddCGb
Fair!
I think Open Phil is unique in the EA Community for its degree of transparency, which allows this level of community evaluation (with the exception of the Wytham Abbey purchase), and Open Phil should encourage other EA orgs to follow suit.
In addition to FTX style regranting experiments, I think (https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve) it would be worth experimenting with, and evaluating:
The EA Community voting on grants that Open Phil considers to be just above or below its funding bar
The EA community voting on how to make grants from a small pot of Open Phil money
Using different voting methods (e.g. quadratic voting, one person one vote, EA Forum weighted karma; a toy sketch of the quadratic option follows this list)
And different definitions of ‘the EA Community’ (staff and ex-staff across EA affiliated orgs, a karma cut off on the EA Forum, people accepted to EAG, people who have donated to EA Funds, etc)
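On the quadratic voting option above, a minimal sketch of the standard mechanism (the budget size and numbers are invented for illustration): each voter gets a fixed budget of voice credits, casting n votes for an option costs n² credits, so buying concentrated influence gets quadratically more expensive.

```python
from collections import defaultdict

CREDIT_BUDGET = 100  # voice credits per eligible voter; invented for this sketch

def tally(ballots):
    """Each ballot maps option -> votes cast; n votes cost n**2 credits."""
    totals = defaultdict(int)
    for ballot in ballots:
        cost = sum(v * v for v in ballot.values())
        if cost > CREDIT_BUDGET:
            continue  # over budget; a real system would reject at submission time
        for option, votes in ballot.items():
            totals[option] += votes
    return dict(totals)

# Spending all 100 credits on one option buys only sqrt(100) = 10 votes,
# so a single-issue voter counts for less than under one-credit-one-vote.
ballots = [
    {"bednets": 10},                 # 100 credits
    {"ai_safety": 6, "bednets": 8},  # 36 + 64 = 100 credits
    {"ai_safety": 7, "shrimp": 5},   # 49 + 25 = 74 credits
]
print(tally(ballots))  # {'bednets': 18, 'ai_safety': 13, 'shrimp': 5}
```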
I am agnostic wrt your general argument, but as the fund manager of the FP Climate Fund I wanted to quickly weigh in, so some brief comments (I am on a retreat, so my comments are more quickly written than otherwise and I won’t be able to get back immediately; please excuse the lack of sourcing and polishing):
At this point, the dominant view on climate interventions seems to be one of uncertainty about how they compare to GW-style charity, certainly on neartermist grounds. E.g. your own Regranting Challenge allocated $10M to a climate org on renewables in Southeast Asia. This illustrates that OP seems to believe that climate interventions can clear the near-termist bar absent democratic pressures that dilute. While I am not particularly convinced by that grant (though I am arguably not neutral here!), I do think this correctly captures that it seems plausible that the best climate interventions can compare with GW-style options, esp. when taking into account the very strong “co-benefits” of climate interventions around air pollution and reducing energy poverty.
The key meta theory of impact underlying the Climate Fund is to turn neglectedness on its head: leverage the large societal attention to climate and improve how it is spent through targeted advocacy addressing blindspots of the mainstream response. Ultimately, we are after maximizing impact, not maximizing ITN scores. It is quite plausible that neglectedness naively construed misleads us: neglectedness is an impact multiplier in expectation but not a good in itself, and, I’d argue, the ability to leverage large resource streams through advocacy is another impact multiplier, a mechanism by which high cost-effectiveness in a field as crowded as climate becomes at least plausible, as long as one focuses on improving the vast resource allocation rather than just piling on. AFAIK, OP also believes that advocacy generally provides an impact multiplier when being risk neutral.
Given that GiveWell interventions are direct service delivery interventions that do not leverage any mechanisms such as advocacy or induced technological change, it also seems possible that leveraged climate interventions are competitive even if climate looks strictly worse than helping the poorest humans on a cause level. Put differently, the impact penalty imposed by risk aversion (if one is risk neutral wrt impact) or the different structure of the impact space (maybe in GHD, direct interventions are indeed better than more leveraged approaches) could make climate and other interventions competitive with GW-style charity as well.
In case this sounds like I am very sure that climate should be included, this is not so.
I am just trying to say that it seems less clear than it would look on “naive” ITN grounds and that the “right” answer here might be independent of “democratic dilution”.
Sure, I think you can make an argument like that for almost any cause area (find neglected tactics within the cause to create radical leverage). However, I’ve become more skeptical of it over time, both bc I’ve angled for those opps myself, and because there are increasingly a lot of very smart, strategic climate funders deploying capital. On some level, we should expect the best opps to find funders.
>> E.g. your own Regranting Challenge allocated $10M to a climate org on renewables in Southeast Asia. This illustrates that OP seems to believe that climate interventions can clear the near-termist bar absent democratic pressures that dilute.
The award page has this line: “We are particularly interested in funding work to decarbonize the power sector because of the large and neglected impacts of harmful ambient air pollution, to which coal is a meaningful contributor.” I.e. it’s part of our new air quality focus area. Without actually having read the write-up, I’m sure they considered climate impact too, but I doubt it would have gotten the award without that benefit.
That said, the Kigali grant from 2017 is more like your framing. (There was much less climate funding then.)
Sure, I think you can make an argument like that for almost any cause area (find neglected tactics within the cause to create radical leverage).
Thank you for your reply and apologies for the delay!
To be clear, the reason I think this is a more convincing argument in climate than in many other causes is (a) the vastness of societal and philanthropic climate attention and (b) its very predictable brokenness, with climate philanthropy and climate action more broadly generally “captured” by one particular vision of solving the problem (mainstream environmentalism, see qualification below).
Yes, one can make this argument for more causes but the implication of this does not seem clear – one implication could be that there actually are many more interventions that in fact do meet the bar and that discounting leverage arguments unduly constrains our portfolio.
However, I’ve become more skeptical of it over time, both bc I’ve angled for those opps myself, and because there are increasingly a lot of very smart, strategic climate funders deploying capital. On some level, we should expect the best opps to find funders.
That seems fair and my own uncertainty primarily stems from this – I do think overall climate philanthropy has improved significantly over the last years and, in particular, with the influx of climate philanthropy from tech has become more ideologically diverse which seems good and more balanced overall (different biases from different funders).
>> E.g. your own Regranting Challenge allocated $10M to a climate org on renewables in Southeast Asia. This illustrates that OP seems to believe that climate interventions can clear the near-termist bar absent democratic pressures that dilute.
The award page has this line: “We are particularly interested in funding work to decarbonize the power sector because of the large and neglected impacts of harmful ambient air pollution, to which coal is a meaningful contributor.” I.e. it’s part of our new air quality focus area. Without actually having read the write-up, I’m sure they considered climate impact too, but I doubt it would have gotten the award without that benefit.
It might be that this grant needs air pollution benefits to put it over the bar, but – as far as I can tell – there is no reason to think this grant is more strongly correlated with air pollution benefits than many other climate grants.
Cause areas are mental abstractions and whether one abstracts as “climate” or “clean energy acceleration” then seems to affect whether or not they can meet the bar. The Climate Fund is quite explicit about taking the integrated perspective and generally makes grants that also have significant co-benefits in the form of avoided air pollution and driving down the cost of clean energy.
It seems to me that consistency would require that we should assume other climate grants can meet the OP bar as well and that, by implication, the featuring of the Climate Fund on GWWC seems unproblematic.
FWIW, I strongly agree with you that most climate grants do not meet the bar and one needs to spend a lot more time per grant than in other areas and one needs an explicit model of the blindspots left by the mainstream climate response.
>> To be clear, the reason I think this is a more convincing argument in climate than in many other causes is (a) the vastness of societal and philanthropic climate attention and (b) its very predictable brokenness, with climate philanthropy and climate action more broadly generally “captured” by one particular vision of solving the problem (mainstream environmentalism, see qualification below).
Vast attention is the mechanism that causes popular causes to usually have lower ROI on the margin; i.e. some of all that attention is likely competent.
I’m not sure what other causes you have in mind here. I think the argument with your two conditions applies equally well to large philanthropic areas like education, poverty/homelessness, and art.
>> It seems to me that consistency would require that we should assume other climate grants can meet the OP bar as well
Absolutely, I agree they can. Do you publish your cost effectiveness estimates?
I think this idea is worth an orders-of-magnitude deeper investigation than what you’ve described. Such investigations seem worth funding.
It’s also worth noting that OP’s quotation is somewhat selective; here I include the sub-bullets:
Within 5 years: EA funding decisions are made collectively
First set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms
Subject matter experts are always used and weighed appropriately in this decision mechanism
Experiment in parallel with: randomly selected samples of EAs are to evaluate the decisions of one existing funding committee—existing decision-mechanisms are thus ‘passed through’ an accountability layer
All decision mechanisms have a deliberation phase (arguments are collected and weighed publicly) and a voting phase (majority voting, quadratic voting..)
Depending on the cause area and the type of choice, either fewer (experts + randomised sample of EAs) or more people (any EA or beyond) will take part in the funding decision.
Absolutely, I did not mean my comment to be the final word, and in fact was hoping for interesting suggestions to arise.
Good point on the more detailed plan, though I think this starts to look a lot more like what we do today if you squint the right way, e.g. OP program officers are subject matter experts (who also consult with external subject matter experts), and the forum regularly tears apart their decisions via posts and discussion, which then gets fed back into the process.
I think this might be misleading: there are 12 top-rated funds. The Climate Change one is one of the three “Top-rated funds working across multiple cause areas”.
But I think I don’t really buy the first point. We could come up with some kind of electorate that’s frustrating but better than the whole world. Forum users weighted by forum karma is a semi-democratic system that’s better than any you suggest and pretty robust to takeover (though the forum would get a lot of spam).
My issue with that is I don’t believe the forum makes better decisions than OpenPhil. Heck, we could test it: get the forum to vote on its allocation of funds each year and then compare in 5 years to what OpenPhil did and see which we’d prefer.
I bet that we’d pick OpenPhil’s slate from 5 years ago over the forum average from then.
So yeah, mainly I buy your second point that democratic approaches would lead to less effective resource allocation.
(As an aside there is democratic power here. When we all turned on SBF, that was democratic power—turns out that his donations did not buy him cover after the fact and I think that was good)
In short, I think the current system works. It annoys me a bit, but I can’t come up with a better one.
The proposal of forum users weighted by karma can be taken over if you have a large group of new users all voting for each other. You could require a minimum number of comments, lag in karma score by a year or more, require new comments within the past few months and so on to make it harder for a takeover, but if a large enough group is invested in takeover and willing to put in the time and effort, I think they could do it. I suppose if the karma lags are long enough and engagement requirement great enough, they might lose interest and be unable to coordinate the takeover.
You could stop counting karma starting from ~now (or some specific date), but that would mean severely underweighting legitimate newcomers. EDIT: But maybe you could just do this again in the future without letting everyone know ahead of time when or what your rules will be, so newcomers can eventually have a say, but it’ll be harder to game.
You could also try to cluster users by voting patterns to identify and stop takeovers, but this would be worrying, since it could be used to target legitimate EA subgroups.
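To illustrate the kind of anti-takeover filter being discussed here, a rough sketch (all data fields and thresholds are invented for the example; this is not how the Forum actually works): weight votes only by karma earned more than a year ago, and require a minimum comment history plus some recent activity.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# All thresholds and data fields below are invented for illustration.
KARMA_LAG = timedelta(days=365)        # only count karma earned over a year ago
MIN_COMMENTS = 20
ACTIVITY_WINDOW = timedelta(days=180)  # must have commented within ~6 months

@dataclass
class Event:
    timestamp: datetime
    amount: int = 0

@dataclass
class User:
    karma_events: list = field(default_factory=list)  # karma received over time
    comments: list = field(default_factory=list)       # comments written

def voting_weight(user: User, now: datetime) -> float:
    """Karma-weighted vote that ignores karma earned during the lag window,
    so a group of new accounts upvoting each other gains no say this cycle."""
    lagged_karma = sum(e.amount for e in user.karma_events
                       if e.timestamp < now - KARMA_LAG)
    recently_active = any(c.timestamp > now - ACTIVITY_WINDOW for c in user.comments)
    if len(user.comments) < MIN_COMMENTS or not recently_active:
        return 0.0
    return float(max(lagged_karma, 0))
```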
I was trying to highlight a bootstrapping problem, but by no means meant it to be the only problem.
It’s not crazy to me to create some sort of formal system to weigh the opinions of high-karma forums posters, though as you say that is only semi-democratic, and so reintroduces some of the issues Cremer et al were trying to solve in the first place.
I am open-minded about whether it would be better than OpenPhil, assuming they get the time to invest in making decisions well after being chosen (sortition S.O.P.).
I agree that some sort of periodic rules reveal could significantly mitigate corruption issues. Maybe each generation of the chosen council could pick new rules that determine the subsequent one.
Maybe each generation of the chosen council could pick new rules that determine the subsequent one.
A simpler version of this is to have a system of membership, where existing members can nominate new members. Maybe every year some percentage of the membership gets chosen randomly and given the opportunity to nominate someone. In addition to having a process for becoming a member, there could also be processes for achieving higher levels of seniority, with more senior members granted greater input into membership decisions, and processes for nudging people who’ve lost interest in EA to let their membership lapse, and processes to kick out people found guilty of wrongdoing.
I assume there are a lot of membership-based organizations which could be studied: Rotary International, the Red Cross, national fraternities & sororities, etc.
A membership system might sound like a lot of overhead, but I think we’re already doing an ad-hoc, informal version of something like this. As NegativeNuno put it: “Influencing OP decisions requires people to move to the Bay area and become chummy friends with its grants officers.” My vague impression is that at least a few grantmakers like this system, and believe it is a good and necessary way for people to build trust. So if we step back and acknowledge that “building trust” is an objective, and it’s currently being pursued in an ad-hoc way which is probably not very robust, we can ask: “is there a better way to achieve that objective?”
How much are you thinking karma would be “worth”? It’s not that hard for an intelligent person to simulate being an EA if the incentives are right. If significant money were involved, you’d have to heavily restrict the list of organizations the user could vote for, which limits the point of a semi-democratic process in the first place.
E.g., if climate change were not out of bounds and karma were worth $10 a point, arguably the most impactful thing for climate change a non-EA moderately bright university student could do would be . . . mine karma by pretending to be an EA. I haven’t tried, but 40 to 60 karma per hour from someone consciously trying to mine karma sounds plausible.
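Spelling out the arithmetic implicit in that worry (both numbers are the hypothetical ones above):

```python
# Back-of-envelope check on the incentive, using the hypothetical numbers above.
dollars_per_karma_point = 10
karma_per_hour = 50  # midpoint of the 40-60 guess
print(dollars_per_karma_point * karma_per_hour)  # ~$500 of funding steered per hour
```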
I do potentially like the idea of karma giving the right to direct a very small amount of funding, as much for the information value as anything else.
Dustin, I’m pleased that you seriously considered the proposal. I do think that it could be worth funding deeper research into this (assuming you haven’t done this already) - both ‘should we expect better outcomes if funding decisions were made more democratically?’ and ‘if we were coming up with a system to do this, how could we get around the problems you describe?’ One way to do this would be a sort of ‘adversarial collaboration’ between someone who is sceptical of the proposal and someone who’s broadly in favour.
We’re very happy to hear that you have seriously considered these issues.
If the who-gets-to-vote problem was solved, would your opinion change?
We concur that corrupt intent/vote-brigading is a potential drawback, but not an unsolvable one.
We discuss some of these issues in our response to Halstead on Doing EA Better:
There are several possible factors to be used to draw a hypothetical boundary, e.g.
Committing to and fulfilling the Giving Pledge for a certain length of time
Working at an EA org
Doing community-building work
Donating a certain amount/fraction of your income
Active participation at an EAG
Etc.
These and others could be combined to define some sort of boundary, though of course it would need to be kept under constant monitoring & evaluation.
Given a somewhat costly signal of alignment it seems very unlikely that someone would dedicate a significant portion of their lives going “deep cover” in EA in order to have a very small chance of being randomly selected to become one among multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
In any case, it seems like something at least worth investigating seriously, and it could eventually become suitable for exploring through a consensus-building tool, e.g. pol.is.
What would your reaction be to an investigation of the boundary-drawing question as well as small-scale experimentation like that we suggest in Doing EA Better?
What would your criteria for “success” be, and would you be likely to change your mind if those were met?
Given that your proposal is to start small, why do you need my blessing? If this is a good idea, then you should be able to fund it and pursue it with other EA donors and effectively end up with a competitor to the MIF. And if the grants look good, it would become a target for OP funds. I don’t think OP feels their own grants are the best possible, but rather the best possible within their local specialization. Hence the regranting program.
Speaking for myself, I think your list of criteria makes sense but is pretty far from a democracy. And the smaller you make the community of eligible deciders, the higher the chance they will be called for duty, which they may not actually want. How is this the same or different from donor lotteries, and what can be learned from that? (To round this out a little, I think your list is effectively skin in the game in the form of invested time rather than dollars.)
Because the donor lottery weights by donation size, the Benefactor or a large earning-to-give donor are much more likely to win than someone doing object-level work who can only afford a smaller donation. Preferences will still get funded in proportion to the financial resources of each donor, so the preferences of those with little money remain almost unaccounted for (even though there is little reason to think they wouldn’t do as well as the more likely winners). Psychologically, I can understand why the current donor lottery would be unappealing to most smaller donors.
Weighting by size is necessary if you want to make the donor lottery trustless—because a donor’s EV is the same as if they donated to their preferred causes directly, adding someone who secretly wants to give to a cat rescue doesn’t harm other donors. But if you employ methods of verifying trustworthiness, a donor lottery doesn’t have to be trustless. Turning the pot over to a committee of lottery winners, rather than a single winner, would further increase confidence that the winners would make reasonable choices.
Thus, one moderate step toward amplifying the preferences of those with less money would be a weighted donor lottery—donors would get a multiplier on their monetary donation amount based on how much time-commitment skin in the game they had. Of course, this would require other donors to accept a lower percentage of tickets than their financial contribution percentage, which would be where people or organizations with a lot of money would come in. The amount of funding directed by Open Phil (and formerly, FTX) has caused people to move away from earning-to-give, which reduced the supply of potential entrants who would be willing to accept a significantly lower share of tickets per dollar than smaller donors. So I would support large donors providing some funds to a weighted donor lottery in a way that boosts the winning odds—either solo or as part of a committee—for donors who can demonstrate time-commitment skin in the game.[1]
Contributing a smaller amount to the pot without taking any tickets is mostly equivalent—and perhaps optically superior—to taking tickets on a somewhat larger contribution.
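A minimal sketch of such a weighted lottery (the multiplier schedule and all numbers are invented for illustration): tickets equal donation times a skin-in-the-game multiplier, the winner is drawn in proportion to tickets, and a large funder can subsidise the pot without taking tickets.

```python
import random

# Toy weighted donor lottery; the multiplier schedule is invented to illustrate
# "time-commitment skin in the game" boosting smaller donors' odds.
def tickets(donation, years_of_direct_work):
    multiplier = 1.0 + 0.5 * min(years_of_direct_work, 6)  # capped at 4x
    return donation * multiplier

def draw_winner(entries, subsidy=0.0, rng=random):
    """entries: name -> (donation, years_of_direct_work). The subsidy joins the
    pot without taking any tickets, as in the footnote above."""
    weights = {name: tickets(d, y) for name, (d, y) in entries.items()}
    names = list(weights)
    winner = rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
    pot = sum(d for d, _ in entries.values()) + subsidy
    return winner, pot

entries = {
    "earning_to_give_donor": (20_000, 0),  # 20,000 tickets
    "object_level_worker":   (2_000, 5),   # 2,000 * 3.5 = 7,000 tickets
}
print(draw_winner(entries, subsidy=30_000))  # winner directs the $52,000 pot
```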
In general, doing small-scale experiments seems like a good idea. However, in this case, there are potentially large costs even to small-scale experiments, if the small-scale experiment already attempts to tackle the boundary-drawing question.
If we decide on rules and boundaries for who has voting rights (or participates in sortition) and who does not, it has the potential to create lots of drama and politics (e.g. discussions about whether we should exclude right-wing people, whether SBF should have voting rights if he is in prison, whether we should exclude AI capabilities people, which organizations count as EA orgs, etc.). Especially if there is “constant monitoring & evaluation”. And it would lead to more centralization and bureaucracy.
And I think it’s likely that such rules would be understood as EA membership, where you are either EA and have voting rights, or you are not EA and do not have voting rights. At least for “EAG acceptance”, people generally understand that this does not constitute EA membership.
I think it would be probably bad if we had anything like an official EA membership.
My decision criterion would be whether the chosen grants look likely to be better than OP’s own grants in expectation. (n.b. I don’t think comparing to the grants people like least ex post is a good way to do this).
So ultimately, I wouldn’t be willing to pre-commit large dollars to such an experiment. I’m open-minded that it could be better, but I don’t expect it to be, so that would violate the key principle of our giving.
Re: large costs to small-scale experiments, it seems notable that those are all costs incurred by the community rather than $ costs. So if the community believes in the ROI, perhaps they are worth the risk?
What if you limited it to an Open Phil-selected list of organizations (e.g. Open Phil and EA Fund grantees) and set organization maximums (either uniformly or based on past budgets/revenue, say)? Of course, that may defeat some of the purpose because it rules out a lot, but it still gives the community more say in the relative priority among Open Phil’s priorities. You could also set maximums per cause area (also defined by Open Phil) to prevent it from almost all going to a small number of causes.
Instead of voting, you could do donation matching with individual maximums to make sure people have some skin in the game. Basically like Double Up Drive, but with many more options.
Or just directly give EA org employees regranting funds, with no need for them to donate their own money to regrant them. However, requiring some donation, maybe matching at a high rate, e.g. 5:1, gets them to take on at least some personal cost to direct funding.
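A sketch of that matching variant (the ratio, caps, and org names are all placeholders, not actual grantees or Open Phil policy): community donations to a funder-approved list get matched at 5:1, with per-organization and per-cause caps so no single cause absorbs the whole matching pool.

```python
# Toy 5:1 matching with per-organization and per-cause caps; all numbers and
# org names are placeholders.
MATCH_RATIO = 5.0
ORG_CAP = 100_000    # max matched dollars per organization
CAUSE_CAP = 150_000  # max matched dollars per cause area
APPROVED = {"org_a": "global_health", "org_b": "animal_welfare", "org_c": "ai_safety"}

def allocate_match(donations):
    """donations: (org, amount) pairs from community members, in arrival order."""
    matched = {org: 0.0 for org in APPROVED}
    cause_totals = {cause: 0.0 for cause in set(APPROVED.values())}
    for org, amount in donations:
        if org not in APPROVED:
            continue
        cause = APPROVED[org]
        room = min(ORG_CAP - matched[org], CAUSE_CAP - cause_totals[cause])
        granted = max(0.0, min(amount * MATCH_RATIO, room))
        matched[org] += granted
        cause_totals[cause] += granted
    return matched

print(allocate_match([("org_a", 10_000), ("org_c", 25_000), ("org_a", 15_000)]))
# {'org_a': 100000.0, 'org_b': 0.0, 'org_c': 100000.0}
```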
The geographic strategy might work for economic development in poverty-stricken geographic regions. It seems plausible to me that this would e.g. help pay for public goods in Kenya that the GiveDirectly approach doesn’t currently do a good job of funding. I wonder if Justin Rosenstein would be interested in running a pilot?
If folks don’t mind, a brief word from our sponsors...
I saw Cremer’s post and seriously considered this proposal. Unfortunately I came to the conclusion that the parenthetical point about who comprises the “EA community” is, as far as I can tell, a complete non-starter.
My co-founder from Asana, Justin Rosenstein, left a few years ago to start oneproject.org, and that group came to believe sortition (lottery-based democracy) was the best form of governance. So I came to him with the question of how you might define the electorate in the case of a group like EA. He suggests it’s effectively not possible to do well other than in the case of geographic fencing (i.e. where people have invested in living) or by alternatively using the entire world population.
I have not myself come up with a non-geographic strategy that doesn’t seem highly vulnerable to corrupt intent or vote brigading. Given that the stakes are the ability to control large sums of money, having people stake some of their own (i.e. become “dues-paying” members of some kind) does not seem like a strong enough mitigation. For example, a hostile takeover almost happened to the Sierra Club in SF in 2015 (albeit for reasons I support!).
There is a serious, live question of what defines an EA right now. Are they longtermists? Do they include animals in the circle of moral concern? Shrimp? I’m not sure how you could establish a clear membership criteria without first answering these questions, and that feels backwards. I do think you could have separate pools of money based on separate worldviews, but you’d probably have to cut pretty narrowly which defeats the point.
As an example,
thea top-rated fund at GWWC is the one for Climate Change: https://www.givingwhatwecan.org/charities/founders-pledge-climate-change-fundWorking on climate change is certainly important, but I see that as fairly suggestive evidence that a more democratric approach would be dilutive to EA principles (i.e. neglectedness in this case) and result in more popular cause selection.
It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn’t compare OP against the rest but against the ideal.
One way which you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn’t vulnerable to brigading because it requires putting proportionally more money in the more influence you want to have, but at the same time this makes it less democratic.
More realistically, some proposals in that broad direction which I think could actually be implementable could be:
allowing people to bet against particular OpenPhilanthropy grants producing successful outcomes.
allowing people to bet against OP’s strategic decisions (e.g., against worldview diversification)
I’d love to see bets between OP and other organizations about whose funding is more effective, e.g., I’d love to see a bet between your and Jaan Tallinn on who’s approach is better, where the winner gets some large amount (e.g., $200M) towards their philanthropic approach
I’m particularly attracted to bets which have the shape of “you will change your mind about this in the future”.
At various points in the past, I think I would have personally appreciated having the option to bet...
against hypothetically continued funding towards Just Impact beating GiveDirectly
against your $8M towards INFER having been efficiently spent
that the marginal $5M given out as grants in an ACX grants-type process would be better than your marginal $5M to forecasting (you are giving more than $5M/yeear to forecasting, cf. your $8M grant to INFER).
against worldview diversification being evaluated positively by a neutral third party.
for closer or later AI timelines.
on more abstract topics, e.g., “your forecasting grantmaking is understaffed/underrated”, or “your forecasting grantmaking is too institutional”, “OP finds it too hard to exercise trust and would obtain better results by having more grant officers”.
at the odds implied by some of your public forecasts.
Note that individual people inside OP may agree with some of the above propositions, even though “OP as a whole” may act as if they believe the opposite.
You could also delegate the research of a strategy for democratic participation to other researchers, rather than doing it yourself, e.g., Robin Hanson’s time is probably buy-able with money. It would really surprise me if he (or other researchers) wasn’t able to come up with a few futarchy-adjacent ideas that were at least worth considering.
More broadly, I think that there is a spectrum between:
OpenPhilanthropy makes all decisions democratically and we all sing Kumbaya
Influencing OP decisions requires people to move to the Bay area and become chummy friends with its grants officers. Karnofsky writes tens of thousands of words in blogposts but does not answer comments. At the same time OP ultimately makes decisions which steer the EA community and reverberate across many lives.
Both extremes are caricatures, but we are closer to the second. Contrast with the Survival and Flourishing Fund, which has a number of regrantors with pots which grow proportionally to their estimated success.
I also think that comparison with FTX’s FF is instructive, because it was willing to trust a larger number of regrantors much earlier, and I think was able to produce a number of more experimental, ambitious and innovative grants as a result. For what it’s worth, my impression here is that Beckstead and MacAskill & the others in the FFF team did a great job here which was pretty much independent of FTX’s fraud.
So anyways, I’ve brought up some mechanisms here:
Allowing people to bet against the success of your grants
Allowing people to bet against the success of your strategic decisions
Allowing people to bet that they are better at giving out grants than OP is
Or generally trying out systems other than grants officers.
Using a wide number of regrantors rather than a small number of grant officers.
which perhaps get some the same benefits that democratization could produce for decision-making, namely information aggregation from a wider pool, and distribution of trust.
My sense is that OP could take these and other steps, and they could have some value of information, while perhaps not being all that risky if tried out at a small scale. It’s unclear though whether the managerial effort would be worth it.
PS: I liked the idea behind the Cause Exploration prizes, though I think that they did fail to produce a mechanism for addressing the above points, since the cause proposals were limited to Global Health & Wellbeing, and the worldview questions were too specific, whereas I think that the most important decisions are at the strategic level.
Strongly disagree about betting and prediction markets being useful for this; strongly agree about there being a spectrum here, where at different points the question “how do we decide who’s an EA” is less critical and can be experimented with.
One point on the spectrum could be, for example, that the organisation is mostly democratically run but the board still has veto power (over all decisions, or ones above some sum of money, or something).
Why would you bet against worldview diversification? All in on one worldview? Or something more specific about the way Open Phil does it?
Likely to do with this; a little more discussion on that point here.
Yes, thanks
I’d like to highlight this paragraph some more:
We’re all interested in mostly agent-neutral goals, so these should be much more aligned by default than agent-relative goals such as profit. That’s a huge advantage that we’re not using sufficiently (I think). Impact markets such as ours or that of the SFF make use of the alignment with regrantors and that between funders (through the S-Process).
The upshot is that there are plenty of mechanism that promise to solve problems for funders while (almost as a side-effect) democratizing funding.
With impact markets in particular we want to enable funders to find more funding opportunities and fund more projects that would otherwise be too small for them to review. On the flip side that means that a much more diverse set of fledgling projects gets funded. It’s a win-win.
Hi Dustin :)
FWIW I also don’t particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn’t necessarily look like “democracy” per se and might look more like more regranting, forecasting tournaments, etc.
Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.
It seems like the critics would claim that EA is, if not coercing or subjugating, at least substantially influencing something like the world population in a way that meets the criteria for democratisation. This seems to be the claim in arguments about billionaire philanthropy, for example. I’m not defending or vouching for that claim, but I think whether we are in a sufficiently different situation may be contentious.
This argument seems to be fair to apply towards CEA’s funding decisions as they influence the community, but I do not think I as a self described EA have more justification to decide over bed net distribution than the people of Kenya who are directly affected.
Yes, that seems right.
That argument would be seen as too weak in the political theory context. Then powerful states would have to enfranchise everyone in the world and form a global democracy. It also is too strong in this context, since it implies global democratic control of EA funds, not community control.
I guess I would think that if one wants to argue for democracy as an intrinsic good, that would get you global democracy (and global control of EA funds), and it’s practical and instrumental considerations (which, anyway, are all the considerations in my view) that bite against it.
This is a great point, Alexander. I suspect some people, like ConcernedEAs, believe the specific ideas are superior in some way to what we do now, and it’s just convenient to give them a broad label like “democratizing”. (At Asana, we’re similarly “democratizing” project management!)
Others seem to believe democracy is intrinsically superior to other forms of governance; I’m quite skeptical of that, though agree with tylermjohn that it is often the best way to avoid specific kinds of abuse and coercion. Perhaps in our context there might be more specific solutions along those lines, like an appeals board for COI or retaliation claims. The formal power might still lie with OP, but we would have strong soft reasons for wanting to defer.
In the meantime, I think the forum serves that role, and from my POV we seem reasonably responsive to it? Esp. the folks with high karma.
I probably should have been clearer in my first comment that my interest in democratizing the decisions more was quite selfish: I don’t like having the responsibility, even when I’m largely deferring it to you (which itself is a decision).
My guess is that the current non-democratic EA institutions have serious flaws, and democratic replacement institutions would have even more serious flaws, and it’s still worth trying the democratic institutions (in parallel to the current ones) because 2 flawed structures are better than 1. (For example, because the democratic institutions fund important critical work that the current institutions do not.)
I think this likely depends on who else is funding work in a given area, and what the other funders’ flaws/blind spots are. For instance, if the democratic EA alternative has many of the same flaws/blind spots of larger funders in a cause area, diverting resources from current EA efforts would likely lead to worse outcomes in the cause area as a whole.
Yeah, I definitely agree with this!
An idea I’ve been kicking around in my head for a while is ‘someone should found an organization that investigates what existing humans’ moral priorities are’ - like, if there were a world democracy, what would it vote for?
An idea for a limited version of this within EA could be representatives for interest groups or nations. E.g., the Future Design movement suggests that in decision-making bodies, there should be some people whose role is to advocate for the interests of future generations. There could similarly be a mechanism where (eg) animals got a certain number of votes through human advocates.
GiveWell did some of this research in 2019 (summary, details):
Oh awesome! I’ll check that out.
(Sorry, the formatting here doesn’t seem to work but I don’t know how to fix it)
I think there are two aspects that make “the EA community” a good candidate for who should make decisions:
The need to balance between “getting all perspectives by involving the entire world” and “making sure it’s still about doing the most good possible”. It’s much less vetting over value-alignment than the current state, but still some. I’m not sure it’s the best point on the scale, but I think it might be better than where we are currently. 1.1. another thought about this is that maybe we ought to fix the problem where “value alignment” is, as the other post argues, actually taken much more narrowly than agreeing about “doing the most good”.
The fact that EA is, in the end, a collaborative project and not a corporation. It seems wrong and demotivating to me that EAs have to compete and take big risks on themselves individually to try to have a say about the project they’re still expected to participate in. 2.1. Maybe a way for funders to test this is to ask yourselves—if there weren’t an EA community, would your plans still work as you expect them to? If not, than I think the community ought to also have some say on making decisions.
A couple replies imply that my research on the topic was far too shallow and, sure, I agree.
But I do think that shallow research hits different from my POV, where the one person I have worked most closely with across nearly two decades happens to be personally well researched on the topic. What a fortuitous coincidence! So the fact that he said “yea, that’s a real problem” rather than “it’s probably something you can figure out with some work” was a meaningful update for me, given how many other times we’ve faced problems together.
I can absolutely believe that a different person, or further investigation generally, would yield a better answer, but I consider this a fairly strong prior rather than an arbitrary one. I also can’t point at any clear reference examples of non-geographic democracies that appear to function well and have strong positive impact. A priori, it seems like a great idea, so why is that?
The variations I’ve seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.
“ The variations I’ve seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.”
Agree, but I think we should explore what decision making looks like at different points of that path, instead of only looking at the ends.
I think we’re already along the path, rather than at one end, and thus am inclined to evaluate the merits of specific ideas for change rather than try to weigh the philosophical stance.
https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money?commentId=PP7dbfkQQRsXddCGb
Fair!
I think Open Phil is unique in the EA Community for its degree of transparency, which allows this level of community evaluation (with the exception of the Wytham Abbey purchase), and Open Phil should encourage other EA orgs to follow suit.
In addition to FTX-style regranting experiments, I think (https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve) it would be worth experimenting with, and evaluating:
The EA Community voting on grants that Open Phil considers to be just above or below its funding bar
The EA community voting on how to make grants from a small pot of Open Phil money
Using different voting methods (e.g. quadratic voting, one person one vote, EA Forum weighted karma; see the toy sketch after this list)
And different definitions of ‘the EA Community’ (staff and ex-staff across EA-affiliated orgs, a karma cut-off on the EA Forum, people accepted to EAG, people who have donated to EA Funds, etc.)
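As a toy illustration of how the choice of voting method changes outcomes, here is a minimal sketch, assuming hypothetical voters and grants (none of the names or numbers come from the thread), that tallies the same ballots three ways: one person one vote, karma-weighted, and quadratic voting (where casting n votes for an option costs n² credits):

```python
import math
from collections import defaultdict

# Hypothetical voters: (name, forum karma, {grant: voice credits spent}).
voters = [
    ("alice", 1200, {"grant_A": 64, "grant_B": 36}),
    ("bob",    150, {"grant_B": 100}),
    ("carol",   40, {"grant_A": 25, "grant_C": 75}),
]

def tally(voters):
    one_person_one_vote = defaultdict(float)
    karma_weighted = defaultdict(float)
    quadratic = defaultdict(float)
    for name, karma, spend in voters:
        total = sum(spend.values()) or 1
        for grant, credits in spend.items():
            share = credits / total                 # fraction of this voter's single vote
            one_person_one_vote[grant] += share
            karma_weighted[grant] += share * karma
            quadratic[grant] += math.sqrt(credits)  # n votes cost n^2 credits
    return one_person_one_vote, karma_weighted, quadratic

for label, result in zip(["1p1v", "karma-weighted", "quadratic"], tally(voters)):
    print(label, dict(result))
```

The point of the sketch is just that karma weighting concentrates influence in long-time posters, while quadratic voting dampens (but does not remove) the advantage of whoever has the most credits to spend.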
Thanks Dustin!
I am agnostic wrt your general argument, but as the fund manager of the FP Climate Fund I wanted to quickly weigh in, so some brief comments (I am on a retreat, so my comments are more quickly written than otherwise and I won’t be able to get back immediately; please excuse the lack of sourcing and polishing):
At this point, the dominant view on climate interventions seems to be one of uncertainty about how they compare to GW-style charity, certainly on neartermist grounds. E.g. your own Regranting Challenge allocated $10M to a climate org working on renewables in Southeast Asia. This illustrates that OP seems to believe that climate interventions can clear the near-termist bar absent democratic pressures that dilute. While I am not particularly convinced by that grant (though I am arguably not neutral here!), I do think this correctly captures that it seems plausible that the best climate interventions can compare with GW-style options, esp. when taking into account the very strong “co-benefits” of climate interventions around air pollution and reducing energy poverty.
The key meta theory of impact underlying the Climate Fund is to turn neglectedness on its head and try to leverage the large societal attention to climate, improving how it is spent through targeted advocacy addressing blindspots of the mainstream response. Ultimately, we are after maximizing impact, not maximizing ITN scores. It is quite plausible that neglectedness naively construed misleads us: neglectedness is an impact multiplier in expectation but not a good in itself, and, I’d argue, the ability to leverage large resource streams through advocacy is another impact multiplier – a mechanism by which high cost-effectiveness in a field as crowded as climate becomes at least plausible, as long as one focuses on improving the vast resource allocation rather than just piling on. AFAIK, OP also believes that advocacy generally provides an impact multiplier when one is risk-neutral.
Given that GiveWell interventions are direct service-delivery interventions that do not leverage mechanisms such as advocacy or induced technological change, it also seems possible that leveraged climate interventions are competitive even if climate looks strictly worse than helping the poorest humans on a cause level. Put differently, the impact penalty imposed by risk aversion (if one is risk-neutral wrt impact), or the different structure of the impact space (maybe in GHD, direct interventions are indeed better than more leveraged approaches), could make climate and other interventions competitive with GW-style charity as well.
In case this sounds like I am very sure that climate should be included, this is not so.
I am just trying to say that it seems less clear than it would look on “naive” ITN grounds and that the “right” answer here might be independent of “democratic dilution”.
Sure, I think you can make an argument like that for almost any cause area (find neglected tactics within the cause to create radical leverage). However, I’ve become more skeptical of it over time, both because I’ve angled for those opportunities myself, and because there are increasingly a lot of very smart, strategic climate funders deploying capital. On some level, we should expect the best opportunities to find funders.
>> E.g. your own Regranting Challenge allocated 10m to a climate org on renewables in Southeast Asia. This illustrates that OP seems to believe that climate interventions can clear the near-termist bar absent democratic pressures that dilute.
The award page has this line: “We are particularly interested in funding work to decarbonize the power sector because of the large and neglected impacts of harmful ambient air pollution, to which coal is a meaningful contributor.” I.e. it’s part of our new air quality focus area. Without actually having read the write-up, I’m sure they considered climate impact too, but I doubt it would have gotten the award without that benefit.
That said, the Kigali grant from 2017 is more like your framing. (There was much less climate funding then.)
Thank you for your reply and apologies for the delay!
To be clear, the reason I think this is a more convincing argument in climate than in many other causes is (a) the vastness of societal and philanthropic climate attention and (b) its very predictable brokenness, with climate philanthropy and climate action more broadly generally “captured” by one particular vision of solving the problem (mainstream environmentalism, see qualification below).
Yes, one can make this argument for more causes, but the implication of this does not seem clear – one implication could be that there actually are many more interventions that in fact do meet the bar, and that discounting leverage arguments unduly constrains our portfolio.
That seems fair, and my own uncertainty primarily stems from this – I do think overall climate philanthropy has improved significantly over the last few years and, in particular, with the influx of climate philanthropy from tech, has become more ideologically diverse and more balanced overall (different biases from different funders), which seems good.
It might be that this grant needs air pollution benefits to put it over the bar, but – as far as I can tell – there is no reason to think this grant is more strongly correlated with air pollution benefits than many other climate grants.
Cause areas are mental abstractions and whether one abstracts as “climate” or “clean energy acceleration” then seems to affect whether or not they can meet the bar. The Climate Fund is quite explicit about taking the integrated perspective and generally makes grants that also have significant co-benefits in the form of avoided air pollution and driving down the cost of clean energy.
It seems to me that consistency would require that we should assume other climate grants can meet the OP bar as well and that, by implication, the featuring of the Climate Fund on GWWC seems unproblematic.
FWIW, I strongly agree with you that most climate grants do not meet the bar and one needs to spend a lot more time per grant than in other areas and one needs an explicit model of the blindspots left by the mainstream climate response.
>> To be clear, the reason I think this is a more convincing argument in climate than in many other causes is (a) the vastness of societal and philanthropic climate attention and (b) its very predictable brokenness, with climate philanthropy and climate action more broadly generally “captured” by one particular vision of solving the problem (mainstream environmentalism, see qualification below).
Vast attention is the mechanism that causes popular causes to usually have lower ROI on the margin; i.e. some of all that attention is likely competent.
I’m not sure what other causes you have in mind here. I think the argument with your two conditions applies equally well to large philanthropic areas like education, poverty/homelessness, and art.
>> It seems to me that consistency would require that we should assume other climate grants can meet the OP bar as well
Absolutely, I agree they can. Do you publish your cost effectiveness estimates?
Nit: “a top-rated fund”
(GWWC brands many funds as “top-rated”, reflecting the views of their trusted evaluators.)
I think this idea is worth an orders-of-magnitude deeper investigation than what you’ve described. Such investigations seem worth funding.
It’s also worth noting that OP’s quotation is somewhat selective; here I include the sub-bullets:
Absolutely, I did not mean my comment to be the final word, and in fact was hoping for interesting suggestions to arise.
Good point on the more detailed plan, though I think this starts to look a lot more like what we do today if you squint the right way. e.g. OP program officers are subject matter experts (who also consult with external subject matter experts), and the forum regularly tears apart their decisions via posts and discussion, which then gets fed back into the process.
Thanks so much for sharing your perspective, as the main party involved.[1] A minor nitpick:
I think this might be misleading: there are 12 top-rated funds. The Climate Change one is one of the three “Top-rated funds working across multiple cause areas”.
(and thank you for all the good you’re doing)
Ah I didn’t know that, thank you!
Good comment.
But I think I don’t really buy the first point. We could come up with some kind of electorate that’s frustrating but better than the whole world. Forum users weighted by forum karma is a semi-democratic system that’s better than any you suggest and pretty robust to takeover (though the forum would get a lot of spam).
My issue with that is I don’t believe the forum makes better decisions than OpenPhil. Heck, we could test it: get the forum to vote on its allocation of funds each year and then compare in 5 years to what OpenPhil did, and see which we’d prefer.
I bet that we’d pick OpenPhil’s slate from 5 years ago over the forum average from then.
So yeah, mainly I buy your second point that democratic approaches would lead to less effective resource allocation.
(As an aside, there is democratic power here. When we all turned on SBF, that was democratic power: it turns out that his donations did not buy him cover after the fact, and I think that was good.)
In short: I think the current system works. It annoys me a bit, but I can’t come up with a better one.
The proposal of forum users weighted by karma can be taken over if you have a large group of new users all voting for each other. You could require a minimum number of comments, lag in karma score by a year or more, require new comments within the past few months and so on to make it harder for a takeover, but if a large enough group is invested in takeover and willing to put in the time and effort, I think they could do it. I suppose if the karma lags are long enough and engagement requirement great enough, they might lose interest and be unable to coordinate the takeover.
You could stop counting karma starting from ~now (or some specific date), but that would mean severely underweighting legitimate newcomers. EDIT: But maybe you could just do this again in the future without letting everyone know ahead of time when or what your rules will be, so newcomers can eventually have a say, but it’ll be harder to game.
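For what it’s worth, these anti-takeover rules (a retroactive karma cutoff date, a minimum comment count, a recent-activity requirement) compose into a single eligibility check. A rough sketch with entirely made-up thresholds and a hypothetical `user` object (the attribute names are invented for illustration):

```python
from datetime import datetime, timedelta

# Illustrative thresholds only; the real numbers would need actual debate.
KARMA_CUTOFF_DATE = datetime(2023, 1, 1)      # only karma earned before this date counts
MIN_LAGGED_KARMA = 100
MIN_COMMENTS = 20
RECENT_ACTIVITY_WINDOW = timedelta(days=120)

def eligible(user, now):
    """Hypothetical user object with .karma_before(date), .comment_count, .last_comment_at."""
    return (
        user.karma_before(KARMA_CUTOFF_DATE) >= MIN_LAGGED_KARMA
        and user.comment_count >= MIN_COMMENTS
        and now - user.last_comment_at <= RECENT_ACTIVITY_WINDOW
    )
```

The trade-off described above shows up directly in the constants: the further back you push the cutoff date, the harder a takeover becomes and the more legitimate newcomers you exclude.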
You could also try to cluster users by voting patterns to identify and stop takeovers, but this would be worrying, since it could be used to target legitimate EA subgroups.
As I say, it seems like this isn’t the actual problem: even if we did get the right group, I wouldn’t trust them to be better than OpenPhil.
I was trying to highlight a bootstrapping problem, but by no means meant it to be the only problem.
It’s not crazy to me to create some sort of formal system to weigh the opinions of high-karma forum posters, though as you say that is only semi-democratic, and so reintroduces some of the issues Cremer et al were trying to solve in the first place.
I am open-minded about whether it would be better than Open Phil, assuming they get the time to invest in making decisions well after being chosen (sortition S.O.P.).
I agree that some sort of periodic rules reveal could significantly mitigate corruption issues. Maybe each generation of the chosen council could pick new rules that determine the subsequent one.
A simpler version of this is to have a system of membership, where existing members can nominate new members. Maybe every year some percentage of the membership gets chosen randomly and given the opportunity to nominate someone. In addition to having a process for becoming a member, there could also be processes for achieving higher levels of seniority, with more senior members granted greater input into membership decisions, and processes for nudging people who’ve lost interest in EA to let their membership lapse, and processes to kick out people found guilty of wrongdoing.
I assume there are a lot of membership-based organizations which could be studied: Rotary International, the Red Cross, national fraternities & sororities, etc.
A membership system might sound like a lot of overhead, but I think we’re already doing an ad-hoc, informal version of something like this. As NegativeNuno put it: “Influencing OP decisions requires people to move to the Bay area and become chummy friends with its grants officers.” My vague impression is that at least a few grantmakers like this system, and believe it is a good and necessary way for people to build trust. So if we step back and acknowledge that “building trust” is an objective, and it’s currently being pursued in an ad-hoc way which is probably not very robust, we can ask: “is there a better way to achieve that objective?”
How much are you thinking karma would be “worth”? It’s not that hard for an intelligent person to simulate being an EA if the incentives are right. If significant money were involved, you’d have to heavily restrict the list of organizations the user could vote for, which limits the point of a semi-democratic process in the first place.
E.g., if climate change were not out of bounds and karma were worth $10 a point, arguably the most impactful thing for climate change a non-EA, moderately bright university student could do would be . . . mine karma by pretending to be an EA. I haven’t tried, but 40 to 60 karma per hour from someone consciously trying to mine karma sounds plausible (i.e. $400 to $600 an hour at that valuation).
I do potentially like the idea of karma giving the right to direct a very small amount of funding, as much for the information value as anything else.
Dustin, I’m pleased that you seriously considered the proposal. I do think that it could be worth funding deeper research into this (assuming you haven’t done this already) - both ‘should we expect better outcomes if funding decisions were made more democratically?’ and ‘if we were coming up with a system to do this, how could we get around the problems you describe?’ One way to do this would be a sort of ‘adversarial collaboration’ between someone who is sceptical of the proposal and someone who’s broadly in favour.
Hi Dustin,
We’re very happy to hear that you have seriously considered these issues.
If the who-gets-to-vote problem was solved, would your opinion change?
We concur that corrupt intent/vote-brigading is a potential drawback, but not an unsolvable one.
We discuss some of these issues in our response to Halstead on Doing EA Better:
There are several possible factors to be used to draw a hypothetical boundary, e.g.
Committing to and fulfilling the Giving Pledge for a certain length of time
Working at an EA org
Doing community-building work
Donating a certain amount/fraction of your income
Active participation at an EAG
Etc.
These and others could be combined to define some sort of boundary, though of course it would need to be kept under constant monitoring & evaluation.
Given a somewhat costly signal of alignment, it seems very unlikely that someone would dedicate a significant portion of their life to going “deep cover” in EA in order to have a very small chance of being randomly selected to become one among multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
In any case, it seems like something at least worth investigating seriously, which could eventually become suitable for exploring through a consensus-building tool, e.g. pol.is.
What would your reaction be to an investigation of the boundary-drawing question as well as small-scale experimentation like that we suggest in Doing EA Better?
What would your criteria for “success” be, and would you be likely to change your mind if those were met?
Given that your proposal is to start small, why do you need my blessing? If this is a good idea, then you should be able to fund it and pursue it with other EA donors and effectively end up with a competitor to the MIF. And if the grants look good, it would become a target for OP funds. I don’t think OP feels their own grants are the best possible, but rather the best possible within their local specialization. Hence the regranting program.
Speaking for myself, I think your list of criteria makes sense but is pretty far from a democracy. And the smaller you make the community of eligible deciders, the higher the chance they will be called for duty, which they may not actually want. How is this the same or different from donor lotteries, and what can be learned from that? (To round this out a little, I think your list is effectively skin in the game in the form of invested time rather than dollars.)
Because the donor lottery weights by donation size, the Benefactor or a large earning-to-give donor is much more likely to win than someone doing object-level work who can only afford a smaller donation. Preferences will still get funded in proportion to the financial resources of each donor, so the preferences of those with little money remain almost unaccounted for (even though there is little reason to think they wouldn’t do as well as the more likely winners). Psychologically, I can understand why the current donor lottery would be unappealing to most smaller donors.
Weighting by size is necessary if you want to make the donor lottery trustless—because a donor’s EV is the same as if they donated to their preferred causes directly, adding someone who secretly wants to give to a cat rescue doesn’t harm other donors. But if you employ methods of verifying trustworthiness, a donor lottery doesn’t have to be trustless. Turning the pot over to a committee of lottery winners, rather than a single winner, would further increase confidence that the winners would make reasonable choices.
Thus, one moderate step toward amplifying the preferences of those with less money would be a weighted donor lottery: donors would get a multiplier on their monetary donation amount based on how much time-commitment skin in the game they had. Of course, this would require other donors to accept a lower percentage of tickets than their financial contribution percentage, which is where people or organizations with a lot of money would come in. The amount of funding directed by Open Phil (and formerly, FTX) has caused people to move away from earning-to-give, which reduced the supply of potential entrants who would be willing to accept a significantly lower share of tickets per dollar than smaller donors. So I would support large donors providing some funds to a weighted donor lottery in a way that boosts the winning odds (either solo or as part of a committee) for donors who can demonstrate time-commitment skin in the game.[1]
Contributing a smaller amount to the pot without taking any tickets is mostly equivalent—and perhaps optically superior—to taking tickets on a somewhat larger contribution.
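To make the weighting idea concrete, here is a minimal sketch of how tickets might be allocated and a winner drawn. The multiplier schedule and dollar amounts are invented for illustration, not a proposal:

```python
import random

# (donor, dollars contributed, time-commitment multiplier).
# Multipliers are made up: e.g. 1.0 = pure donor, 3.0 = years of full-time object-level work.
entrants = [
    ("large_donor", 90_000, 1.0),
    ("org_staffer",  2_000, 3.0),
    ("volunteer",    1_000, 2.0),
]

def draw_winner(entrants, rng=random.Random(0)):
    tickets = [(name, dollars * mult) for name, dollars, mult in entrants]
    total = sum(t for _, t in tickets)
    pick = rng.uniform(0, total)
    running = 0.0
    for name, t in tickets:
        running += t
        if pick <= running:
            return name, t / total                     # winner and their share of tickets
    return tickets[-1][0], tickets[-1][1] / total      # guard against float rounding

print(draw_winner(entrants))
```

Even with a 3x multiplier the large donor still holds over 90% of the tickets in this example, which is exactly the dynamic the weighting is meant to soften; how far to push the multipliers is the real design question.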
In general, doing small-scale experiments seems like a good idea. However, in this case, there are potentially large costs even to small-scale experiments, if the small-scale experiment already attempts to tackle the boundary-drawing question.
If we decide on rules and boundaries for who has voting rights (or participates in sortition) and who does not, it has the potential to create lots of drama and politics (e.g. discussions of whether we should exclude right-wing people, whether SBF should have voting rights if he is in prison, whether we should exclude AI capabilities people, which organizations count as EA orgs, etc.). Especially if there is “constant monitoring & evaluation”. And it would lead to more centralization and bureaucracy.
And I think it’s likely that such rules would be understood as EA membership, where you are either EA and have voting rights, or you are not EA and do not have voting rights. At least for “EAG acceptance”, people generally understand that this does not constitute EA membership.
I think it would be probably bad if we had anything like an official EA membership.
My decision criterion would be whether the chosen grants look likely to be better than OP’s own grants in expectation. (n.b. I don’t think comparing to the grants people like least ex post is a good way to do this.)
So ultimately, I wouldn’t be willing to pre-commit large dollars to such an experiment. I’m open-minded that it could be better, but I don’t expect it to be, so that would violate the key principle of our giving.
Re: large costs to small-scale experiments, it seems notable that those are all costs incurred by the community rather than $ costs. So if the community believes in the ROI, perhaps they are worth the risk?
Appreciate you engaging!
What if you limited it to an Open Phil-selected list of organizations (e.g. Open Phil and EA Funds grantees) and set organization maximums (either uniformly or based on past budgets/revenue, say)? Of course, that may defeat some of the purpose because it rules out a lot, but it still gives the community more say in the relative priority among Open Phil’s existing priorities. You could also set maximums per cause area (also defined by Open Phil) to prevent almost all of the money going to a small number of causes.
Instead of voting, you could do donation matching with individual maximums to make sure people have some skin in the game. Basically like Double Up Drive, but with many more options.
EDIT: AndrewDoris suggested employee donation matching at EA orgs here: https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money?commentId=hDmQarAwpZLyRTCD7
That’s something Open Phil could earmark money to EA orgs for, and that would allow donations to organizations not already on Open Phil’s radar.
Or just directly give EA org employees regranting funds, with no need for them to donate their own money to regrant them. However, requiring some donation, maybe matching at a high rate, e.g. 5:1, gets them to take on at least some personal cost to direct funding.
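A minimal sketch of what capped matching could look like; the 5:1 rate is the one floated above, while the per-person cap is a placeholder:

```python
MATCH_RATE = 5                   # Open Phil dollars per dollar the employee donates (5:1, as above)
PER_PERSON_MATCH_CAP = 10_000    # hypothetical ceiling on matched funds per employee

def matched_amount(personal_donation: float) -> float:
    """Open Phil funds released by an employee's own donation, under the cap."""
    return min(personal_donation * MATCH_RATE, PER_PERSON_MATCH_CAP)

# A $500 personal donation would direct $2,500 of matching funds;
# anything above $2,000 personal hits the $10,000 cap.
print(matched_amount(500), matched_amount(5_000))
```

The cap keeps any single employee’s personal cost (and influence) bounded while still requiring some skin in the game.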
The geographic strategy might work for economic development in poverty-stricken geographic regions. It seems plausible to me that this would e.g. help pay for public goods in Kenya that the GiveDirectly approach doesn’t currently do a good job of funding. I wonder if Justin Rosenstein would be interested in running a pilot?
People who get accepted to EAG?