I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
I don’t think that is surprising given that only one’s first-place vote among non-eliminated orgs counts. The screenshots below suggest that when Arthropoda is eliminated, about half of its votes go to SWP, with most of the rest going to WAI and RP. From public information, we don’t know where SWP votes would go if it were eliminated, but it’s plausible that many would go to Arthropoda if it were still in the race.
Moreover, at the time of Arthropoda’s elimination, it was behind SWP 46-31, so while there’s evidence of a clear rank-ordering preference among the electorate, I would not call it overwhelming.
I have been doing that, but from a UI/UX perspective people need to first intuit that there is a race between the three listed and the ~2 next in line, and then click 2-3 times in succession. I think showing only the top three was the correct default UI/UX early on, but at this stage in the process the outcome of those pairwise comparisons is pretty important.
It’s hard for me to assess how successful the current mechanism is, but I noticed that ~20-25% of people with votes for orgs that made the top 8 do not have a vote listed when we get down to the top 3. There are various possible reasons for that, but it does raise the possibility that nudging people toward the outcome-determinative elements of the ranking process would be helpful in the final days.
I read this at first as about the programming of the leaderboard and thought of an answer, but (now with caffeine) think that isn’t what you are asking! Anyway, that answer was to include both the current “in the money” section and a “last ones eliminated / in the hunt” section showing the 4th/5th/maybe 6th-place choices on the front page.
This would allow the reader to see at a glance what the most outcome-relevant pairwise comparisons are and how close they are. In turn, this would encourage them to vote if they had a clear opinion on those and focus their attention on the most impactful elements of their ranking. (They should still vote honestly for statistical purposes but may want to pay special attention to the pairwise comparisons that will determine money moved.)
I don’t think academic philosophy is the right frame of reference here.
We can imagine a range of human pursuits that form a continuum of concern about COIs. On the one end, chess is a game of perfect information trivially obtained by chess critics. Even if COIs somehow existed in chess, thinking about them is really unlikely to add value because evaluating the player’s moves will ~always be easier and more informative.[1] On the other hand, a politician may vote on the basis of classified information, very imperfect information, and considerations for which it is very difficult to display reasoning transparency. I care about COIs a lot there!
I’m not a professional (or even amateur) philosopher, but philosophical discourse strikes me as much closer to the chess side of the continuum. Being a billionaire philanthropist seems closer to the middle of the continuum. If we were grading EA/OP/GV by academic-philosophy norms, I suspect we would fail some of their papers. As Thorstad has mentioned, there is little public discussion of key biorisk information on infohazard grounds (and he was unable to obtain the information privately either). We lack the information (such as a full investigation into various concerns that have been raised) to fully evaluate whether GV has acted wisely in channeling tens of millions of dollars into CEA and other EVF projects. The recent withdrawal from certain animal-welfare subareas was not a paragon of reasoning transparency.
To be clear, it would be unfair to judge GV (or billionaire philanthropists more generally) by the standards of academic philosophy or chess. There’s a good reason that the practice of philanthropy involves consideration of non-public (even sensitive) information and decisions that are difficult to convey with reasoning transparency. But I don’t think it is appropriate to then turn around and apply those academic standards, which are premised on the ready availability of information and very high reasoning transparency, to the critics of billionaire philanthropists.
In the end, I don’t find the basic argument for a significant COI against “anti-capitalist” interventions by a single random billionaire philanthropist (or by Dustin and Cari specifically) to be particularly convincing. But I do find the argument stronger when applied to billionaire philanthropists as a class. I don’t think that’s because I am anti-capitalist; I would also be skeptical of a system in which university professors controlled large swaths of the philanthropic funding base (they might be prone to dismissing the downsides of the university-industrial complex) or in which people who had made their money through crypto did (I expect they would be quite prone to dismissing the downsides of crypto).
~~~~
As for us non-billionaires, the effect of (true and untrue) beliefs about what funders will / won’t fund on what gets proposed and what gets done seems obvious. There’s on-Forum evidence that being too far away from GV’s political views (i.e., being “right-coded”) is seen as a liability. So that doesn’t seem like psychologizing or a proposition that needs much support.
[1] I set aside the question of whether someone is throwing matches or otherwise colluding.
As noted in this comment, my willingness to update on the results themselves is limited by concerns that the results could be significantly influenced by different levels of get-out-the-vote efforts (which I would consider noise). Unless I can find a way to minimize that potential noise source, I expect the results to change my mind only insofar as they promote a few organizations that ranked higher than expected onto my consider/research list for early 2025 donations, but I won’t assign significant weight per se to the vote totals in my final decisions.
To what extent are people voting in a manner that is consistent / not consistent with their past or intended future personal donations? I notice that my current ranking doesn’t really align with the last time I handed out donations of my own money (end of 2023, for tax reasons). Some of that may reflect changed priorities and development over the past year, but I doubt all of it does.
To the extent that some of us have different impulses when handing out our own money versus (largely) other people’s money, how might we disentangle the extent to which each set of impulses is correct? (For a third data point, my votes in the Equal Hands pilot have been somewhere between these two.)
I think the difference may largely come down to psychological factors, such as:
I am probably more conservative with my own donations than in voting for where to send pooled donations.
I sometimes want to give most orgs at least half a loaf, and this tendency feels more realizable when spending other people’s money because I am thinking more at a group level. In contrast, I don’t have enough in the pot to give a meaningful amount of money to a dozen different orgs and still direct most of my money to those I think are most important to fund.
A kinder concept than bias would be conflict of interest. In the broader society, we normally don’t expect a critic to prove actual biased decision-making to score a point; identifying a meaningful conflict of interest is enough. And it’s not generally considered “psychologizing those [one] disagrees with” to point to a possible COI, even if the identification is mediated by assumptions about the person’s internal mental functions.
Instead, they just seem to presuppose that a broadly anti-capitalist leftism is obviously correct, such that anyone who doesn’t share their politics (for which, recall, we have been given no argument whatsoever) [ . . . .]
I don’t think EAs are Thorstad’s primary intended audience here. To the extent that most of that audience thinks what you characterize as “a broadly anti-capitalist leftism” is correct, or at least is aware of the arguments that are advanced in favor of that position, it isn’t necessarily a good use of either his time or his readers’ time to reinvent the wheel. This is roughly similar to how most posts here generally assume the core ideas associated with EA and are not likely to move the needle with people who are either uninformed about or unpersuaded by them. I’m guessing he would write differently if writing specifically for an EA audience.
More broadly, one could argue that the flipside of the aphorism that extraordinary claims require extraordinary evidence is that one only needs to put on (at most) a minimal case to refute an extraordinary claim unless and until serious evidence has been marshalled in its favor. It’s plausible to think—for instance—that “it is right and proper for billionaires (and their agents) to have so much influence and discretion over philanthropy” or “it is right and proper for Dustin and Cari, and their agents, to have so much influence and discretion over EA” are indeed extraordinary claims, and I haven’t seen what I would characterize as serious evidence in support of them. Relatedly, capitalism doesn’t have a better claim to being the default starting point than does anti-capitalism.
people are literally voting based on what OP is not funding
Given that Aaron’s point was about “marginal dollars,” this doesn’t strike me as a major reason against it. RP is currently #1. EA Animal Welfare Fund is currently #2, and I don’t think the kinds of work it funds are necessarily things OP won’t fund.
I didn’t even vote for GG bc I know it won’t win, but it does warm my cold dead heart that four whole people did
You should vote for your honest preference for data-gathering purposes (and because it’s epistemically good for your cold dead heart!). Under the IRV system, your vote will be transferred to your next-highest-ranked charity once GG is eliminated, so it is not a “wasted vote” by any means.
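(If it helps to see the mechanic concretely, here is a minimal sketch of an IRV tally in Python, using made-up ballots rather than the actual election data; the Forum team’s real counting code surely handles ties and edge cases more carefully, so treat this purely as an illustration of how votes transfer.)

```python
from collections import Counter

def irv_winners(ballots):
    """Toy instant-runoff count: repeatedly eliminate the org with the fewest
    first-place votes and transfer each affected ballot to the voter's
    next-ranked surviving org. Ties are broken arbitrarily here."""
    remaining = {org for ballot in ballots for org in ballot}
    while len(remaining) > 1:
        # Each ballot counts for its highest-ranked org still in the race.
        counts = Counter(
            next((org for org in ballot if org in remaining), None)
            for ballot in ballots
        )
        counts.pop(None, None)  # ballots with no surviving choices are exhausted
        remaining.discard(min(remaining, key=lambda org: counts.get(org, 0)))
    return remaining

# Hypothetical ballots for illustration only; not the real election data.
ballots = [
    ["GG", "SWP", "RP"],         # transfers to SWP once GG is eliminated
    ["SWP", "Arthropoda"],
    ["Arthropoda", "SWP", "WAI"],
    ["RP", "WAI"],
]
print(irv_winners(ballots))
```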
I’d update significantly more in that direction if the final outcomes for the subset of voters with over X karma (1000? 2000? I dunno) were similar to the current all-voter data.
I say that not because I think only medium-plus karma voters have value, but because it’s the cleanest way I can think of to mitigate the risk that the results have been affected by off-Forum advocacy and organizing. Those efforts have been blessed by the mods within certain bounds, but the effects of superior get-out-the-vote efforts are noise for determining what the “consensus EA” view is, and the resulting electorate may be rather unrepresentative. In contrast, the set of medium-plus karma voters seems more likely to be representative of the broader community’s thinking regarding cause areas. (If there are other voter characteristics that could be analyzed and would be expected to be broadly representative, those would be worth looking at too.) I sketch below what such a re-tally might look like.
For example, it seemed fairly clear to me that animal-advocacy folks were significantly more on the ball in claiming funds during Manifund’s EA Community Choice event than other folks. This makes sense given how funding constrained animal advocacy is. So the possibility that something similar could be going on here caps how much I’d be willing to update on the current data.
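Here is that sketch, assuming one could get anonymized ballots paired with each voter’s karma; the field names and cutoff are hypothetical, and one would ideally re-run the full IRV tally rather than just the first-choice counts shown:

```python
from collections import Counter

KARMA_THRESHOLD = 1000  # or 2000; the right cutoff is debatable

def first_choice_counts(voters, threshold=KARMA_THRESHOLD):
    """Tally first-place votes among medium-plus karma voters only."""
    return Counter(
        v["ranking"][0]
        for v in voters
        if v["karma"] >= threshold and v["ranking"]
    )

# Hypothetical voter records purely for illustration; real ballots are not public.
voters = [
    {"karma": 2500, "ranking": ["RP", "SWP"]},
    {"karma": 120,  "ranking": ["AMF"]},        # excluded by the karma filter
    {"karma": 1800, "ranking": ["AMF", "RP"]},
]
print(first_choice_counts(voters))  # Counter({'RP': 1, 'AMF': 1})
```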
Marcus says:
But a pause gets no additional benefit whereas most other regulation gets additional benefit (like model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.)
Matrice says:
Due to this, many in PauseAI are trying to do coalition politics bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists).
These seem to be hinting at an important crux. On the one hand, I can see that cooperating with people who have other concerns about AI could water down the content of one’s advocacy.
On the other hand, might it be easier to get a broader coalition behind a pause, or some other form of regulation that many others in an AI-concerned coalition would view as a win? At least at a cursory level, many of the alternatives Marcus mentioned sound like things that wouldn’t interest other members of a broader coalition, only people focused on x-risk.
Whether x-risk-focused advocates alone can achieve enough policy wins against the power of Big AI (and corporations interested in harnessing it) is unclear to me. If other members of the AI-concerned coalition have significantly more influence than the x-risk group, such that a coalition-based strategy would excessively “risk focusing on policies and AI systems that have little to do with existential risk,” then it is unclear to me whether the x-risk group has enough influence to go it alone either. In that case, would they have been better off with the coalition even if most of the coalition’s work only generically slowed down AI rather than bringing specific x-risk reductions?
My understanding is that most successful political/social movements employ a fairly wide range of strategies—from elite lobbying to grassroots work, from narrow focus on the movement’s core objectives to building coalitions with those who may have common opponents or somewhat associated concerns. Ultimately, elites care about staying in power, and most countries important to AI do have elections. AI advocates are not wrong that imposing a bunch of regulations of any sort will slow down AI, make it harder for AI to save someone like me from cancer 25-35 years down the road, and otherwise impose some real costs. There has to be enough popular support for paying those costs.
So my starting point would be an “all of the above” strategy, rather than giving up on coalition building without first making a concerted effort. Maybe PauseAI the org, or pause advocacy the idea, isn’t the best way to go about coalition building or to build broad-based public support. But I’m not seeing much public discussion of better ways?
Press release is from Stop AI, which I think is a separate outfit?
The cognitive burden of any election with 39 candidates will always be significant. What about a system (whether score-based or ranking-based) in which each voter is only presented with 8-12 of the candidates?
While the nominal goal of the election is to identify three winners, I think the information-gathering objective is much more important here than in political elections. The broader ranking list, even more so than the ultimate outcome, is what matters for helping donors identify orgs they should research more, should reconsider, etc. I’d rather get a chance at the considered opinion of ~25% of the electorate than a possible but more cursory assessment by 100%.
There may not be; I don’t feel I’ve exhausted the list of possibilities, so I hedged my comment a bit.
I can envision worlds in which supporters of your second choice would have an incentive to knock your first choice out so that your vote would flow to their supported charity instead. I suppose those people could write a comment critical of your first choice to try to get it eliminated before theirs? That seems awfully speculative, and it only seems a plausible attack if one knows most/all of the voting orders of people who ranked the org first, especially since only the top three are in the money and changing the order of elimination ordinarily won’t change the top three.
One’s second, third, etc. choices would only come into play when/if one’s first choice is eliminated by the IRV system. Although there could be some circumstances in which voting solely for one’s #1 choice could be tactically wise, I believe they are rather narrow and would only be knowable in the last day or two.
I think it’s reasonable for a donor to decide where to donate based on publicly available data and to share their conclusions with others. Michael disclosed the scope and limitations of his analysis, and referred to other funders having made different decisions. The implied reader of the post is pretty sophisticated and would be expected to know that these funders may have access to information on initiatives that haven’t been/can’t be publicly discussed.
While I appreciate why orgs may not want to release public information on all initiatives, the unavoidable consequence of that decision is that small/medium donors are not in a position to consider those initiatives when deciding whether to donate. Moreover, I think Open Phil et al. are capable of adjusting their own donation patterns in consideration of the fact that some orgs’ ability to fundraise from the broader EA & AIS communities is impaired by their need for unusually-low-for-EA levels of public transparency.
“Run posts by orgs” is ordinarily a good practice, at least where one is conducting a deep dive into some issue on which one might expect significant information to be disclosed. Here, it seems reasonable to assume that orgs will have made a conscious decision about what general information they want to share with would-be small/medium donors. So there isn’t much reason to expect that an inquiry (along with notice that the author is planning to publish on-Forum) would yield material additional information.[1] Against that, the cost of reaching out to ~28 orgs is not trivial and would be a significant barrier to people authoring this kind of post. The post doesn’t seem to rely on significant non-public information, accuse anyone of misconduct, or have other characteristics that would make advance notice and comment particularly valuable.
Balancing all of that, I think the opportunity for orgs to respond to the post in comments was and is adequate here.
[1] In contrast, when one is writing a deep dive on a narrower issue, the odds seem considerably higher that the organization has material information that isn’t published because of opportunity costs, lack of any reason to think there would be public interest, etc. But I’d expect most orgs’ basic fundraising ask to have been at least moderately deliberate.
You may want to add something like [AI Policy] to the title to clue readers into the main subject matter and whether they’d like to invest the time to click on and read it. There’s the AI tag, but that doesn’t show up on the frontpage, at least on my mobile.
(Your ranking isn’t displayed on the comment thread, so if you were intending to communicate to readers which organizations you were referring to, you may want to edit your comment here.)
I don’t have a lot of confidence in this vote, and it’s quite possible my ranking will change in important ways. Because only the top three organizations place in the money, we will all have the ability to narrow down which placements are likely to be outcome-relevant as the running counts start displaying. I’m quite sure I have not given all 36 organizations a fair shake in the 5-10 minutes I devoted to actually voting.
Time for the strategic voting to begin!
One observation is that RP has a strategic advantage here as a cross-cause org, since voters may be unsure which cause area the marginal funding will benefit. This makes it a potentially attractive second-choice option when the top vote-getter in a cause area is eliminated. Compare, for instance, its current significant lead in the top-3 with the top-5 results (with AMF and PauseAI present as the last orgs standing in global health and x-risk).
Rules are rules and should be followed, but I think the top-5 better represents the will of the electorate than the top-3. (There are also a non-trivial number of voters who did not indicate a preference in the top-3 but who did in the top-5 or at least top-8.)