I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
(Your ranking isn’t displayed on the comment thread, so if you were intending to communicate to the readership which organizations you were referring to, you may want to edit your comment here.)
I don’t have a lot of confidence in this vote, and it’s quite possible my ranking will change in important ways. Because only the top three organizations place in the money, we will all have the ability to narrow down which placements are likely to be outcome-relevant as the running counts start displaying. I’m quite sure I have not given all 36 organizations a fair shake in the 5-10 minutes I devoted to actually voting.
Has there been any consideration of creating sub-funds for some or all of the critical ecosystem gaps? Conditioned on areas A, B, and C being both critical and ~not being addressed elsewhere, it would feel a bit unexpected if donors had no way to give monies to A, B, or C exclusively.
If a donor values A, B, and C differently—and yet the donor’s only option is to defer to LTFF’s allocation of their marginal donation between A, B, and C—they may “score” LTFF less well than they would score an opportunity to donate to whichever area they rated most highly by their own lights.
The best reason to think this might not make a difference: If enough donors wanted to defer to LTFF’s allocation among the three areas, then donor choice of a specific cause would have no practical effect due to funging.
Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:
One of the overarching questions to consider when reading any lawsuit is that of remedy. For instance, the classic remedy for breach of contract is money damages . . . and the potential money damages here don’t look that extensive relative to OpenAI’s cash burn.
Broader “equitable” remedies are sometimes available, but they are more discretionary and there may be some significant barriers to them here. Specifically, a court would need to consider the effects of any equitable relief on third parties who haven’t done anything wrongful (like the bulk of OpenAI employees, or investors who weren’t part of an alleged conspiracy), and consider whether Musk unreasonably delayed bringing this lawsuit (especially in light of those third-party interests). As a hot take, I am inclined to think these factors would weigh powerfully against certain types of equitable remedies.
Stated more colloquially, the adverse effects on third parties and the delay (“laches”) would favor a conclusion that Musk will have to be content with money damages, even if they fall short of giving him full relief.
Third-party interests and delay may be less of a barrier to equitable relief against Altman himself.
Musk is an extremely sophisticated party capable of bargaining for what he wanted out of his grants (e.g., a board seat), and he’s unlikely to get the same sort of solicitude on an implied-contract theory that an ordinary individual might. For example, I think it was likely foreseeable between 2015 and January 2017—when he gave the bulk of the funds in question—that pursuing AGI could be crazy expensive and might require more commercial relationships than your average non-profit would ever consider. So I’d be hesitant to infer implied-contractual constraints on OpenAI’s conduct much beyond what section 501(c)(3) of the Internal Revenue Code and California non-profit law require.
The fraud theories are tricky because the temporal correspondence between accepting the bulk of the funds and the alleged deceit feels shaky here. By way of rough analogy, running up a bunch of credit card bills you never intended to pay back is fraud. Running up bills and then later deciding that you aren’t going to pay them back is generally only a contractual violation. I’m not deep into OpenAI drama, but a version of the story in which the heel turn happened later in the game than most/all of Musk’s donations and assistance seems plausible to me.
The surprise for me was that QURI has only been able to fundraise for ~24% of its lower-estimate CY25 funding needs. Admittedly, I don’t follow funding trends in this space, so maybe that news isn’t surprising to others. The budget seems sensible to me, by the way. Having less runway also makes sense in light of events over the past two years.
I think the confusion for me involves a perceived tension between numbers that might suggest a critical budget shortfall at present and text that seemed more optimistic in tone (e.g., talking about eagerness to expand). Knowing that there’s a possible second major funder helps me understand why that tension might be there—depending on the possible major funder’s decision, it sounds like the effect of Forum-reader funding on the margin might range from ~”keeping the lights on” to “funding some expansion”?
We’re looking to raise another ~$200k for 2025, to cover our current two-person team plus expenses. We’d also be enthusiastic about expanding our efforts if there is donor interest.
What is QURI’s total budget for 2025? If I’m reading this correctly—that there’s currently a $200K funding gap for the fast-approaching calendar year—that is surprising information to me in light of what I assumed the total budget for a two-person org would be.
Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US “right-of-center”[1] policy work to GV, I would be somewhat surprised that this well-written post didn’t say that.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It’s generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and that seems to be the case here. However, where a funder is as critical to an ecosystem as GV is here, I think fairly high transparency about an unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to understand clearly whether the 800-pound gorilla funder will be closed to them.
- ^ I place this in quotes because the term is ambiguous.
Not that I expect the election administrators to be unsporting, but there should be an explicit norm that they do not vote after the evening of December 2, as they could not only snipe but maybe even cast a de facto tiebreaking vote on December 3 with inside knowledge. (I know of at least one EA-adjacent place where using inside information to one’s advantage is seen as fine, hence the desire to be clear here.)
There are non-animal-welfare reasons one might vote to ban slaughterhouses or factory farms in one’s city (while being more okay with them elsewhere). Having done ~zero research to approximate the median voter, I’d guess these sound like things with some potentially significant negative local externalities (adverse environmental effects, reduced property values, etc.). So you may have some NIMBY-motivated voters.
In addition, because the meat market is a regional or even national one, opponents cannot plausibly point to any effect of a localized slaughterhouse/factory farm ban on the prices that local voters pay at the grocery store. I think there’s probably a subset of voters who would vote yes for a measure if and only if it has no plausible economic effect on the prices they pay.
Finally, these cities are more progressive than the states in which they exist, and a state can almost always pre-empt any city legislation that the state political system doesn’t like. So I’d want to see evidence that the city voters weren’t too far out of step with the state median voter before updating too much on city-level results. (Unlike the states—which American political theory holds to pre-exist the Federal government and possess their own inherent sovereignty—cities and counties are generally creations of the states without anything like their own inherent sovereignty.)
Despite these benefits, many funders focus exclusively on supporting regranting organisations’ outgoing grants rather than funding the operational infrastructure that enables these organisations to provide holistic support.
Reading this post as someone not really in the animal-advocacy or grantmaking spaces, this sentence triggers the question: why do funders take that stance?[1]
Presumably they have some reason for thinking the grantmaking functions are above their bar but the holistic support functions are not. Without knowing their rationale, it is difficult to evaluate what is functionally an implied response to that rationale. Advocacy pieces are fine, but if I were considering a donation then I would want to be confident I understood both sides of the dispute rather than dismissing the views of “many funders” without an attempt at understanding them.
(Of course, if the funders won’t tell you what the rationale is, then there’s not much you can do to respond!)
- ^ It’s not clear where operational costs directly and essentially related to grantmaking (e.g., evaluating grants, accounting, etc.) fall into the dichotomy. I’ll assume for now that they are in the same category as the grants themselves.
It seems plausible that J/309/etc. advocates knew at some point that the initiatives were very unlikely to pass, and that low financial investment from that juncture onward was thus more a consequence of low public support earlier in the campaign season than a cause of low public support.
Does anyone have information that could help evaluate that possibility, such as longitudinal records of spending and polling outcomes?
Manifest was advertised on the Forum, and the controversial speakers were IIRC largely advertised and invited guests. Some of the talks were at least adjacent to the objected-to views.
That seems a significantly tighter connection than “someone in EA associated with someone who has some right wing views.”
What results were people expecting for Measure J & Ordinance 309 specifically?
My (not very informed) take is that most voters would see these kinds of city/county-level restrictions on production and processing as largely performative. (They would assume, probably correctly, that the CAFOs and slaughterhouses would just move elsewhere.) Given the broader revealed electoral zeitgeist (“It’s the economy, stupid”), I’m not surprised that otherwise potentially sympathetic voters would have little appetite for these measures if they perceived them as accepting local job and tax-revenue losses merely to force CAFOs or slaughterhouses to relocate to other cities/counties.
Polymarket beat legacy institutions at processing information, in real time and in general. It was just much faster at calling states, and more confident earlier on the correct outcome.
How much of that do you think was about what the legacy institutions knew vs. what they publicly communicated? The Polymarket hive mind doesn’t necessarily care about things like maintaining democratic institutions (like not making calls that could influence elections elsewhere with still-open polls) or long-term individual reputation (like having to walk the Florida call back in 2000). I don’t see those as weaknesses.
Upvoted, but I don’t think one could develop and even-handedly enforce a rule on community-health disputes that didn’t drive out content that (a) needed to be here, because it was very specifically related to this community or an adjacent one, and (b) called for action by this community. So I think those factors warrant treating community-health dispute content as frontpage content, even though it lets a lot of suboptimal content slip through.
I think you may have a point on “positions on EA issues” narrowly defined—but that is going to be a tough boundary to enforce. Once someone moves to the implied conclusion of “vote for X,” then commenters will understandably feel that all the reasons not to vote for X are fair commentary whether or not they involve “positions on EA issues.” [ETA: I say narrowly defined because content about how so-and-so is a fascist, or mentally unstable, or what have you is not exactly in short supply. I have little reason to believe that anyone is going to change their minds about such things from reading discussions on the Forum.]
There’s also a cost to having a bunch of partisan political content—the vast majority of which would swing in one direction for the US—showing up when people come to EA’s flagship public square. We have to work with whoever wins, and tying ourselves to one team or the other more than has already happened poses some considerable costs. There is much, much less of that broader risk with community-health disputes like Nonlinear (one can simply choose not to read them).
Maybe less so in EA than in other charities, but at the ~$100K point a hypothetical charity may rely more significantly on volunteer labor than the ~$1M version of that charity would. One could argue that the volunteer labor is a non-economic cost that should be factored into the cost-effectiveness analysis, or one could view it as essentially a freebie. From a counterfactual perspective, the correct answer will probably vary.
I’m hesitant to support giving the moderators license to decide which discussions of which candidates should get default front-page visibility. There are also possible legal implications to selective elevation of explicitly partisan political content to default visibility in light of EVF US’s status as a 501(c)(3) charity and the limitations on partisan political activity that come with that status.
Do you think there’s a difference between developmentally and otherwise appropriate engagement focused on younger people and problematic targeting? Your statement that the cringe-inducing activities would basically include “most early-stage college targeting” along with “any” targeting at the high school level implies that there may be some difference at the young adult level in your mind, but maybe not at the not-quite-adult level.
My usual approach on these sorts of questions is to broaden the question to include what kinds of stuff I would think appropriate for analogous altruistic/charitable movements, and then decide whether EA has any special features that justify a deviation from that baseline. If I deploy that approach, my baseline would be (e.g.) that there are certainly things that are inappropriate for under-20s, but also that one could easily extend a norm too broadly. Obviously, the younger the age in question, the less that would be appropriate—but I don’t think I’m left with a categorical bar on engagement directed at under-18s.
(Whether investing in under-20s is a strategically wise use of resources is a different question to me, but it does not bring up feelings of cringe.)
While I am not aware of any norms or consensus, I would be okay with that. My own view is that use of generative AI should be proactively disclosed where the AI could fairly be considered the primary author of the post/comment. I am unsure how much support this view has, though.
You may want to add something like [AI Policy] to the title to clue readers into the main subject matter and whether they’d like to invest the time to click on and read it. There’s the AI tag, but that doesn’t show up on the frontpage, at least on my mobile.