Ok great. Well I just want to re-emphasize the distinction again between “OP” and the people who work at OP. It’s not a homogenous blob of opinions, and AFAIK we didn’t fire anybody related to this, so a lot of the individuals who work there definitely agree with you/want to keep working with you on things and disagree with me.
Based on your read of their feelings and beliefs, which I sincerely trust is superior to my own (I don’t work out of the office or anything like that), there is empirically a chilling effect from my decisions. All I can say is that wasn’t what I was aiming for, and I’ll try to mitigate it if I can.
Dustin Moskovitz
Apologies, again, for putting words in your mouth. I was using a little gallows humor to try to break the tension. It didn’t work.
Oh sorry I wasn’t speaking precisely enough—I only meant you wouldn’t want them working with OP and would advise them not to. I didn’t mean to put words in your mouth and I agree they could help recruit a donor to work with another group.
I can confirm that Oliver dislikes us especially, and that other people dislike us as well.
I don’t think it’s true that no other donors exist for these areas. My understanding is Alexander and his colleagues are engaging some folks already and expect to get more inbounds now that this is better known.
It seems clear you actually do not want them to recruit donors for the grantees you’re focused on, which is ok, but there are also areas that have nothing to do with you.
Re: attack surface in my earlier comment, I actually meant attacks from EAs. People want to debate the borders, quite understandably. I have folks in my DMs as well as in the comments. Q: “Why did we not communicate more thoroughly on the forum?”
A: “Because we’ve communicated on the forum before”
I don’t think endorse vs. not endorse describes everything here, but it describes some of it. I do think I spend some energy on ~every cause area, and if I am lacking conviction, that is a harder expenditure from a resource I consider finite.
An example of a non-monetary cost where I have conviction: anxiety about potential retribution from our national political work. This is arguably not even EA (and not new), but it is a stressful side hustle we have this year. I had hoped it wouldn’t be a recurring thing, but here we are.
An example of a non-monetary cost where I have less conviction: the opportunity cost of funding insect welfare instead of chicken, cow, or pig welfare. I think I could be convinced, but I haven’t been yet and I’ve been thinking about it a long time! I’d much prefer to just see someone who actually feels strongly about that take the wheel. It is not a lot of $s in itself, but it keeps building, and there are an increasing number of smaller FAW areas like this.
I failed to forecast this issue for myself well when we were in an expansionary mindset, and I found that the further we went, the more each area on the margin had some element of this problem. I deferred for a really long time, until it became too much. Concurrently, I saw the movement becoming less and less appealing to other funders, and I believe these are related issues.
I’ve long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I’m only trying to get back into integrity with my own.
OP is not an abstraction, of course, and I hope you continue talking to the individuals you know and have known there.
The question is inseparable from the lack of other donors. Of course it is true right now, because they have no one else to refer the grants to.
My hope is that having other donors for OP would genuinely create governance independence as my apparent power comes from not having alternate funding sources*, not from structural control. Consequently, you and others lay blame on me even for the things we don’t do. I would be happy to leave the board even, and happy to expand it to diminish my (non-controlling) vote further. I did not want to create a GVF hegemony any more than you wanted one to exist. (If the future is a bunch of different orgs, or some particular “pure” org, that’s good by me too; I don’t care about OP aggregating the donors if others don’t see that as useful.)
But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don’t want to wake up anymore to somebody I personally loathe getting platformed only to discover I paid for the platform. That fact matters to me.
* Notably just for the “weird” stuff. We do successfully partner with other donors now! I don’t get in their way at all, as far as I know.
That could well be, but my experience was having another foundation, like FTX, didn’t insulate me from reputation risks either. I’m just another “adherent of SBF’s worldview” to outsiders.
I’d like to see a future OP that is not synonymous with GVF, because we’re just one of the important donors instead of THE important donor, and having a division of focus areas currently seems viable to me. If other donors don’t agree or if staff behaves as if it isn’t true, then of course it won’t happen.
+1 to Alexander’s POV
>> In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
I agree people are paraphrasing me like this. Let’s go back to the quote I affirmed: “Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari’s public reputation in an adverse way, or are generally “weird” in a way that might impose more costs on Dustin and Cari.”
I read the part after “or” as extending the frame beyond reputation risks, and I was pleased to see that and chose to engage with it. The example in my comment is not about reputation. Later comments from Oliver seem to imply he really did mean just PR risk so I was wrong to affirm this.
If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times, and people keep ignoring those comments and projecting “PR risk” onto everything. I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all, and I’m going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here.] I would appreciate it if people did not paraphrase me from these comments and instead used actual quotes.
>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.
Indeed, “if I have the time” is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don’t sufficiently trust OP’s judgment either.
There is no objective “view from EA” that I’m standing against as much as people portray it that way here; just a complex jumble of opinions and path dependence and personalities with all kinds of flaws.
>> Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.
So with that in mind this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
After much introspection, I came to the conclusion that I prefer to leave potential value on the table than persist in that situation. I don’t want to be responsible for that community anymore, even if it seems to have positive EV.
And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about and instead your conclusion is “no, it is the constituents who are wrong!”
I’m not detailing specific decisions for the same reason I want to invest in fewer focus areas: additional information is used as additional attack surface area. The attitude in EA communities is “give an inch, fight a mile”. So I’ll choose to be less legible instead.
It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here. I’m trying to pick my battles more, since I feel we picked too many. In pulling back, we focused on the places somewhere in the intersection of low conviction + highest pain potential (again, beyond “reputational risks”, which narrows the mind too much on what is going on here).
>> In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
I agree with the spirit of the way this is written, and not with the way it is practiced. I wrote more about this here. If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (especially vis-à-vis decoupling). Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
My sense is that such alignment is achievable and will result in a more coherent and robust rationality community, which does not need to be inextricably linked to all the other work that OP and EA does.
I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, the idea that a different world could have created a much larger coalition), but don’t really have a good way of resolving that disconnect further. Evidently, our intuitions often lead to different conclusions.
Can you say more about that? You think our prior actions caused additional funding from Vitalik, Jed, and Jaan?
We’re still going to be funding a lot of weird things. I just think we got to a place where the capital felt ~infinite and we assumed all the other inputs were ~infinite too. AI safety feels like it deserves more of those resources from us, specifically, in this period of time. I sincerely hope it doesn’t always feel that way.
“PR risk” is an unnecessarily narrow mental frame for why we’re focusing.
Risky things are risky in multiple ways. Diffusing across funders mitigates some of them, some of the time.
AND there are other bandwidth issues: energy, attention, stress, political influence. Those are more finite than capital.
Yes, I’m explicitly pro-funding by others. Framing the costs as “PR” limits the way people think about mitigating costs. It’s not just “lower risk” but more shared responsibility and energy to engage with decision making, persuading, defending, etc.
My view is that rationalists are the force that actively makes room for it (via decoupling norms), even in “guest” spaces. There is another post on the forum from last week that seems like a frankly stark example.
I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.