I wanted to share this update from Good Ventures (Cari and Dustin’s philanthropy), which seems relevant to the EA community.
Tl;dr: “while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have decided to exit a handful of sub-causes (amounting to less than 5% of our annual grantmaking), and we are no longer planning to expand into new causes in the near term by default.”
A few follow-ups on this from an Open Phil perspective:
I want to apologize to directly affected grantees (who’ve already been notified) for the negative surprise here, and for our part in not better anticipating it.
While this represents a real update, we remain deeply aligned with Good Ventures (they’re expecting to continue to increase giving via OP over time), and grateful for how many of the diverse funding opportunities we’ve recommended that they’ve been willing to tackle.
An example of a new potential focus area that OP staff had been interested in exploring that Good Ventures is not planning to fund is research on the potential moral patienthood of digital minds. If any readers are interested in funding opportunities in that space, please reach out.
Good Ventures has told us they don’t plan to exit any overall focus areas in the near term. But this update is an important reminder that such a high degree of reliance on one funder (especially on the GCR side) represents a structural risk. I think it’s important to diversify funding in many of the fields Good Ventures currently funds, and that doing so could make the funding base more stable both directly (by diversifying funding sources) and indirectly (by lowering the time and energy costs to Good Ventures from being such a disproportionately large funder).
Another implication of these changes is that going forward, OP will have a higher bar for recommending grants that could draw on limited Good Ventures bandwidth, and so our program staff will face more constraints in terms of what they’re able to fund. We always knew we weren’t funding every worthy thing out there, but that will be even more true going forward. Accordingly, we expect marginal opportunities for other funders to look stronger going forward.
Historically, OP has been focused on finding enough outstanding giving opportunities to hit Good Ventures’ spending targets, with a long-term vision that once we had hit those targets, we’d expand our work to support other donors seeking to maximize their impact. We’d already gotten a lot closer to GV’s spending targets over the last couple of years, but this update has accelerated our timeline for investing more in partnerships and advising other philanthropists. If you’re interested, please consider applying or referring candidates to lead our new partnerships function. And if you happen to be a philanthropist looking for advice on how to invest >$1M/year in new cause areas, please get in touch.
There are 8.1 billion people on the planet and afaict 8,099,999,999 of them donate less to my favorite causes & orgs than @Dustin Moskovitz. That was true before this update and it will remain true after it. Like everyone else I have elaborate views on how GV/OP should spend money/be structured etc but let the record also show that I appreciate the hell out of Dustin & Cari, we got so lucky 🥲
It’s also noteworthy that Dustin/GV/OP is both by far the largest EA donor and the one EA folks most often single out to express frustration & disappointment about. I get why that is but you gotta laugh
I’ve never seen anyone express frustration or disappointment about Dustin, except for Habryka. However, Habryka seems to be frustrated with most people who fund anything / do anything that’s not associated with Lightcone and its affiliates, so I don’t know if he should count as expressing frustration at Dustin in particular.
Are you including things like criticizing OP for departing grantees too quickly and for not departing grantees quickly enough, or do you have something else in mind?
I view Dustin and OP as quite separate, especially before Holden’s departure, so that might also explain our different experience.
There is a huge amount of work I am deeply grateful for that as far as I can tell is not “associated with Lightcone and its affiliates”. Some examples:
The historical impact of the Future of Humanity Institute
The historical impact of MIRI
Gwern’s writing and work
Joe Carlsmith’s writing
Basically all of Holden’s intellectual output (even if I disagree with his leadership of OP and EA in a bunch of very important ways)
Basically all of Owen Cotton-Barratt’s contributions (bar the somewhat obvious fuckups that came out last year, though I think they don’t outshine his positive contributions)
John Wentworth’s contributions
Ryan Greenblatt’s and Buck Shlegeris’s contributions
Paul Christiano’s and Mark Xu’s research (I have disagreements with Paul on EA leadership and governance things, but I think his research overall has been great)
Rohin Shah’s many great contributions over the years
More broadly, the DeepMind safety team
Zach Stein-Perlman’s work on carefully analyzing lab policies and commitments
There are also many others that I am surely forgetting. There is an enormous number of extremely talented, moral, and smart people involved in the extended rationality/EA/AI-x-risk ecosystem, and I am deeply grateful to many of them. It is rare that my relationship to someone is purely positive and completely devoid of grievance, as I think is normal for relationships, but there are many people for whom my assessment of their good vastly outshines the grievances I have.
I can confirm that Oliver dislikes us especially, and that other people dislike us as well.
(“Disliking” feels a bit shallow, though I think a fair gloss. I have huge respect for a huge number of people at Open Philanthropy as well as many of the things you’ve done, in addition to many hard feelings and grievances.
It does, sadly, seem to me that we are in a world where getting many things right, but some things wrong, can still easily flip the sign on the impact of one’s actions and make it possible to cause large amounts of harm, which is my feeling with regard to OP.
I feel confused about what exact emotional and social relationship that should cause me to have with OP. I have similar feelings about e.g. many people at Anthropic. In many respects they seem so close to having an enormous positive impact, they think carefully through many important considerations, and try pretty hard to create an environment for good thinking, but in expected impact space they are so far away from where I wish they were.)
Apologies, again, for putting words in your mouth. I was using a little gallows humor to try to break the tension. It didn’t work.
Lol, it did totally work and I quite liked your comment and laughed about it. I just wanted to clarify since IDK, it does all still feel a bit high-stakes and being clear seems valuable, but I think your comment was great and did indeed ease a bunch of tension in me.
Ok great. Well I just want to re-emphasize the distinction again between “OP” and the people who work at OP. It’s not a homogeneous blob of opinions, and AFAIK we didn’t fire anybody related to this, so a lot of the individuals who work there definitely agree with you/want to keep working with you on things and disagree with me.
Based on your read of their feelings and beliefs, which I sincerely trust is superior to my own (I don’t work out of the office or anything like that), there is empirically a chilling effect from my decisions. All I can say is that wasn’t what I was aiming for, and I’ll try to mitigate it if I can.
Thanks, I appreciate that. I might message you at random points in the coming months/years with chilling effects I notice (in as much as that won’t exacerbate them), and maybe ideas to mitigate them.
I won’t expect any response or much engagement, I am already very grateful for the bandwidth you’ve given me here, as one of the people in the world with the highest opportunity cost.
Sometimes I wish we had a laughing emoji here...
I’m pleasantly surprised at the DeepMind alignment team being the only industry team called out! I’m curious what you think we’re doing right?
You don’t appear to be majorly used for safety-washing
You don’t appear to be under the same amount of crazy NDAs as I’ve seen from OpenAI and Anthropic
You don’t seem to have made major capabilities advances
You generally seem to take the hard part of the problem more seriously, and don’t seem institutionally committed, in the way Anthropic and OpenAI seem to me, to only looking at approaches that are compatible with scaling as quickly as possible (this isn’t making a statement about what Google, or DeepMind at large, is doing; it’s just saying that the safety team in particular seems not to be committed this way)
To be clear, I am concerned many of these things will get worse with the DeepMind/Brain merger, and a lot of my datapoints are from before then, but I think the track record overall is still quite good.
I’d say from a grantee pov, which is I guess a large fraction of the highly-engaged EA community (eg commenters on a post like this), Dustin/GV/OP have mostly appeared as an aggregate blob—“where the money comes from”. And I’ve heard so much frustration & disappointment about OP over the years! (Along with a lot of praise of course.) That said, I get the spirit of your comment, I wouldn’t want to overstate how negative people are about Dustin or OP.
And for the record I’ve spent considerable energy criticizing OP myself, though not quite so far as “frustration” or “disappointment”.
As the author of the comment linked for “criticizing OP for departing grantees too quickly,” I’d note that (in my pre-Forum days) I expressed concern that GiveWell was ending the Standout Charity designation too abruptly. So I don’t see my post here expressing potential concern about OP transitioning out of these subareas as evidence of singling out OP for criticism.
Thanks for the update! Are there any plans to release the list of sub areas? I couldn’t see it in this post or the blog post, and it seems quite valuable for other funders, small donors (like me!) and future grantees/org founders to know which areas might now be less well funded.
Here are the ones I know about: wild animal welfare (including averting human-caused harms), all invertebrate welfare (including farmed shrimp), digital minds, gene editing*.
I think this is close to all of them in the animal space. I believe there are also some things losing funding in other areas (e.g., see Habryka’s comments), but I’m less familiar with that community.
*I don’t know about gene editing for humans, like for malaria.
[edited to fix typo]
Good to know, what’s the source of this info?
Edit: I retracted the below because I think it is unkind and wasn’t truth-seeking enough. I apologise if I caused too much stress to @Dustin Moskovitz or @Alexander_Berger, even if I have disagreements with GVF/OP about things I very much appreciate what both of you are doing for the world, let alone ‘EA’ or its surrounding community.
Wait what, we’re (or GV is) defunding animal stuff to focus more on AI stuff? That seems really bad to me, I feel like ‘PR’ damage to EA is much more coming from the ‘AI eschaton’ side than the ‘help the animals’ side (and also that interventions on animal welfare are plausibly much more valuable than AI).[1]
[1] e.g. here and here
No, the farm animal welfare budget is not changing, and some of the substreams GV is exiting (or not entering) are on the AI side. So any funding from sub-strategies that GV is no longer funding within FAW would be reallocated to other strategies within FAW (and as Dustin notes below, hopefully the strategies that GV will no longer fund can be taken forward by others).
Yep was about to ask the same question, it’s such a glaring omission I imagine there might be reasons it has not been shared?
Even if there are it might be good to know what those reasons for not sharing are, otherwise it might draw even more attention.
I think some critiques of GVF/OP in this comments section could have been made more warmly and charitably.
The main funder of a movement’s largest charitable foundation is spending hours seriously engaging with community members’ critiques of this strategic update. For most movements, no such conversation would occur at all.
Some critics in the comments are practicing rationalist discussion norms (high decoupling & reasoning transparency) and wish OP’s communications were more like that too. However, it seems there’s a lot we don’t know about what caused GVF/OP leadership to make this update. Dustin seems very concerned about GVF/OP’s attack surface and conserving the bandwidth of their non-monetary resources. He’s written at length about how he doesn’t endorse rationalist-level decoupling as a rule of discourse. Given all of this, it’s understandable that from Dustin’s perspective, he has good reasons for not being as legible as he could be. Dishonest outside actors could quote statements or frame actions far more uncharitably than anything we’d see on the EA Forum.
Dustin is doing the best he can to balance between explaining his reasoning and adhering to legibility constraints we don’t know about in order to engage with the rest of the community. We should be grateful for that.
I generally agree with the spirit of empathy in this comment, but I also think you may be misinterpreting Dustin in a similar way to how others are. My understanding is that Dustin is not primarily driven by how other actors might use his funding / public comments against him. Instead, it is something like the following:
“Dustin doesn’t want to be continually funding stuff that he doesn’t endorse, because he thinks that doing things well and being responsible for the consequences of your actions is intrinsically important. He is a virtue ethicist and not a utilitarian in this regard. He feels that OP has funded things he doesn’t endorse enough times in enough areas to not want to extend blanket trust, and thus feels more responsibility than before to evaluate cases himself, to make sure that both individual grants and higher-level funding strategies are aligned with his values. He believes in doing fewer things well than more things poorly, which is why some areas are being cut.”
Obviously this could be wrong and I don’t want Dustin to feel any obligation to confirm/not confirm it. I’m writing it because I’m fairly confident that it’s at least more right than the prevailing narrative currently in the comments, and because the reasoning makes a fair amount of sense to me (and much more sense than the PR-based narrative that many are currently projecting).
(I don’t want to spend too much time psychologizing here, though I do think a root cause analysis is useful, so I will comment a bit, but will bow out if this gets too much)
I feel like this doesn’t really match with Dustin’s other comments about repeatedly emphasizing non-financial concrete costs. I think Dustin’s model is closer to “every time I fund something I don’t endorse, I lose important forms of social capital, political capital, and feel a responsibility to defend and explain my choices towards others, for which I only have limited bandwidth and don’t have time or energy for. As a result, I am restricting my giving to things that I feel excited to stand behind, which is a smaller set, and where I feel good about the tradeoffs of financial and non-financial costs”.
Re: attack surface in my early comment, I actually meant attacks from EAs. People want to debate the borders, quite understandably. I have folks in my DMs as well as in the comments. Q: “Why did we not communicate more thoroughly on the forum”
A: “Because we’ve communicated on the forum before”
I don’t think endorse vs. not endorse describes everything here, but it describes some of it. I do think I spend some energy on ~every cause area, and if I am lacking conviction, that is a harder expenditure from a resource I consider finite.
An example of a non-monetary cost where I have conviction: anxiety about potential retribution from our national political work. This is arguably not even EA (and not new), but it is a stressful side hustle we have this year. I had hoped it wouldn’t be a recurring thing, but here we are.
An example of a non-monetary cost where I have less conviction: the opportunity cost of funding insect welfare instead of chicken, cow, or pig welfare. I think I could be convinced, but I haven’t been yet and I’ve been thinking about it a long time! I’d much prefer to just see someone who actually feels strongly about that take the wheel. It is not a lot of $s in itself, but it keeps building, and there are an increasing number of smaller FAW areas like this.
I failed to forecast this issue for myself well when we were in an expansionary mindset, and I found that the further we went, the more each area on the margin had some element of this problem. I deferred for a really long time, until it became too much. Concurrently, I saw the movement becoming less and less appealing to other funders, and I believe these are related issues.
That’s brilliant, Ariel, and you articulate this far better and in a more forum-appropriate way than my attempt at similar! I think in discussion, whatever our own norms and preferences, we are likely to get more meaningful engagement if we bend at least somewhat towards those of the person we are talking with.
I find myself particularly disappointed in this as I was working for many years on projects that were intended to diversify the funding landscape, but Open Phil declined to fund those projects, and indeed discouraged me from working on them multiple times (most notably SFF and most recently Lightspeed Grants).
I think Open Phil could have done a much better job at using the freedom it had to create a diverse funding landscape, and I think Open Phil is largely responsible for the degree to which the current funding landscape is as centralized as it currently is.
I’m surprised to hear you say SFF and Lightspeed were trying to diversify the funding landscape, AND that it was bad that OpenPhil didn’t fund them. My understanding was that there was already another donor (Jaan Tallinn) who wanted to make large donations, and you were trying to help them. To me, it seems natural for Jaan to fund these, and that this is great because it results in a genuinely independent donor. OpenPhil funding it feels more like a regranting program, and I don’t see how that genuinely diversifies the landscape in the longterm (unless eg OpenPhil funded a longterm endowment for such a program that they can’t later take away). Was the ask for them to fund the operations, or to add to the pool of money donated? Was the idea that, with more funding, these programs could be more successful and attract more mega donors from outside the community?
The point of Lightspeed Grants was explicitly to create a product that would allow additional funders (beyond Jaan) to distribute funding to important causes.
It also had the more immediate positive effect of increasing the diversity and impact of Jaan’s funding, though that’s not where I expected the big wins to come from and not my primary motivation for working on it. I still feel quite excited about this, but stopped working on it in substantial parts because Open Phil cut funding for all non-LW programs to Lightcone.
The ask here was for development cost and operations, not for any regranting money.
Basically.
To be clear, I think it’s not a crazy decision for Open Phil to think that Jaan is in a better position to fund our SFF and Lightspeed work (though not funding us for Lightspeed did have a pretty substantial effect on our ability to work on it). The bigger effect came from both the implicit and explicit discouragement of working on SFF and Lightspeed over the years, mostly picked up from random conversations with Open Phil staff and people very close to Open Phil staff.
I generally don’t have a ton of bandwidth with my grantmakers at Open Phil, but during our last funding request around 14 months ago, I got the strong sense they thought working on SFF and Lightspeed was a waste of time and money (and indeed, they gave us approximately no funding for Lightspeed when we asked for money for it then). Also, to be clear, they never got to the point of asking us how much money we wanted, or how much it would cost, and just kind of told us out of the blue, after 6 months of delays, that they weren’t interested in funding any non-LW projects, when I was still expecting to communicate more of our plans and needs to them. So my best guess is they never actually considered it, or it was dismissed at a pretty early stage.
I feel quite aligned with you spiritually on this topic: we both want more diversified funding for these causes, and see Good Ventures-as-sole-funder both getting in the way of that outcome and creating a variety of second order problems. I’m sorry to hear that OP may have actively dissuaded you from pursuing diversification. Folks like you, as well as OP staff, will now spend more of their time pursuing other funding sources, and I expect that to be strongly positive for the causes and the general EA project over the long run.
In the short run, I agree we have recently lost valuable time and energy via this path, and I regret that and accept a good deal of responsibility. When FTX showed up on the scene, it felt like plurality was happening, and as soon as they went away, I became quite fixated on this topic again.
Thanks for your thoughts, Dustin. I think it was a mistake at the time—and I said as much—to think that FTX and OpenPhil represented sufficient plurality. But I definitely didn’t think FTX would blow up as it did and given that people can only do so many things, it’s understandable that people didn’t focus enough on donor diversification.
I didn’t think of it as sufficient, but I did think of it as momentum. “Tomorrow, there will be more of us” doesn’t feel true anymore.
Thanks for clarifying! That sounds like a pretty unpleasant experience from a grantee perspective, I’m sorry that happened.
Is it possible to elaborate on how certain grants but not others would unusually draw on GV’s bandwidth? For example, what is it about digital minds work that draws so much more bandwidth than technical AI safety grants? Personally I find that this explanation doesn’t actually make any sense as offered without more detail.
I think the key quote from the original article is “In the near term, we want to concentrate our giving on a more manageable number of strategies on which our leaders feel more bought in and with which they have sufficient time and energy to engage.” Why doesn’t Good Ventures just want to own the fact that they’re just not bought in on some of these grant areas? “Using up limited capacity” feels like a euphemism.
My best guess at the situation is that the “limited capacity” is a euphemism for “Dustin/Cari don’t think these things are valuable and Open Phil doesn’t have time to convince them of their value”.
Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari’s public reputation in an adverse way, or are generally “weird” in a way that might impose more costs on Dustin and Cari.
I generally believe they are all valuable (in expectation anyway).
Your second statement is basically right, though my personal view is they impose costs on the movement/EA brand and not just us personally. Digital minds work, for example, primes the idea that our AI safety concerns are focused on consciousness-driven catalysts (“Terminator scenarios”), when in reality that is just one of a wide variety of ways AI can result in catastrophe.[What I thought I was affirming here seems to be universally misunderstood by readers, so I’m taking it back.]
I hope to see everything funded by a more diverse group of actors, so that their dollar and non-dollar costs are more distributed. Per my other comment, I believe you (Oliver) want that too.
I would be very surprised if digital minds work of all things would end up PR-costly in relevant ways. Indeed, my sense is many of the “weird” things that you made a call to defund form the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem, and I expect will continue to be the attractor for both funding and talent to many of the world’s most important priorities.
An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path. Not doing so because some people will walk away with a very shallow understanding of “consciousness” does not seem to me like a good reason to not do that work.
I think you absolutely have a right to care and value your personal reputation, but I do not think your judgement of what would hurt the “movement/EA brand” is remotely accurate here.
I think something worth noting is that most (all?) of the negative PR on EA over the last year has focused on areas that will continue to be funded. The areas that were cut have, for the most part, not drawn negative media coverage to my knowledge (maybe Wytham Abbey is an exception — not sure if the sale was related to this though).
Of course, these areas could still be PR risks, and the other areas could be worth funding in spite of the PR risks.
Edit: For disagree voters, I’m curious why you disagree? A quick Google for negative coverage of OpenPhil or EA turns up areas that OpenPhil has not pulled out of, at least to my knowledge. I’m not arguing that they shouldn’t have made this determination, but I’d be interested in counter-examples if you disagree (negative media coverage of EA work in an area they are no longer granting in). I’m sure there are some, but my read is that most of the negative media covers work they are still doing. I see some minor offhand remarks about digital sentience, but negative media is overwhelmingly focused on AI x-risk work or FTX/billionaire philanthropy.
Yes, there will definitely still be a lot of negative attention. I come from the “less is less” school of PR.
And of course a lot less positive attention. Indeed a very substantial fraction of the current non-OP funding (like Vitalik’s and Jed’s and Jaan’s funding) is downstream of these “weirder” things. “Less is less” requires there to be less, but my sense is that your actions substantially decrease the total support that EA and adjacent efforts will receive both reputationally and financially.
Can you say more about that? You think our prior actions caused additional funding from Vitalik, Jed, and Jaan?
We’re still going to be funding a lot of weird things. I just think we got to a place where the capital felt ~infinite and we assumed all the other inputs were ~infinite too. AI safety feels like it deserves more of those resources from us, specifically, in this period of time. I sincerely hope it doesn’t always feel that way.
I don’t know the full list of sub-areas, so I cannot speak with confidence, but the ones that I have seen defunded so far seem to me like the kind of things that attracted Jed, Vitalik and Jaan. I expect their absence will atrophy the degree to which the world’s most ethical and smartest people want to be involved with things.
To be more concrete, I think funding organizations like the Future of Humanity Institute, LessWrong, MIRI, Joe Carlsmith, as well as open investigation of extremely neglected questions like wild animal suffering, invertebrate suffering, decision theory, mind uploads, and macrostrategy research (like the extremely influential Eternity in Six Hours paper) played major roles in people like Jaan, Jed and Vitalik directing resources towards things I consider extremely important, and Open Phil has at points in the past successfully supported those programs, to great positive effect.
In as much as the other resources you are hoping for are things like:
more serious consideration from the world’s smartest people,
or more people respecting the integrity and honesty of the people working on AI Safety,
or the ability for people to successfully coordinate around the safe development of extremely powerful AI systems,
I highly doubt that the changes you made will achieve this, and am indeed reasonably confident they will harm them (in as much as your aim is to just have fewer people get annoyed at you, my guess is you still have chosen a path of showing vulnerability to reputational threats that will overall increase the unpleasantness of your life, but I have less of a strong take here).
In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
Also, to be clear, my current (admittedly very limited sense) of your implementation, is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas. Lightcone Infrastructure obviously works on AI Safety, but apparently not in a way that would allow Open Phil to grant to us (despite what seems to me undeniably large effects on thousands of people working on AI Safety, in myriad ways).
Based on the email I have received, and things I’ve picked up through the grapevine, you did not implement something that is best described as “reducing the number of core priority areas” but instead is better described as “blacklisting various specific methods, associations or conclusions that people might arrive at in the pursuit of the same aims as you have”. That is what makes me much more concerned about the negative effects here.
The people I know who are working on digital minds are clearly doing so because of their models of how AI will play out, and this is their best guess at the best way to make the outcomes of that better. I do not know what they will find in their investigation, but it sure seems directly relevant to specific technical and strategic choices we will need to make, especially when pursuing projects like AI Control as opposed to AI Safety.
AI risk is too complicated of a domain to enshrine what conclusions people are allowed to reach. Of course, we still need to have standards, but IMO those standards should be measured in intellectual consistency, accurate predictions, and a track record of improving our ability to take advantage of new opportunities, not in distant second-order effects on the reputation of one specific actor, and their specific models about political capital and political priorities.
It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here. I’m trying to pick my battles more, since I feel we picked too many. In pulling back, we focused on the places somewhere in the intersection of low conviction + highest pain potential (again, beyond “reputational risks”, which narrows the mind too much on what is going on here).
>> In general, I think people value intellectual integrity a lot, and value standing up for one’s values. Building communities that can navigate extremely complicated domains requires people to be able to follow arguments to their conclusions wherever that may lead, which over the course of one’s intellectual career practically always means many places that are socially shunned or taboo or reputationally costly in the way that seems to me to be at the core of these changes.
I agree with the way this is written spiritually, and not with the way it is practiced. I wrote more about this here. If the rationality community wants carte blanche in how they spend money, they should align with funders who sincerely believe more in the specific implementation of this ideology (esp. vis-à-vis decoupling). Over time, it seemed to become a kind of purity test to me, inviting the most fringe of opinion holders into the fold so long as they had at least one true+contrarian view; I am not pure enough to follow where you want to go, and prefer to focus on the true+contrarian views that I believe are most important.
My sense is that such alignment is achievable and will result in a more coherent and robust rationality community, which does not need to be inextricably linked to all the other work that OP and EA does.
I find the idea that Jaan/Vitalik/Jed would not be engaged in these initiatives if not for OP pretty counterintuitive (and perhaps more importantly, that a different world could have created a much larger coalition), but don’t really have a good way of resolving that disconnect further. Evidently, our intuitions often lead to different conclusions.
And to get a little meta, it seems worth pointing out that you could be taking this whole episode as an empirical update about how attractive these ideas and actions are to constituents you might care about and instead your conclusion is “no, it is the constituents who are wrong!”
>> Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time.
Indeed, if I have the time is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default. We started by trying to take a scalpel to the problem, and I could not tie initial impressions at grant time to those outcomes well enough to feel that was a good solution. Empirically, I don’t sufficiently trust OP’s judgement either.
There is no objective “view from EA” that I’m standing against as much as people portray it that way here; just a complex jumble of opinions and path dependence and personalities with all kinds of flaws.
>> Also, to be clear, my current (admittedly very limited) sense of your implementation is that it is more of a blacklist than a simple redirecting of resources towards fewer priority areas.
So with that in mind this is the statement that felt like an accusation of lying (not an accusation of a history of lying), and I think we have arrived at the reconciliation that doesn’t involve lying: broad strokes were pragmatically needed in order to sufficiently reduce the priority areas that were causing issues. I can’t know all our grantees, and my estimation is I can’t divorce myself from responsibility for them, reputationally or otherwise.
After much introspection, I came to the conclusion that I prefer to leave potential value on the table than persist in that situation. I don’t want to be responsible for that community anymore, even if it seems to have positive EV.
(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)
I do think the top-level post could have done a better job at communicating the more blacklist nature of this new policy, but I greatly appreciate you clarifying that more in this thread (and also would have not described what’s going on in the top-level post as “lying”).
Your summary here also seems reasonable, based on my current understanding, though of course the exact nature of the “broad strokes” is important to be clear about.
Of course, there is lots of stuff we continue to disagree on, and I will again reiterate my willingness to write back and forth with you, or talk with you, about these issues as much as you are interested, but don’t want to make you feel like you are stuck in a conversation that realistically we are not going to make that much progress on in this specific context.
I definitely think some update of that type is appropriate, our discussion just didn’t go that direction (and bringing it up felt a little too meta, since it takes the conclusion of the argument we are having as a given, which in my experience is a hard thing to discuss at the same time as the object level).
I expect in a different context where your conclusions here aren’t the very thing we are debating, I will concede the cost of you being importantly alienated by some of the work I am in favor of.
Though to be clear, I think an important belief of mine, which I am confident the vast majority of readers here will disagree with, is that the aggregate portfolio of Open Phil and Good Ventures is quite bad for the world (especially now, given the updated portfolio).
As such, it’s unclear to me what I should feel about a change where some of the things I’ve done are less appealing to you. You are clearly smart and care a lot about the same things as I care about, but I also genuinely think you are causing pretty huge harm for the world. I don’t want to alienate you or others, and I would really like to maintain good trade relationships in as much as that is possible, since we clearly have identified very similar crucial levers in the world, and I do not want to spend our resources in negative-sum conflict.
I still think hearing that the kind of integrity I try to champion and care about did fail to resonate with you, and failed to compel you to take better actions in the world, is crucial evidence that I care a lot about. You clearly are smart and thoughtful about these topics and I care a lot about the effect of my actions on people like you.
(This comment overall isn’t obviously immediately relevant, and probably isn’t worth responding to, but I felt bad having my previous comment up without giving this important piece of context on my beliefs)
Can you elaborate on this? Your previous comments explain why you think OP’s portfolio is suboptimal, but not why you think it is actively harmful. It sounds like you may have written about this elsewhere.
My experience of reading this thread is that it feels like I am missing essential context. Many of the comments seem to be responding to arguments made in previous, perhaps private, conversations. Your view that OP is harmful might not be immediately relevant here, but I think it would help me understand where you are coming from. My prior (which is in line with your prediction that the vast majority of readers would disagree with your comment) is that OP is very good.
He recently made this comment on LessWrong, which expresses some of his views on the harm that OP causes.
Sorry, maybe I missed something, where did I imply you have a history of lying? I don’t currently believe that Open Phil or you have a history of lying. I think we have disagreements on dimensions of integrity beyond that, but I think we both care deeply about not lying.
I don’t really know what you mean by this. I don’t want carte blanche in how I spend money. I just want to be evaluated on my impact on actual AI risk, which is a priority we both share. You don’t have to approve of everything I do, and indeed I think allowing people to choose the means by which they achieve a long-term goal is one of the biggest reasons for historical EA philanthropic success (as well as for a lot of the best parts of Silicon Valley).
A complete blacklist of a whole community seems extreme, and rare, even for non-EA philanthropists. Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time. Don’t blacklist work associated with a community on the basis of a disagreement about its optimal structure. You absolutely do not have to be part of a rationality community to fund it, and if you are right about its issues, that will be reflected in its lack of impact.
I don’t really think this is a good characterization of the rationality community. It is true that the rationality community engages in heavy decoupling, where we don’t completely dismiss people on one topic because they have some socially shunned opinions on another topic, but that seems very importantly different from inviting everyone who fits that description “into the fold”. The rationality community has a very specific epistemology and is overall, all things considered, extremely selective in who it assigns lasting respect to.
You might still object to that, but I am not really sure what you mean by the “inviting into the fold” here. I am worried you have walked away with some very skewed opinions through some unfortunate tribal dynamics, though I might also be misunderstanding you.
As an example, I think OP was in a position to substantially reduce the fallout from FTX, both by a better follow-up response, and by having done more things in advance to prevent things like FTX.
And indeed as far as I can tell the people who had the biggest positive effect on the reputation of the ecosystem in the context of FTX are the ones most negatively impacted by these changes to the funding landscape.
It doesn’t seem very hard to imagine different ways that OP grantmaking could have substantially changed whether FTX happened in the first place, or at least the follow-up response to it.
I feel like an underlying issue here is something like “you feel like you have to personally defend or engage with everything that OP funds”.
You of course know better what costs you are incurring, but my sense is that you can just give money to things you think are good for the world, and this will overall result in more political capital, and respect, than the world where you limit yourselves to only the things you can externally justify or expend other resources on defending. The world can handle billionaires spending billions of dollars on yachts and luxury expenses in a way that doesn’t generally influence their other resources much, which I think suggests the world can handle billionaires not explaining or defending all of their giving-decisions.
My guess is there are lots of things at play here that I don’t know about or understand, and I do not want to contribute to the degree to which you feel like every philanthropic choice you make comes with social costs and reduces your non-financial capital.
I don’t want to drag you into a detailed discussion, though know that I am deeply grateful for some of your past work and choices and donations, and if you did ever want to go into enough detail to make headway on these disagreements, I would be happy to do so.
You and I disagree on this, but it feels important to say we disagree on this. To me LessWrong has about a similar amount of edgelordism to the forum (not much).
Who are these “fringe opinion holders” brought “into the fold”? To the extent that this is maybe a comment about Hanania, it seems pretty unfair to blame that on rationality. Manifest is not a rationalist event significantly more than it is an EA one (if anything most of the money was from OP, not SFF, right?). To the extent this is a load-bearing part of your decision-making, it just seems not true: rationalism isn’t getting more fringe, and rationality seems to have about as much edginess as EA.
My view is that rationalists are the force that actively makes room for it (via decoupling norms), even in “guest” spaces. There is another post on the forum from last week that seems like a frankly stark example.
I cannot control what the EA community chooses for itself norm-wise, but I can control whether I fuel it.
Anyone know what post Dustin was referring to? EDIT: as per a DM, probably this one.
I didn’t mean to argue against the Digital Minds work here and explicitly see them as within the circle of moral concern. However, I believe that a different funder within the EA community would still mitigate the costs I’m talking about tactically. By bringing up the topic there, I only meant to say this isn’t all about personal/selfish ends from my POV (*not* that I think it nets out to being bad to do).
It is not a lot of money to fund at this stage as I understand it, and I hope to see it funded by someone who will also engage with the intellectual and comms work. For GV, I feel more than fully occupied with AI Safety.
Appreciate you engaging thoughtfully with these questions!
I’m slightly confused about this specific point—it seems like you’re saying that work on digital minds (for example) might impose PR costs on the whole movement, and that you hope another funder might have the capacity to fund this while also paying a lot of attention to the public perception.
But my guess is that other funders might actually be less cautious about the PR of the whole movement, and less invested in comms that don’t blow back on (for example) AI safety.
Like, personally I am in favour of funder diversity but it seems like one of the main things you lose as things get more decentralised is the ability to limit the support that goes to things that might blow back on the movement. To my taste at least, one of the big costs of FTX was the rapid flow of funding into things that looked (and imo were) pretty bad in a way that has indirectly made EA and OP look bad. Similarly, even if OP doesn’t fund things like Lighthaven for maybe-optics-ish reasons, it still gets described in news articles as an EA venue.
Basically, I think better PR seems good, and more funding diversity seems good, but I don’t expect the movement is actually going to get both?
(I do buy that the PR cost will be more diffused across funders though, and that seems good, and in particular I can see a case for preserving GV as something that both is and seems reasonable and sane, I just don’t expect this to be true of the whole movement)
“PR risk” is an unnecessarily narrow mental frame for why we’re focusing.
Risky things are risky in multiple ways. Diffusing across funders mitigates some of them, some of the time.
AND there are other bandwidth issues: energy, attention, stress, political influence. Those are more finite than capital.
First, I feel like you are conflating two issues here. You start and finish by talking about PR, but in the middle you argue the importance of the future. I think it’s important to separate these two issues to avoid confusion, so I’ll just discuss the PR angle.
I disagree and think there’s a smallish but significant risk of PR badness here. From my experience talking to even my highly educated friends who aren’t into EA, they find it very strange that money is invested into researching the welfare of future AI minds at all and often flat out disagree that money should be spent on that. That indicates to me (weakly from anecdata) that there is at least some PR risk here.
I also think there are pretty straightforward framings like “millions poured into welfare of robot minds which don’t even exist yet” which could certainly be bad for PR. If I were anti-EA, I could write a pretty good hit piece about rich people in Silicon Valley prioritizing their digital mind AI hobby horse ahead of millions of real minds that are suffering right now.
What are your grounds for thinking that this has an almost insignificant chance of being “PR costly”?
I also didn’t like this comment because it seemed unnecessarily arrogant, and also dismissive of the many people working in areas not defunded, who I hope you would consider at least part of the heart of the wonderful EA intellectual ecosystem.
“defund from the heart of the intellectual community that is responsible for the vast majority of impact of this ecosystem,”
That said, I probably do agree with this...
“An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path”
But I don’t want to conflate that with the PR risk....
For what it’s worth, as a minor point, the animal welfare issues I think are most important, and the interventions I suspect are the most cost-effective right now (e.g. shrimp stunning), are basically only fundable because of EA being weird in the past and willing to explore strange ideas. I think some of this does entail genuine PR risk in certain ways, but I don’t think we would have gotten most of the most valuable progress that EA has made for animal welfare if we paid attention to PR between 2010 and 2021, and the animal welfare space would be much worse off. That doesn’t mean PR shouldn’t be a consideration now, but as a historical matter, I think it is correct that impact in the animal space has largely been driven by comfort with weird ideas. I think the new funding environment is likely a lot worse for making meaningful progress on the most important animal welfare issues.
The “non-weird” animal welfare ideas that are funded right now (corporate chicken campaigns and alternative proteins?) were not EA innovations and were already being pursued by non-EA animal groups when EA funding entered the space. If these are the best interventions OpenPhil can fund due to PR concerns, animals are a lot worse off.
I personally would rather more animal and global health groups distanced themselves from EA if there were PR risks, than EA distancing itself from PR risks. It seems like groups could just make determinations about the right strategies for their own work with regard to PR, instead of there being top-down enforcement of a singular PR strategy, which I think is likely what this change will mostly cause. E.g. I think that the EA-side origins of wild animal welfare work are highly risky from a PR angle, but the most effective implementation of them, WAI, both would not have occurred without that PR-risky work (extremely confident), and is now exceedingly normal / does not pose a PR risk to EA at all (fairly confident) nor does EA pose one to it (somewhat confident). It just reads as a normal wild animal scientific research group to basically any non-EA who engages with it.
Thanks for the reply! I wasn’t actually aware that animal welfare has run into major PR issues. I didn’t think the public took much interest in wild animal or shrimp welfare. I probably missed it but would be interested to see the articles / hit pieces.
I don’t think how “weird” something is necessarily correlates to PR risk. It’s definitely a factor but there are others too. For example buying Wytham Abbey wasn’t weird, but appeared to many in the public at least inconsistent with EA values.
I don’t think these areas have run into PR issues historically, but they are perceived as PR risks.
I agree that I make two separate points. I think evaluating digital sentience seems pretty important from a “try to be a moral person” perspective, and separately, I think it’s just a very reasonable and straightforward question to ask that I expect smart people to be interested in and where smart people will understand why someone might want to do research on this question. Like, sure, you can frame everything in some horribly distorting way, and find some insult that’s vaguely associated with that framing, but I don’t think that’s very predictive of actual reputational risk.
Most of the sub-cause areas that I know about that have been defunded are animal welfare priorities. Things like insect suffering and wild animal welfare are two of the sub-cause areas that are getting defunded, which I both considered to be among the more important animal welfare priorities (due to their extreme neglectedness). I am not being dismissive of either global health or animal welfare people, they are being affected by this just as much (I know less about global health, and my sense is the impact of these changes are less bad there, but I still expect a huge negative chilling effect on people trying to think carefully about the issues around global health).
Specifically with digital minds, I still disagree that it’s a super unlikely area to be a PR risk. To me it seems easier than other areas to take aim at; the few people I’ve talked to about it find it more objectionable than other EA stuff I’ve talked about, and there seems to me to be some prior risk, as it could be associated with other longtermist EA work that has already taken PR hits.
Thanks for the clarification about the defunded areas; I just assumed it was only longtermist areas being defunded. My bad, I got that wrong. Have corrected my reply.
Would be good to see an actual list of the defunded areas...
Do you think that these “PR” costs would be mitigated if there were more large (perhaps more obscure) donors? Also, do you think that “weird” stuff like artificial sentience should be funded at all or just not by Good Ventures?
[edit: see this other comment by Dustin]
Yes, I’m explicitly pro-funding by others. Framing the costs as “PR” limits the way people think about mitigating costs. It’s not just “lower risk” but more shared responsibility and energy to engage with decision making, persuading, defending, etc.
@Dustin Moskovitz I think some of the confusion is resulting from this:
In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
Then, people started challenging things like “how much should reputational risk to the EA movement matter and what really are the second-order effects of things like digital minds research.”
Then, I was expecting you to just say something like “yeah, we probably disagree on the importance of reputation and second-order effects.”
But instead, it feels (to me) like you kind of backtracked and said “no actually, it’s not really about reputation. It’s more about limited capacity– we have finite energy, attention, stress, etc. Also shared responsibility.”
It’s plausible that I’m misunderstanding something, but it felt (at least to me) like your earlier message made it seem like PR/reputation was the central factor and your later messages made it seem like it’s more about limited capacity/energy. These feel like two pretty different rationales, so it might be helpful for you to clarify which one is more influential (or present a clearer synthesis of the two rationales).
(Also, I don’t think you necessarily owe the EAF an explanation– it’s your money etc etc.)
>> In my reading of the thread, you first said “yeah, basically I think a lot of these funding changes are based on reputational risk to me and to the broader EA movement.”
I agree people are paraphrasing me like this. Let’s go back to the quote I affirmed: “Separately, my guess is one of the key dimensions on which Dustin/Cari have strong opinions here are things that affect Dustin and Cari’s public reputation in an adverse way, or are generally “weird” in a way that might impose more costs on Dustin and Cari.”
I read the part after “or” as extending the frame beyond reputation risks, and I was pleased to see that and chose to engage with it. The example in my comment is not about reputation. Later comments from Oliver seem to imply he really did mean just PR risk so I was wrong to affirm this.
If you look at my comments here and in my post, I’ve elaborated on other issues quite a few times and people keep ignoring those comments and projecting “PR risk” on to everything.
I feel incapable of being heard correctly at this point, so I guess it was a mistake to speak up at all and I’m going to stop now. [Sorry I got frustrated; everyone is trying their best to do the most good here.] I would appreciate it if people did not paraphrase me from these comments and instead used actual quotes.

I want to echo the other replies here, and thank you for how much you’ve already engaged on this post, although I can see why you want to stop now.
I did in fact round off what you were saying as being about PR risk yesterday, and I commented as such, and you replied to correct that, and I found that really helpful—I’m guessing a lot of others did too. I suppose if I had already understood, I wouldn’t have commented.
At the risk of overstepping or stating the obvious:
It seems to me like there’s been less legibility lately, and I think that means that a lot more confusion brews under the surface. So more stuff boils up when there is actually an outlet.
That’s definitely not your responsibility, and it’s particularly awkward if you end up taking the brunt of it by actually stepping forward to engage. But from my perspective, you engaging here has been good in most regards, with the notable exception that it might have left you more wary to engage in future.
Ah, gotcha. This makes sense– thanks for the clarification.
I’ve looked over the comments here a few times, and I suspect you might think you’re coming off more clearly than you actually are. It’s plausible to me that since you have all the context of your decision-making, you don’t see when you’re saying things that would genuinely confuse others.
For example, even in the statement you affirmed, I see how, if one is paying attention to the “or”, one could see you technically only/primarily endorsing the non-PR part of the phrase.
But in general, I think it’s pretty reasonable and expected that people ended up focusing on the PR part.
More broadly, I think some of your statements have been kind of short and open to many interpretations. E.g., I don’t get a clear sense of what you mean by this:
I think it’s reasonable for you to stop engaging here. Communication is hard and costly, misinterpretations are common and drain energy, etc. Just noting that– from my POV– this is less of a case of “people were interpreting you uncharitably” and more of a case of “it was/is genuinely kind of hard to tell what you believe, and I suspect people are mostly engaging in good faith here.”
Sorry to hear that, several people I’ve spoken to about this offline also feel that you are being open and agreeable and the discussion reads from the outside as fairly civil, so except perhaps with the potential heat of this exchange with Ollie, I’d say most people get it and are happy you participated, particularly given that you didn’t need to. For myself, the bulk of my concern is with how I perceive OP to have handled this given their place in the EA community, rather than my personal and irrelevant partial disagreement with your personal funding decisions.
[edited to add “partial” in the last sentence]
Noooo, sorry you feel that way. T_T I think you sharing your thinking here is really helpful for the broader EA and good-doer field, and I think it’s an unfortunate pattern that online communications quickly feels (or even is) somewhat exhausting and combative.
Just an idea, maybe you would have a much better time doing an interview with e.g. Spencer Greenberg on his Clearer Thinking podcast, or Robert Wiblin on the 80,000 Hours podcast? I feel like they are pretty good interviewers who can ask good questions that make for accurate and informative interviews.
To be clear, I definitely didn’t just mean PR risks! (Or I meant them in a way that was intended in a quite broad way that includes lots of the other things you talk about.) I tried to be quite mindful of that in, for example, my latest comment.
Can you give an example of a non-PR risk that you had in mind?
I wonder if this is an argument for Good Ventures funding more endowments. If they endow a re-granter that funds something weird, they can say “well the whole point of this endowment was to diversify decision-making; it’s out of our hands at this point”. From the perspective of increasing the diversity of the funding landscape, a no-strings endowment seems best, although it could have other disadvantages.
[By ‘endowment’ I’m suggesting a large, one-time lump sum given to a legally independent organization. That organization could choose to give away the endowment quickly and then dissolve, or give some legally mandated minimum disbursement every year, or anything in between.]
>> If they endow a re-granter that funds something weird, they can say “well the whole point of this endowment was to diversify decision-making; it’s out of our hands at this point”.
I proposed this myself at one point, and the team politely and quite correctly informed me that projecting this response from critics was naive. We are ultimately responsible for the grants downstream of our decisions in the eyes of the world, regardless of who made intermediate decisions.
As an example of how this has played out in practice, we’re known (and largely reviled) locally in San Francisco for supporting the controversial DA Chesa Boudin. In fact, I did not even vote for him (and did not vote in the recall), and the only association is the fact that we made a grant in 2019 to a group that supported him in later years, for totally unrelated work. We made a statement to clarify all this, which helped a little on the margin, but did not substantially change the narrative.
Just to clarify, since I commented in a sibling comment. I agree with the above and think that Good Ventures would still be reputationally on the hook for what an endowment would fund (edit: and generally think that just putting more layers between you and a thing you want to support but not be associated with, in order to reduce the risk of association, is a tool that comes with large negative externalities and loss of trust).
The reason why I think it would help is because it would nevertheless genuinely increase funder diversity in the ecosystem, which is something that you said you cared about. I do think that might still expose your and Cari’s reputation to some risk, which I understand is something you also care a bunch about, but I don’t think is a good argument on altruistic grounds (which is fine, it’s your money).
Like I said, I proposed it myself. So I’m sympathetic to the idea, and maybe we’ll come back to it in some years if it truly becomes impossible to achieve real plurality.
Yes, I think funding an endowment would have been a great thing to do (and something I advocated for many times over the years). My sense is that it’s too late now.
(Mostly agreeing)
I feel like:
1. I haven’t seen any writing about how disagreements between Dustin/Cari, and other OP execs, have changed priorities. (Or how other “political” considerations changed priorities)
2. I’m sure that there have been disagreements between them, that have changed priorities.
3. I would naively expect many organizations to sweep “changes made just because some other exec wanted them” under the rug of some other stated reason, like, “Well, we have a lot of uncertainty on this topic.” Likewise, I don’t trust the reasons that many organizations give for many of their non-obvious high-level decisions.
Therefore, I think it’s pretty natural to conclude that there’s probably something funky going on, as I’d similarly expect for other institutions. That some/many of the reasons for high-level decisions are political instead of epistemic.
I’d similarly assume that many high-level people at OP would like to signal these differences, but it would be difficult for them to do so (as is usually the case), so wouldn’t mind EAs making conclusions like this.
That said—if it is the case that there are important political reasons for things, I think it would be really useful if people at OP could signal that more, in some fashion.
Like, “Again, we want to remind people that many of our high-level assessments are made in ways specific to opinions of Dustin, Cari, and OP execs, often opinions we expect that other EAs would disagree with. Many of these opinions are private. So please don’t assume that the conclusions we find should mirror ones that others would conclude on.”
I’ve heard from a few people who have taken some of OP’s high-level prioritization far too seriously as a conclusive epistemic take, in my opinion. Like, “OP has split its top-level budget this way, so I assume that I’d also conclude that for my own spending or time.”
I wrote at length about my views on epistemic confidence here https://medium.com/@moskov/works-in-progress-the-long-journey-to-doing-good-better-9dfb68e50868
Kudos for commenting here, and in the rest of this thread!
Just fyi, I find your comments in threads like these a lot more informative than blog posts like the one you linked to.
I think that blog post is reasonable, but it is fairly high-level, and I find that the devil is typically in the details. I feel like I've seen other people, both good and bad, post high-level epistemic takes that seemed good to me, so I'm just not sure how much I can take away from posts like that specifically.
But comments to questions explaining specific decisions is something I find quite useful!
I’m not detailing specific decisions for the same reason I want to invest in fewer focus areas: additional information is used as additional attack surface area. The attitude in EA communities is “give an inch, fight a mile”. So I’ll choose to be less legible instead.
As a datapoint (which you can completely ignore), I feel like in the circles I travel in, I've heard a lot more criticisms of OP that look more like "shady non-transparent group that makes huge decisions/mistakes without consulting anyone except a few Trusted People who all share the same opinions."
There are certainly some cases in which the attack surface is increased when you’re fully open/transparent about reasoning.
But I do think it can be easy to underestimate the amount of reputational damage that OP (and you, by extension) take from being less legible/transparent. I think there’s a serious risk that many subgroups in EA will continue to feel more critical of OP as it becomes more clear that OP is not interested in explaining its reasoning to the broader community, becomes more insular, etc. I also suspect this will have a meaningful effect on how OP is perceived in non-EA circles. I don’t mean e/accs being like “OP are evil doomers who want to give our future to China”– I mean neutral third-parties who dispassionately try to form an impression of OP. When they encounter arguments like “well OP is just another shady billionaire-funded thing that is beholden to a very small group of people who end up deciding things in non-transparent and illegible ways, and those decisions sometimes produce pretty large-scale failures”, I expect that they will find these concerns pretty credible.
Caveating that not all of these concerns would go away with more transparency and that I do generally buy that more transparency will (in some cases) lead to a net increase on the attack surface. The tradeoffs here seem quite difficult.
But my own opinion is that OP has shifted too far in the “worry a lot about PR in the conventional sense” direction in ways that have not only led to less funding for important projects but also led to a corresponding reduction in reputation/status/prestige, both within and outside of EA circles.
Thanks for explaining your position here!
(Again, feel free to stop this at any time)
> The attitude in EA communities is “give an inch, fight a mile”. So I’ll choose to be less legible instead.
I assume you meant that EAs are the ones fighting OP, with things like poor comments?[1]
If you feel that way, that seems intensely bad, for both you and the rest of the community.
I’m obviously saddened that reactions from this community seem to have been so frustrating. It also generally seems unhealthy to have an epistemic environment like this.
I’m really curious about ways to make these communications go better. My impression is that:
1. Communication is very important.
2. Most (though not all) people in OP+EA are very reasonable and altruistic.
3. It is often the case that posting important content on the EA Forum is unpleasant. Sometimes the most upset people might respond, for instance, and upvotes/downvotes can be scary. I know a lot of people who very much avoid this.
This doesn't seem like it should be an impossible knot to me. Even if it cost $10M+ to hire some professional business coaches or something, that seems very likely worth it to me. We could move key conversations to whatever platforms might be smoother (conversations, podcasts, AMAs, private or public, etc.).
I’m not sure if I could be useful here, but definitely happy to help if there are ideas. If there are suggestions you have for me or community members to make things go better, I’d be eager.
I personally strongly care about communication between these parties feeling open and not adversarial—I like and value both these groups (OP + EA) a lot, and it really seems like a pity (a huge amount of EV lost) if communication is seriously strained.
[1](It could also mean external journalists anti-EA doing the fighting, with blog posts)
Nice points, Ozzie! For reference, Alex wrote:
I asked Alex about the above 3 months ago:
There was no answer.
[Edit: I’m not an expert in digital minds at all, don’t take too much from this!]
Minor point, but I'm still really not convinced by research into digital minds. My personal model is that most of that can be figured out post-AGI, and that we should first model epistemic lock-in before we worry about it in specific scenarios. Would be happy to see writing explaining the expected value! (That said, I'm not really making decisions in this area, just a passing observer.)
Update: I see above that Oliver seems to really be bought into it, which I didn’t realize. I would flag that I’m very uncertain here—there definitely could be some strong reasons I’m not aware of.
Also, this isn’t “me not liking weird things”—I’m a big fan of weird research, just not all weird research (and I’m not even sure this area seems that weird now)
Hi Ozzie,
One could also have argued for figuring out farmed animal welfare after cheap animal food (produced in factory farms) was widely available? Now that lots of people are eating factory-farmed animals, it is harder to roll back factory farming.
Not sure if this helps, but I currently believe:
1. Relatively little or no AI suffering will happen, pre-AGI.
2. There’s not going to actually be much lock-in on this, post-AGI.
3. When we get to AGI, we'll gain much better abilities to reason through these questions. (This makes it different from the "figuring out animal welfare" claim.)
Commenting just to encourage you to make this its own post. I haven’t seen a (recent) standalone post about this topic, it seems important, and though I imagine many people are following this comment section it also seems easy for this discussion to get lost and for people with relevant opinions to miss it/not engage because it’s off-topic.
Apparently there will be a debate week about this soon! I hope that it covers territory similar to what I'm thinking (which I assumed was fairly basic). It's very possible I'll be convinced to the other side; I look forward to the discussion.
I might write a short post if it seems useful then.
Some quick takes on this from me: I agree with 2 and 3, but it's worth noting that "post-AGI" might be "2 years after AGI, while there is a crazy singularity ongoing and vast numbers of digital minds".
I think as stated, (1) seems about 75% likely to me, which is not hugely reassuring. Further, I think there is a critical time you’re not highlighting: a time when AGI exists but humans are still (potentially) in control and society looks similar to now.
I think there is a strong case for work on making deals with AIs and investigating what preferences AIs have (if any) for mitigating AI takeover risk. I think paying AIs to reveal their misalignment and potentially to work for us and prevent AI takeover seems like a potentially very promising intervention.
This work is strongly connected to digital minds work.
Further, I think there is a substantial chance that AI moral patienthood becomes a huge issue in coming years and thus it is good to ensure that field has better views and interventions.
I’m pretty skeptical of this. (Found a longer explanation of the proposal here.)
An AI facing such a deal would be very concerned that we're merely trying to trick it into revealing its own misalignment (which we'd then try to patch out). It seems to me that it would probably be a lot easier for us to trick an AI into believing that we're honestly presenting it such a deal (including by directly manipulating its weights and activations) than to actually honestly present such a deal and in doing so cause the AI to believe it.
I agree with this part.
I'm hoping this doesn't happen anytime soon. This assumes that AIs would themselves own property and be seen as having legal personhood or similar.
My strong guess is that GPT4 to GPT7 or so won’t have much sentience, and won’t have strong claims to owning property (unless weird political stuff happens).
I’m sure it’s theoretically possible to make beings that we’d consider as important to humans or more, I just don’t expect these LLMs to be those beings.
Hmm, I think the time when deals with AIs are important is pretty much “when AIs pose serious risk via misalignment”. (I also hope this isn’t soon all else equal.) Even if such AIs have absolutely no legal rights at this time, it still seems like we can make deals with them and give them assets (at least assets they will eventually be able to use). E.g., make a foundation which is run by AI obligation honoring purists with the mission of doing what the AI wants and donate to the foundation.
This sounds like we're making some property ownership system outside regular US law, which seems precarious to me. The government really doesn't seem to like such systems (e.g. some bitcoin), in part because the government really wants to tax and oversee all important trade/commerce. It's definitely possible for individuals today to make small contracts with each other, ultimately not backed by US law, but I think the government generally doesn't like these—it just doesn't care because they are relatively small.
But back to the main issue:
This scenario seems to imply a situation where AIs are not aligned enough to be straightforward tools of their owners (in which case, the human owners would be the only agents to interact with), but not yet powerful enough to take over the world. Maybe there are just a few rogue/independent AIs out there, but humans control other AIs, so now we have some period of mutual trade?
I feel like I haven't seen this situation discussed much before. My hunch is that it's very unlikely this will be relevant, at least for a long time (and thus, it's not very important in the scheme of things), but I'm unsure.
I’d be curious if there’s some existing write-ups on this scenario. If not, and if you or others think it’s likely, I’d be curious for it to be expanded in some longer posts or something. Again, it’s unique enough that if it might be a valid possibility, it seems worth it to me to really probe at further.
(I don't see this as undermining my claim that "AI sentience" seems suspect, as I understand this to be a fairly separate issue. But it could be the case that this issue is just going under the title of "AI sentience" by others, in which case—sure—if it seems likely, that seems useful to study, perhaps mainly so that we could better avoid it.)
I want to express a few things.
First, empathy, for:
People who were receiving (or hoping for) funding from OP, and now won’t;
OP staff, navigating comms when there may be tensions between what they personally believe in, and their various duties (to donors and to grantees);
Cari and Dustin’s evident desire not to throw their weight around too much, but also not to be pushed into funding things they don’t properly believe in.
Second, a little local disappointment in OP. At some point in the past it seemed to me like OP was trying pretty hard to be very straightforward and honest. I no longer get that vibe from OP’s public comms; they seem more like they’re being carefully crafted for something like looking-good or being-defensible while only saying true things. Of course I don’t know all the constraints they’re under so I can’t be sure this is a mistake. But I personally feel a bit sad about it — I think it makes it harder for people to make useful updates from things OP says, which is awkward because I think a bunch of people kind of look to OP for leadership. I don’t think anything is crucially wrong here, but I’m worried about people missing the upside from franker communication, and I wanted to mention it publicly so others could express (dis)agreement and make it easier for everyone to get a sense of the temperature of the room. (I also get this vibe a little from the GV statement, especially in not discussing which areas they’re dropping, but it doesn’t matter so much since people less look to them for leadership; and I don’t get this vibe from Dustin’s comments on this thread, which seem to me to be straightforward and helpful.)
Third, some optimism. When I first heard this news I felt like things were somehow going wrong. Having read Dustin’s elaborations I more feel like this is a step towards things working as they should[1]. Over the last few years I’ve deepened my belief in the value of doing things properly. And it feels proper for things to be supported by people who wholeheartedly believe in the things (and if PR or other tangles make people feel it’s headachey to support an area, that can be a legitimate impediment to wholeheartedness). I think that if GV retreats from funding some areas, the best things there are likely to attract funding from people who more believe in them, and that feels healthy and good. (I also have some nervousness that some great projects will get hurt in the process of shaking things out; I don’t really have a view on Habryka’s claim that the implementation is bad.)
Fourth, a sense that perhaps there was a better path here? (This is very speculative.) It seems like a decent part of what Dustin is saying is that each different funding area brings additional bandwidth costs for the funder (e.g. necessity to think about the shape of the field being crafted; and about possible missteps that warrant funder responses). That makes sense. But then the puzzle is: given that so much of funding work is successfully outsourced to OP, why is it not working to outsource these costs too? If I imagine myself in Cari and Dustin’s shoes, and query my internal sense of why that doesn’t work, I get:
Some degree of failure-of-PR-insulation, as discussed in comment threads elsewhere on this post
(This also applies to responsibility for non-PR headaches)
Some feeling like OP may be myopically looking for the best funding opportunities, and not sufficiently taking responsibility for how their approach as a funder changes the landscape of things people are striving to do
And therefore leaving that responsibility more with GV
(It does sound from things Alexander has written elsewhere that OP is weighing PR concerns more now than in the past; I’m curious how much that’s driven by deference to Cari and Dustin)
That might be off, but it is sparking thoughts about the puzzle of wanting people to take ownership of things without being controlling, and how to get that. From a distance, it seems like there’s some good chance there might have been a relationship GV and OP could have settled into that they’d have been mutually happier with than the status quo (maybe via OP investing more in trying to be good agents of Cari and Dustin, and less backing their own views about what’s good?); although navigating to anything like that seems delicate, so even if I’m right I’m not trying to ascribe blame here.
Fifth, thoughts on implications for other donors:
If you believe in some of the work that isn’t being funded by GV any more, your dollars just got more valuable
It wouldn’t be surprising if it was correct for a bunch of people to shift a lot of their giving explicitly to things OP won’t support for donor-preference reasons
(I’d feel happier if OP or GV had shared both thoughts about what they weren’t funding, and thoughts about what the hazards in funding those areas were, so that other donors could make maximally informed decisions; but I’m currently assuming that that won’t happen, for some reason like “the impetus is basically from Cari and Dustin, and they don’t have the bandwidth to explain publicly”)
I say “a lot of their giving” rather than “all of their giving”, because of a feeling that for donor coordination/cooperation reasons, it may be a bit off to leave GV picking up the tab for everything that they’re willing to, even when others think these things may be excellent
I’d prefer other people to continue to give to excellent opportunities that come up when they see them; I just think that they should expect a larger proportion of these opportunities to come in areas GV won’t fund
If you liked being able to defer to OP (and don’t want to take responsibility for the donor oversight things yourself more directly), it’s possible it would be worth investigating the possibility of an OP-discretion fund, where donors can give to things which OP (or maybe individual OP staff members) chooses at their discretion
That is, given what I understand of Cari and Dustin’s views, it seems proper for them not to support these things. I’m not here taking a view on which parts of the object level they’re correct on.
Speculating on your point 4: The messaging so far has been framed as "Good Ventures is pulling out of XYZ areas; since OP is primarily funded by GV, they are also pulling out." But perhaps, if the sweet spot here is "Cari and Dustin aren't controlling EA but also are obviously allowed not to fund things they don't buy/want to," one solution would be to leave OP open to funding XYZ areas if a new funder appears who wants to partner with them to do so. This would, to me, seem to allow GV and OP to develop more PR and bandwidth breathing room between the two orgs over time.
My sense is that this is not what’s happening now. As in my other comment on this thread, I don’t want to reveal my sources because I like this account being anonymous, but I’m reasonably confident that OP staff have been told “we are not doing XYZ anymore” not “Cari and Dustin don’t want to fund XYZ anymore, so if you want to work on it, we need to find more funding.”
My suspicion (from some conversations with people who interact directly with OP leadership) is that it isn’t only Cari and Dustin who don’t want to support the dropped areas, but also at least some leadership at OP. If that’s right, it explains why they haven’t taken the approach I’m suggesting here, but not why they didn’t say so (perhaps connecting this back to your point 2).
Just flagging that I think “OP [is] open to funding XYZ areas if a new funder appears who wants to partner with them to do so” accurately describes the status quo. In the post above we (twice!) invited outreach from other funders interested in the some of these spaces, and we’re planning to do a lot more work to try to find other funders for some of this work in the coming months.
I am skeptical that a new large philanthropist would be well-advised by doing their grantmaking via OP (though I do think OP has a huge amount of knowledge and skill as a grantmaker). At least given my current model, it seems hard to avoid continuing conflict about the shared brand and indirect-effects of OP on Good Ventures.
I think any new donor, especially one that is smaller than GV (as is almost guaranteed to be the case), would end up still having their donations affect Good Ventures and my best understanding of the things Dustin is hoping to protect via this change.
If Dustin can’t communicate that he doesn’t endorse every aspect of, or can’t take responsibility for, everything that is currently funded through OP, I doubt that the difficult-to-track “from which source did OP spend this money” aspect of an additional donor would successfully avoid the relevant conflicts.
My best guess is that if OP attracts an additional large donor above ~$200M, it seems best for some people to leave OP and establish a new organization that the new donor could actually be confident will be independent of Dustin's preferences, while maintaining a collaborative relationship with OP to share thoughts and insights.
I am not super confident of this, but I don’t see a clear line that would allow a new donor to have confidence that OP staff isn’t still going to heavily take into account Dustin’s reputation and non-financial priorities in the recommendations to the new donor.
That could well be, but my experience was that having another foundation, like FTX, didn't insulate me from reputation risks either. I'm just another "adherent of SBF's worldview" to outsiders.
I’d like to see a future OP that is not synonymous with GVF, because we’re just one of the important donors instead of THE important donor, and having a division of focus areas currently seems viable to me. If other donors don’t agree or if staff behaves as if it isn’t true, then of course it won’t happen.
Yeah, makes sense.
This to be clear is the primary reason why in my model it is much much better for an additional donor to not be part of an institution that you have a huge amount of influence over.
It's going to be very hard for their actions to not reflect on you, and if they are worried that staff at their grantmaker will be unduly affected by that, the best way I see forward is for them to have a separate institution where, even if you are unhappy about their choices, you are not in a position to influence them as much.
My hope is that having other donors for OP would genuinely create governance independence as my apparent power comes from not having alternate funding sources*, not from structural control. Consequently, you and others lay blame on me even for the things we don’t do. I would be happy to leave the board even, and happy to expand it to diminish my (non-controlling) vote further. I did not want to create a GVF hegemony any more than you wanted one to exist. (If the future is a bunch of different orgs, or some particular “pure” org, that’s good by me too; I don’t care about OP aggregating the donors if others don’t see that as useful.)
But I do want agency over our grants. As much as the whole debate has been framed (by everyone else) as reputation risk, I care about where I believe my responsibility lies, and where the money comes from has mattered. I don’t want to wake up anymore to somebody I personally loathe getting platformed only to discover I paid for the platform. That fact matters to me.
* Notably just for the “weird” stuff. We do successfully partner with other donors now! I don’t get in their way at all, as far as I know.
Just chiming in to have more than Habryka’s view represented here. I think it’s not unreasonable in principle to think that OP and GV could create PR distance between themselves. I think it will just take time, and Habryka is being moderately too pessimistic (or, accurately pessimistic in the short term but not considering reasonable long-term potential). I’d guess many think-tank type organizations have launched on the strength of a single funder and come to advise many funders, having a distinct reputation from them—OP seems to be pursuing this more strongly now than in the past, and while it will take time to work, it seems like a reasonable theory of change.
I have updated, based on this exchange, that OP is more open to investment in non-GV-supported activities than previously came through. I’m happy to hear that.
This is eminently reasonable. Any opposition to these changes I've aired here is more about disappointment in some of the specific cause areas being dropped and the sense that OP couldn't continue on them without GV; I'm definitely not intending to complain about GV's decision, and overall I think OP attempting to diversify funding sources (and EA trying to do so as well) is very, very healthy and needed.
Thanks, I am glad that you are willing to do this and am somewhat relieved that your perspective on this seems more amenable to other people participating with stuff on their own terms than I was worried about. I am still concerned, and think it’s unlikely that other donors for OP would be capable of getting the appropriate levels of trust with OP for this to work (and am also more broadly quite worried about chilling effects of many types here), but I do genuinely believe you are trying to mitigate those damages and see many of the same costs that I see.
Yeah, I think that's life. Acts by omission are real acts, and while the way people are judged for them is different, and the way people try to react to them generally tends to be more fraught, there are of course many acts of omission that are worthy of judgement, and many acts of omission that are worthy of praise.
I don't think reality splits neatly into "things you are responsible for" and "things you are not responsible for", and I think we seem to have a deeper disagreement here about what this means for building platforms, communities, and societal institutions, which, if working correctly, practically always platform people their creators strongly disagree with or find despicable (I have found many people on LW despicable in many different ways, but my job as a discourse platform provider is to set things up to allow other people to form their own beliefs on that, not to impose my own beliefs on the community that I am supporting).
Neglecting those kinds of platforms, or pouring resources into activities that will indirectly destroy those platforms (like pouring millions of dollars into continued EA and AI Safety growth without supporting institutions that actually allow those communities and movements to have sane discourse, or to coordinate with each other, or to learn about important considerations), is an act with real consequences that of course should be taken into account when thinking about how to relate to the people responsible for that funding and corresponding lack of funding of their complement.
A manager at a company who overhires can easily tank the company if they try to not get involved with setting the right culture and onboarding processes in place and are absolutely to blame if the new hires destroy the company culture or its internal processes.
But again, my guess is we have deep, important and also (to me) interesting disagreements here, which I don’t want you to feel pressured to hash out here. This isn’t the kind of stuff I think one should aim to resolve in a single comment thread, and maybe not ever, but I have thought about this topic a lot and it seemed appropriate to share.
I’ve long taken for granted that I am not going to live in integrity with your values and the actions you think are best for the world. I’m only trying to get back into integrity with my own.
OP is not an abstraction, of course, and I hope you continue talking to the individuals you know and have known there.
To clarify, I did see the invitations to other funders. However, my perception was that those are invitations to find people to hand things off to, rather than to be a continuing partner like with GV. Perhaps I misunderstood.
I also want to be clear that the status quo you’re articulating here does not match what I’ve heard from former grantees about how able OP staff are to participate in efforts to attract additional funding. Perhaps there has been quite a serious miscommunication.
+1 to Alexander’s POV
This was also my impression. To the extent that the reason OP doesn't want to fund something is PR risks & energy/time/attention costs, it's a bit surprising that OP would partner with another group to fund it.
Perhaps the idea here is that the PR/energy/time/attention costs would be split between orgs? And that this would outweigh the costs of coordinating with another group?
Or it’s just that OP feels better if OP doesn’t have to spend its own money on something? Perhaps because of funding constraints?
I’m also a bit confused about scenarios where OP wouldn’t fund X for PR reasons but would want some other EA group to fund X. It seems to me like the PR attacks against the EA movement would be just as strong– perhaps OP as an institution could distance itself, but from an altruistic standpoint that wouldn’t matter much. (I do see how OP would want to not fund something for energy/capacity reasons but then be OK with some other funder focusing on that space.)
In general, I feel like communication from OP could have been clearer in a lot of the comments. Or OP could’ve done a “meta thing” just making it explicit that they don’t currently want to share more details.
But e.g. phrasing like this makes me wonder if OP believes it's communicating clearly and is genuinely baffled when commentators have (what I see as quite reasonable) misunderstandings or confusions.
I’m confused about what you’re saying here. It feels like maybe you’re conflating OP with GV? (Which may be functionally a reasonable approximation 99% of the time, but gets in the way of the point of the conversation here in the 1%.) e.g. at one point you say “better if OP doesn’t have to spend its own money”, but as far as I understand things, to a first approximation Open Philanthropy doesn’t have any money; rather, Good Ventures does.
oops yup— was conflating and my comment makes less sense once the conflation goes away. good catch!
In the opening post, Alexander invites people interested in funding research on the potential moral patienthood of digital minds to reach out to OP, which I took as indicative of continued interest.
This isn't true for any of the other sub-focus areas that will be exited, though, which I thought was strange. Given that nothing other than digital minds work was listed, how would any potential donors, or people who know potential donors, learn that OP is exiting things like invertebrate or wild animal welfare?
(I’m basing those sub-focus areas on the comments in this thread because of exactly this problem—I’m unclear if there’s more being exited or if those two are definitely part of it)
I assume that basically all senior EAs agree, and that they disprefer the current situation.
I’d flag though that I’m really not sure how well-equipped these nonprofits are to change this, especially without more dedicated resources.
Most nonprofit projects I see in EA seem to be funded approximately at-cost. An organization would get a $200k grant to do work that costs them approximately $200k to do.
Compare this to fast-growing businesses. These businesses often have significant sales and marketing budgets; sometimes these make up 20-60% of the business's costs. This is how these businesses expand. If these businesses charged at-cost, they would never grow.
I feel like it’s assumed that the nonprofits have the responsibility to find ways of growing, despite them not getting much money from donors to actually do so. Maybe it’s assumed that they’ll do this in their spare time, and that it will be very easy?
It seems very reasonable to me that if we think growth is possible and valuable, potentially 20-40% of OP money should go to fund this growth, either directly (OP directly spending to find other opportunities), or indirectly (OP gives nonprofits extra overheads, which are used for fundraising work). I’m curious what you and others at OP think this rough number should be, and what the corresponding strategies should be.
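To make the at-cost vs. growth-budget contrast above concrete, here is a toy sketch. All the numbers (the 30% growth share, the assumed 2x return per fundraising dollar, the $200k starting grant) are made-up assumptions for illustration, not claims about OP or any real organization:

```python
# Toy model (all parameters are illustrative assumptions): compare an org
# funded strictly at-cost with one given a fundraising/growth overhead
# whose returns compound into future budgets.

def project_budget(initial, years, growth_share=0.0, return_per_dollar=2.0):
    """Each year, `growth_share` of the budget goes to fundraising, and that
    spend is assumed to bring in `return_per_dollar` per dollar next year."""
    budget = float(initial)
    for _ in range(years):
        fundraising = budget * growth_share
        budget = (budget - fundraising) + fundraising * return_per_dollar
    return budget

at_cost = project_budget(200_000, years=5)                     # no growth budget
with_growth = project_budget(200_000, years=5, growth_share=0.3)

print(f"At-cost after 5 years:     ${at_cost:,.0f}")
print(f"With growth after 5 years: ${with_growth:,.0f}")
```

Under these assumed parameters the at-cost org's budget stays flat while the growth-funded org compounds; the point is only that the split between program and growth spending matters a lot over time, not that these specific numbers are right.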
I agree but I want to be clear that I don’t think senior EAs are innocent here. I agree with Habryka that this is a situation that was made by a lot of the senior EAs themselves who actively went all in on only two funders (now down to one) and discouraged a lot of attempts to diversify philanthropy.
I encouraged people against earning to give before (though I updated sharply after 2022), and I largely regret that move. (I don’t think of myself as senior, especially at the time, but I’m unusually vocal online so I wouldn’t be surprised if I had a disproportionate influence).
Due to my borderline forum addiction, you probably have a disproportionate influence on me, haha. Will probably never earn to give though, so no harm done here ;).
I don’t know about that, your other work sounds pretty great!
Yep, this also makes sense.
I imagine responsibility is shared, and also the opportunity to improve things from here is shared.
I don’t feel like I’ve witnessed too many cases of organizations discouraging attempts to diversify funding, but trust that you have.
I’m not thinking just discouraging attempts to diversify funding of one’s own org, but also discouraging earning to give, discouraging projects to bring in more donors, etc.
Yea, that seems bad. It felt like there was a big push a few years ago to make a huge AI safety researcher pipeline, and now I’m nervous that we don’t actually have the funding to handle all of that pipeline, for example.
For sure. Not only the lack of funding to handle the pipeline, but there seems to be increasing concern around the benefit-to-harm tradeoff of technical AI research too.
Perhaps, as with the heavy correction against earning to give a few years ago (which now looks like it was likely a mistake), there’s a lesson to be learned about not overcorrecting against the status quo in any direction too quickly...
One obvious way that EA researchers could help improve the situation is to use comments like these to highlight what is lacking, and try to discuss where to improve things. :)
is… that rot13′d for a reason? (it seemed innocuous to me)
I also hope this doesn’t come off as me being upset with OP / EA Funders. I think they’re overall great (compared to most alternatives), but also that they are very important—so when they do get something wrong, that’s a big deal and we should think through it.
Yeah, Cari and Dustin especially are a large part of what made a lot of this ecosystem possible in the first place, they seem like sincere people, and ultimately it’s “their money” and their choice to do with their money what they genuinely believe can achieve the most good.
I generally agree, but with reservations.
I think that Cari and Dustin’s funding has obviously created a lot of value. Maybe even ~60% of all Shapley-attributed EA value.
I personally don’t feel like I know much of what Cari and Dustin actually believe about things, other than that they funded OP. They both seem to have been fairly private.
At this point, I’m hesitant to trust any authority figure that much. “Billionaires in tech, focused on AI safety issues” currently has a disappointingly mixed track record.
‘ultimately it’s “their money” and their choice to do with their money what they genuinely believe can achieve the most good.’ → Sure, but this is also true of many[1] big donors. I think that all big donors should probably get more evaluation and criticism. I’m not sure which specific actions to take differently upon knowing “it’s their choice to do with their money what they genuinely believe can achieve the most good.”
“genuinely believe can achieve the most good” → Small point, but I’m sure some of this is political. “Achieve the most good” often means “makes the funder look better, arguably so that they can do more good later on.” Some funders pay for local art museums as a way of currying favor; EA funders sometimes pay for causes with particularly good optics for similar reasons. My guess is that the EA funders generally do this for ultimately altruistic reasons, but I’d admit that this set of incentives is pretty gnarly.
[1] Edited after someone flagged this. I had a specific reference class in mind; “all” was inaccurate.
Thinking about this more:
I think there’s some frame behind this, like,
”There are good guys and bad guys. We need to continue to make it clear that the good guys are good, and should be reluctant to draw attention to their downsides.”
In contrast, I try to think about it more like,
”There’s a bunch of humans, doing human things with human motivations. Some wind up producing more value than others. There’s expected value to be had by understanding that value, and understanding the positives/negatives of the most important human institutions. Often more attention should be spent in trying to find mistakes being made, than in highlighting things going well.”
Thanks for elucidating your thoughts more here.
I agree this would be a (very) bad epistemic move. One thing I want to avoid is disincentivizing broadly good moves because their costs are more obvious/sharp to us. There are of course genuinely good reasons to criticize mostly good but flawed decisions (actors like these are more amenable to criticism, so criticizing them is more useful; their decisions are also more consequential). And of course there are alternative framings where critical feedback is more clearly a gift, which I would want us to move more towards.
That said, all of this is hard to navigate well in practice.
Agreed! Ideally, “getting a lot of attention and criticism, but people generally are favorable”, should be looked at far more favorably than “just not getting attention”. I think VCs get this, but many people online don’t.
Looking through some of these comment threads, I wonder if we’re being perhaps a little too harsh and expecting a little too much of our funders at times. Being a funder is sacrificial and stressful. When I see comments from @Dustin Moskovitz like the ones below, I wonder if it might be better if we were a little more gentle and a little more kind in the way we have these discussions.
“I’m not detailing specific decisions for the same reason I want to invest in fewer focus areas: additional information is used as additional attack surface area. The attitude in EA communities is ‘give an inch, fight a mile’. So I’ll choose to be less legible instead.”
“It is the case that we are reducing surface area. You have a low opinion of our integrity, but I don’t think we have a history of lying as you seem to be implying here.”
“Indeed, ‘if I have the time’ is precisely the problem. I can’t know everyone in this community, and I’ve disagreed with the specific outcomes on too many occasions to trust by default.”
“AND there are other bandwidth issues: energy, attention, stress, political influence. Those are more finite than capital.”
But I could well be wrong and the debate could just be very robust and completely fine. Unsure...
Is more information available about the process and timetable of GV transitioning out of these subareas? Some of the word choice here (surprise, not better anticipating) makes it sound like the transition may be too quick.
Part of being a responsible funder working toward fostering a healthy ecosystem is not pulling the rug out from under departing grantees too quickly—especially where the change in direction has little or nothing to do with the merits of the grantees’ work, and where the funder’s departure will have significant systemic effects on a subarea. To be clear, I don’t think a funder is obliged to treat its prior funding decisions as some sort of binding precedent. But the funder’s prior actions will often have contributed to the pickle in which the subarea’s actors find themselves, and that usually calls for grace and patience during the exit transition.
I might analogize to the rules for laying off workers in the US; we expect more notice when there is a mass layoff. It makes sense because the presence of a lot of new jobseekers in the same line of work and geographic area makes life harder for those who lost their jobs in a mass layoff vs. an isolated/small layoff. So it likely is with grantees as well.
Are OP staff planning to make any comments about grants they would counterfactually have recommended to GV, absent this policy change?
My guess is that myself and other donors would potentially be interested in funding these.
Am I right in saying that OP will no longer be able to do what it thinks is most effective in some sense? That seems important to note.
An alternative theory is that OP doesn’t think these cause areas are the most effective ones but wants to create new buckets and has been told ‘no’, which seems less notable.
I think this is a complicated question—it’s always been the case that individual OP staff had to submit grants to an overall review process and were not totally unilateral decision makers. As I said in my post above, they (and I) will now face somewhat more constraints. I think staff would differ in terms of how costly they would assess the new constraints as being. But it’s true this was a GV rather than OP decision; it wasn’t a place where GV was deferring to OP to weigh the costs and benefits.
Come on, I am confident that if you did a survey of your staff you would find that people are expecting a huge shift and would meaningfully say that “yes, I think we will now be unable to make a non-trivial chunk of our most valuable grants via OP”. Of course OP has always faced some constraints here, but this is clearly a huge shift in your relationship to your OP-internal cost-effectiveness beliefs.
The question is inseparable from the lack of other donors. Of course it is true right now, because they have no one else to refer the grants to.
FWIW I certainly agree with “non-trivial”; “huge” is a judgment call IMO. We’ll see!
Totally! I agree that this situation would change again as soon as OP were to find other donors who would feel comfortable filling in the gaps, or deferring to OP in the way you had previously deferred your funding allocation to OP. But such a donor does not currently exist, and as such, the situation has very importantly changed, which is what I understood Nathan to be asking about.
I don’t think it’s true that no other donors exist for these areas. My understanding is Alexander and his colleagues are engaging some folks already and expect to get more inbounds now that this is better known.
It seems clear you actually do not want them to recruit donors for the grantees you’re focused on, which is ok, but there are also areas that have nothing to do with you.
I am pretty sure that the donors that Open Phil is talking to would not meaningfully undo the huge shift in OP’s relationship to its own cost-effectiveness estimates. Maybe you disagree here; I would be happy to bet about survey outcomes of OP staff.
I am not sure what you mean by this. I would love for Open Phil staff to find additional donors for domains I support. I also think that donor would then probably be well-advised to hire away some of the OP staff, or hire additional staff of their own, and my guess is they would end up with a very different relationship than you have to OP. But that doesn’t really bear on the question of whether I would want OP staff to try to recruit donors for these things. I would like to see more funding go to the stuff that I care about, including from OP.
Oh sorry I wasn’t speaking precisely enough—I only meant you wouldn’t want them working with OP and would advise them not to. I didn’t mean to put words in your mouth and I agree they could help recruit a donor to work with another group.
Ah, cool. Yeah, that makes sense. I think that’s my current position, though to be clear, I am far from confident on this, it’s just my best guess about how to navigate the (to me) very tricky seeming dynamics here.
Yes, I am quite confident that Open Philanthropy staff will no longer be in a position to fund the things they consider most cost-effective (though it seems good for someone at OP or GV to confirm this). This does indeed seem like a huge shift in how OP operates.